Deconvolution

wrichards

I'm trying to learn how to use the Deconvolution process properly. I've read the LightVortex tutorial and the chapters in the Mastering PixInsight book - several times - and cannot seem to get a good outcome. The result is always a major degradation of the original image.

I've been practicing on a stacked image of the Leo Triplet I took recently, strictly in the linear space. The stretched image looks like this for reference (I'm not using the stretched image with Decon):
Leo Triplet.JPG


This is the mask I created for the Deconvolution process:
DeconMask.JPG


This is the Star Mask I'm using for the Local deringing:
StarMask.JPG


I generated what I think is a good PSF model:
1617927329840.png


Here are the Deconvolution settings:
1617928051696.png


And this is the result of a preview box of the Hamburger Galaxy:
Result.JPG


Clearly, I am doing something terribly wrong but I don't know what. Is it my masks? Deconvolution settings? Any advice would be greatly appreciated.
 
Yes. Change the Global dark setting to 0.001. That 0.1 is one of the weirdest defaults in PixInsight. Then try again. :)
 
The Global dark ringing control is way too high... try cutting it in half, then try again; if it's still bad, cut it in half again and keep working downwards, etc.

Crossed posts with jpetruzzi... 0.001 might be too low, so it makes sense to try to converge on the right value.
 
As a first hint, start without deringing. Once you have found values that generate something interesting, address the ringing issues by adding your deringing mask, but set Global dark to much lower values than 0.1! It's usually far too high for my images.

Start with 0.0001 and increase by one order of magnitude if nothing happens, until you start to see the dark areas around stars getting brighter. At that point, fine-tune the Global dark value.
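
If you want something more objective than eyeballing the rings, a rough numpy sketch like the one below (not a PixInsight tool; the before/after arrays and the star position are my assumptions) can measure how deep the annulus around a star dips before and after deconvolution while you sweep Global dark.

# Hypothetical helper, not part of PixInsight: compare the faintest pixels in a
# small annulus around a star before and after deconvolution. A strongly negative
# result means the deconvolved annulus dipped below the original background,
# i.e. visible dark ringing, so more deringing (a higher Global dark) is needed.
import numpy as np

def ring_depth(before, after, x, y, r_in=4, r_out=10):
    # `before`/`after`: the linear image as 2-D arrays; (x, y): a star centre in pixels.
    yy, xx = np.ogrid[:before.shape[0], :before.shape[1]]
    r = np.hypot(xx - x, yy - y)
    annulus = (r >= r_in) & (r <= r_out)
    return float(after[annulus].min() - before[annulus].min())

# Coarse sweep first (0.0001, 0.001, 0.01, ...), then fine-tune around the value
# where ring_depth() stops going strongly negative on a few representative stars.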
 
The Global dark ringing control is way too high... try cutting it in half, then try again; if it's still bad, cut it in half again and keep working downwards, etc.

Crossed posts with jpetruzzi... 0.001 might be too low, so it makes sense to try to converge on the right value.
True. I like to start light and iterate my way in. It's always a balancing act.
 
Ah, I've always approached it from the high side instead of the low side, but it looks like at least you and robyx do it the other way... all good though.

rob
 
In addition to the previous recommendations, judging from your screenshot, you appear to be using a parametric PSF model instead of the external PSF that you seem to have created?

/Ralf
 
Ralf - Good catch, but that was just a screenshot goof. I was definitely using the custom-made PSF when I was doing the Decon. The screenshot just didn't have the correct option selected.

That Global Dark setting was definitely the problem. Somehow I missed this line in the LightVortex tutorial: "we need to tweak the Global dark setting under Deringing. The default setting of 0.1000 is generally too aggressive and values between 0.0100 and 0.0500 are generally best. I set mine to 0.0100." My bad on that.

I start to see artifacts on larger stars and the edges of the galaxies if I set my Global Dark value above 0.005 (even 0.0055). Is that about right or am I doing something wrong with my masks?

After deconvolution, I can barely see any difference in the image. Am I doing it incorrectly or is this just a bad image for application of Deconvolution since there is no nebulosity and the galaxies are rather bright and blurry?

Another question - When creating the PSF, the Mastering PixInsight book says to try to select stars that have a Moffat profile. The LightVortex tutorial says to select stars with amplitudes in the range of 0.25 to 0.75. But all my stars with a Moffat profile are dimmer and well below the range suggested by the LightVortex tutorial. All stars with an amplitude in the 0.25-0.75 range have a Gaussian profile. So should I aim for Moffat only (smaller stars with lower amplitudes), Gaussian only (moderately sized stars in that amplitude range), or a mixture?
 
try to select stars that have a Moffat profile

This makes no sense. No point spread model function is better or worse per se; one has to use the model functions that better fit the data. For this purpose the Auto PSF model option of the DynamicPSF tool is usually the best selection. To evaluate the reliability of PSF fits, use MAD values. These are robust and accurate estimates of the difference between each fitted PSF model and the actual sampled data. Always try to use stars with comparatively low MAD values.

We have recently added variable shape point spread model functions (VarShape) that can yield very accurate results. In many cases, if you enable the Automatic VarShape fits option in Auto mode, you'll see that variable shape functions tend to provide the lowest MAD estimates.

select stars with amplitudes in the range of 0.25 to 0.75

This can make some sense, or not, depending on fitted PSF models. Always avoid stars with peak values close to one, that is, don't use saturated stars or very bright stars. The reason is that fitting a PSF for a saturated star is problematic because the fitting routines cannot use well sampled data, which leads to inaccurate results that are not representative of the actual PSF of the image. For the same reason, always try to avoid too dim stars, where the signal-to-noise ratio is very low. Use moderately bright stars that are 'typical' of the image being measured.

In general, one does not need hundreds or thousands of PSF fits to characterize the true PSF of an image. A few (say less than 50) fits on carefully selected stars can provide optimal results in most cases.
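
For illustration, here is a minimal sketch of that idea in Python with astropy (this is not the DynamicPSF code; `cutout`, the initial guesses and the residual statistic are my assumptions): fit both a Gaussian and a Moffat model to the same star cutout and prefer whichever leaves the smaller absolute residuals.

# Minimal sketch, assuming `cutout` is a small 2-D numpy array centred on one
# unsaturated, moderately bright star extracted from the linear image.
import numpy as np
from astropy.modeling import models, fitting

def compare_psf_fits(cutout):
    ny, nx = cutout.shape
    yy, xx = np.mgrid[:ny, :nx]
    amp, cx, cy = float(cutout.max()), nx / 2.0, ny / 2.0

    candidates = {
        "Gaussian": models.Gaussian2D(amplitude=amp, x_mean=cx, y_mean=cy,
                                      x_stddev=2.0, y_stddev=2.0),
        "Moffat": models.Moffat2D(amplitude=amp, x_0=cx, y_0=cy,
                                  gamma=2.0, alpha=2.5),
    }
    fitter = fitting.LevMarLSQFitter()
    scores = {}
    for name, model in candidates.items():
        fitted = fitter(model, xx, yy, cutout)
        # Median absolute residual: a robust score in the spirit of the MAD
        # values DynamicPSF reports (lower means a better-fitting model).
        scores[name] = float(np.median(np.abs(cutout - fitted(xx, yy))))
    return scores

In practice you would repeat this over a few dozen carefully selected stars and keep the model family that consistently scores lowest.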
 
Am I doing it incorrectly or is this just a bad image for application of Deconvolution

From my experience, the answer to this question is that deconvolution is not applicable in more than 80% of cases. Deconvolution requires high SNR data to provide meaningful results. When you apply deconvolution to a good image, you immediately know it's working well because it is easy to apply and good results are easy to obtain. This happens when the implemented algorithms are working within expected parameters.
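
There is no hard numeric cutoff, but as a very rough sanity check you can compare the signal in the region you intend to deconvolve against the background noise. A hedged sketch follows (plain numpy/astropy, with `image` and `region` as assumed inputs, and no claim that any particular value is "enough"):

# Rough gauge only: sigma-clipped background statistics versus the median signal
# in the region of interest. A low ratio suggests deconvolution will mostly
# amplify noise rather than recover detail.
import numpy as np
from astropy.stats import sigma_clipped_stats

def rough_snr(image, region):
    # `image`: the linear stacked frame (2-D array); `region`: boolean mask over
    # the galaxy/nebula you want to sharpen. Both are hypothetical inputs.
    _, bg_median, bg_sigma = sigma_clipped_stats(image[~region], sigma=3.0)
    return (np.median(image[region]) - bg_median) / bg_sigma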
 
Just out of curiosity, what would be the composition of the 20% that you think would be good candidates for Deconvolution?

I ask because I just tried it on a clean, high quality image of M42 and the results are always worse than the original, no matter how many iterations I try, what Global Dark setting I use, or how I do the masking. I'm sure part of the problem is my lack of experience with Deconvolution, but it seems that all of the reputable sources (LightVortex, Mastering PixInsight) either lack information or perhaps provide some bad advice, as noted above.
 
Hi Bill,

Just out of curiosity, what would be the composition of the 20% that you think would be good candidates for Deconvolution?

It is just a matter of signal-to-noise ratio in the data. You need very high SNR to apply deconvolution in a significant way. Deconvolution is an ill-posed problem. Although our implementation works by separating the signal and noise components using wavelets and other sophisticated algorithms, these algorithms have limitations. I cannot give you this information numerically or quantitatively. Each image poses different problems and has different requirements.

I ask because I just tried it on a clean, high quality image of M42 and the results are always worse than the original

Deconvolution is not an image sharpening tool. This is a very important fact that cannot be overemphasized. It is a very special and specific task whose application must always have a physical justification. It must be applied to linear data with an accurate model of the PSF of the image. It must also be applied conservatively, and our implementation usually requires some trial/error work to fine tune regularization (noise reduction) and deringing parameters to achieve an optimal result. Deringing, that is, avoiding or masking the Gibbs phenomenon around singularities, such as stars and other jump discontinuities, is a very delicate task that often cannot be implemented to match our expectations. One usually has to find an acceptable compromise.
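
As a toy illustration of that last point (scikit-image's plain Richardson-Lucy, not PixInsight's regularized implementation): over-iterating the deconvolution of a single point source on a flat background produces exactly the kind of dark ring around the star that the deringing parameters then have to suppress.

# Toy example: blur a synthetic linear "star" with a known PSF, add mild noise,
# then deconvolve conservatively and aggressively and compare the ringing.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
truth = np.full((64, 64), 0.001)          # faint flat background, linear scale
truth[32, 32] = 0.8                       # one bright, unsaturated point source

yy, xx = np.mgrid[-7:8, -7:8]             # Gaussian stand-in for a measured PSF
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode="same")
blurred = np.clip(blurred + rng.normal(0.0, 1e-4, blurred.shape), 0.0, 1.0)

gentle = richardson_lucy(blurred, psf, 20)       # conservative iteration count
aggressive = richardson_lucy(blurred, psf, 200)  # over-iterated

# Faintest pixels in an annulus around the star: the over-iterated result
# typically dips well below the 0.001 background (ringing plus amplified noise).
yy, xx = np.mgrid[-32:32, -32:32]
annulus = (np.hypot(xx, yy) > 3) & (np.hypot(xx, yy) < 10)
print(gentle[annulus].min(), aggressive[annulus].min())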
 
OK, thanks again for the information, Juan. This is good to know.

I've seen a number of places (including here in the LightVortex tutorials) that suggest using it on a non-linear image to sharpen it. But other places state that it should only be used on linear images, so there is a lot of conflicting information out there about Deconvolution.
 
I have an image of M8 and M20 that consists of over 5 hours of total exposure time with sub arc-second tracking. I think the image quality (SNR) is pretty good and this might be a good image for sharpening using Deconvolution, so I thought I would give it a try. It took several iterations to get a good star mask to protect all the stars (even those within the nebulas) but I finally generated a mask that protects everything except the nebulosity.

I also spent a lot of time generating a good PSF and am using that in the Deconvolution process along with a good star mask for local deringing.

When I applied Deconvolution, most areas of the two nebulas improved (got sharper), but the fringe areas got worse (see some small preview window snapshots below). I've tried changing the Global Dark value from 0.001 to 0.01, the local amount from 0.3 to 1.0, iterations from 10-50, noise thresholds, and noise reduction levels. Nothing is improving those fringe areas. Is there something I'm overlooking or is this just another one of those images that isn't suitable for sharpening using Deconvolution?
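
For what it's worth, when a mask is active the deconvolved result is effectively blended with the original in proportion to the mask value, conceptually something like the plain numpy sketch below (not PixInsight code), which is why the transition zones at the edge of the nebulosity behave differently from the fully exposed core.

# Conceptual sketch of masked application: white mask pixels take the full
# deconvolved result, black pixels keep the original, grey pixels get a blend.
# `original`, `deconvolved` and `mask` are assumed 2-D arrays scaled to [0, 1].
import numpy as np

def apply_through_mask(original, deconvolved, mask):
    return mask * deconvolved + (1.0 - mask) * original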
 

Attachments

  • Decon Settings.jpg
  • Preview-Orig.JPG
  • Preview-Orig-Mask.JPG
  • Preview-Decon.JPG
Here are the "before" and "after" images of just the Trifid Nebula (~10% of the total image).
 

Attachments

  • Trifid-Before.JPG
  • Trifid-After.JPG
When I applied Deconvolution, most areas of the two nebulas improved (got sharper), but the fringe areas got worse... Is there something I'm overlooking or is this just another one of those images that isn't suitable for sharpening using Deconvolution?

The bright artifacts are caused by a Global dark deringing threshold that is too high. Try lowering it.
 