Deconvolution Challenge

Carlos Milovic

Today we start a new challenge to test our new deconvolution tool against the old ones. So, we ask you to give your best shot at the test images that we'll post here. The best results will be included here. Please try all the algorithms available right now (especially Constrained Least Squares or Wiener filters in RestorationFilter, and the regularized versions of Richardson-Lucy and Van Cittert in Deconvolution).
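For readers who want to experiment outside PixInsight, the classic Richardson-Lucy iteration mentioned above is only a few lines of NumPy. This is a minimal, unregularized sketch (not the regularized implementation in the Deconvolution tool), assuming a centered PSF with the same dimensions as the image:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Classic (unregularized) Richardson-Lucy deconvolution.
    psf must be centered and have the same shape as observed."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # PSF -> optical transfer function
    estimate = np.full_like(observed, observed.mean())  # flat positive start
    for _ in range(iterations):
        # Re-blur the current estimate and compare with the observation.
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        ratio = observed / np.maximum(blurred, eps)
        # Multiplicative correction using the adjoint (conjugate OTF).
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
        estimate *= correction
    return estimate
```

The multiplicative update preserves positivity, which is one reason Richardson-Lucy is popular for astronomical data; without regularization it also amplifies noise as iterations accumulate.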

Test 1: Bigradient image - without noise
This is a simple test, without noise to complicate things. Do your best to get sharp edges and as few ringing artifacts as you can.
Image: http://pteam.pixinsight.com/decchall/bigradient_conv.tif
PSF: http://pteam.pixinsight.com/decchall/psf.tif


Test 2: Bigradient image - with gaussian noise
To make things more difficult, let's add a bit of noise to the image: in this case, 10% Gaussian noise. Please do not use any noise reduction algorithms here.
Image: http://pteam.pixinsight.com/decchall/bigradient_convnoise.tif
PSF: http://pteam.pixinsight.com/decchall/psf.tif


Good luck, and thanks for giving it a try!!!
 
Hi Carlos, I think that it is almost impossible with the current tools to recover the original information of the image  >:D

Without noise, I applied a convolution to reduce the line artifacts generated by applying Van Cittert in Deconvolution without regularization.
With noise I also used Van Cittert, but this time with regularization.
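The Van Cittert iteration mentioned above is even simpler than Richardson-Lucy: each step adds back the residual between the observation and the re-blurred estimate. A minimal NumPy sketch (plain Van Cittert, not PixInsight's regularized implementation; the PSF is assumed centered and the same shape as the image):

```python
import numpy as np

def van_cittert(observed, psf, iterations=20, beta=1.0):
    """Plain Van Cittert iteration: u <- u + beta * (observed - psf (*) u)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # PSF -> optical transfer function
    estimate = observed.copy()
    for _ in range(iterations):
        # Re-blur the estimate and add back the (scaled) residual.
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        estimate = estimate + beta * (observed - reblurred)
    return estimate
```

A regularized variant damps this correction term in low-signal areas (for instance by wavelet thresholding of the residual, as in PixInsight's Deconvolution tool) so that noise is not amplified along with the edges.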

Waiting for the new tool!!!  :)

Saludos, Alejandro.
 

Attachments

  • bigradient_conv.png (2.6 KB)
  • bigradient_convnoise.png (63 KB)
Thank you for your results, Alejandro! It is not much of a surprise that Van Cittert performed better here than the other algorithms (RRL and CLS/W). If we remember correctly, Van Cittert outperforms them on lunar images, which are quite similar to this test: strong edges and smooth, fairly constant features.

The new deconvolution tool, TGVRestoration, is currently in the optimization stage. Meanwhile, we are studying how the parameters behave and how it compares with our previous tools. As a taste of what is coming, here is the result of the first challenge (without noise). Please keep the results coming; I'll post the result of the second challenge in a couple of days.

bigradient_t1_tgvr.png


As you can see, there are only small artifacts at the corners of the square.
 
It looks promising !!!!!  :tongue:


Thanks for these exciting new features >:D

Please optimize quickly! I cannot wait 3 months
:laugh:
 
Hi Carlos, the only way to avoid saying more than I should about how excited I am with this new tool is to give a Spock-like answer: "Fascinating"

Saludos, Alejandro.
 
Carlos-
As someone who has found the deconvolution process in PI to be perhaps the most difficult to master, I can hope that the new deconvolution will be a little more bullet-proof for the more pedestrian user. I've never been able to get consistent results with it except for a few times where I hit the lottery on my parameter and mask settings. Most of the time I am unsuccessful and, if I deconvolve at all, have to use AIP4WIN Lucy-Richardson which, if nothing else, operates consistently and produces results for any image I give it. This is the last processing step I've never been able to do (consistently) in PI.
-Jeff
 
Here is the result of the second challenge, with TGVRestoration:

bigradient_t2_tgvr.png


There are more artifacts left than in the first challenge, but IMO this new tool still clearly outperforms the other algorithms.


Jeff, I had the same feeling about deconvolution in PI. It was really hard, especially setting the deringing parameters. This new deconvolution is not artifact- or ringing-free, but so far it yields much better results. If you are familiar with TGVDenoise, then this tool will be straightforward to use. The strength parameter behaves a bit differently, but in the end there are the same three relevant parameters (strength, edge protection and smoothness). I've also included the same deringing algorithm as in the previous deconvolution, since we cannot guarantee that the PSF model is accurate, so ringing may happen anyway... but so far in my tests I have not used this option. So, what you have seen here is just fine-tuning of the same three parameters as in TGVDenoise (given that you know/have the PSF).
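TGVRestoration's internals are not described in this thread, but the general idea of a gradient-based regularized deconvolution, balancing a data-fidelity term against an edge-preserving prior, can be illustrated with plain first-order total variation (TV). This is only a stand-in for the second-order TGV prior, and the parameter names (`lam` for regularization weight, `eps` for an edge-protection threshold) are mine, not the tool's:

```python
import numpy as np

def tv_deconvolve(observed, psf, iterations=200, step=0.5, lam=0.002, eps=0.01):
    """Gradient descent on 0.5*||K u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2).
    First-order TV sketch; the PSF must be centered and image-sized."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    u = observed.copy()
    for _ in range(iterations):
        # Data-fidelity gradient K^T (K u - f), computed in the Fourier domain.
        residual = np.real(np.fft.ifft2(np.fft.fft2(u) * otf)) - observed
        data_grad = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(otf)))
        # Smoothed-TV gradient: -div( grad u / |grad u|_eps ).
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * (data_grad - lam * div)
    return u
```

Increasing `lam` smooths more aggressively (at the cost of detail), while `eps` controls how strongly sharp gradients are exempted from smoothing, loosely analogous to the strength and edge-protection parameters discussed above.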

 
I CAN'T BELIEVE IT!!!!!!!  :surprised: :surprised: :surprised: :surprised: :surprised: :surprised: :surprised: :surprised: :surprised:

I can't wait to see it in action on planetary and deep sky images!

looking forward!

Thank you guys!
 
jeffweiss9 said:
...I've never been able to get consistent results with it except for a few times where I hit the lottery on my parameter and mask settings.

As the guy who wrote the Deconvolution tool eight years ago, I feel concerned by the implications of this assertion. Can you upload one of those images where you are getting inconsistent results?

if I deconvolve at all, have to use AIP4WIN Lucy-Richardson which, if nothing else, operates consistently and produces results for any image I give it.

Please realize that we are not amateurs, so this is not a hobby for us. Software development is our profession, and PixInsight is a professional software development project. Competing applications and their developers are not our "friends" or "buddies". Since we have to pay invoices and salaries we have to sell licenses, so there are no jokes here.

So naming other applications is not the most productive way to express an adverse opinion about our software on this forum. Asking specific questions, uploading data for evaluation, reporting bugs, requesting new features or just criticizing us constructively, are much more efficient options.
 
Juan-
  I just uploaded the most recent image I was unable to deconvolve properly (m8182LStk35cr_DBE.fit is the one, but I also uploaded a version prior to DBE) in the jweiss directory in the forum shared files. It is the result of 35 8-minute luminance subs of M81/M82 with IFN and SN2014J, run through BatchPreprocessing and a separate ImageIntegration. I didn't mean to offend you, although obviously my quoted statements had that effect. I would love to learn a methodology that a time- and patience-limited amateur like myself can handle. I am trying to achieve my goal of ridding my workflow of its 1 or 2 residual non-PI steps, since I do believe PI has, in general, the best algorithms out there, once I'm able to master them.
  Respectfully,
Jeff
 
I have a suggestion for the deconvolution process: in many of my images the right value for the deringing parameter is less than 0.01, but the slider has a precision of 0.01 units, so to select a smaller value I have to type it into the text box. I think that you should set a smaller range for the slider and increase its precision, i.e. min: 0, max: 0.2, increment: 0.001. That is only 200 steps.

Also, the default value is 0.1. This is way too much for most linear images. I have seen users complaining that deringing destroys their images, and I think it is because of this default. It would be better to have a default value that does too little than one that wrecks the image.
 
Jeff,

Thank you for uploading your data. This is a brief tutorial with your image. I have tried to follow a systematic step-by-step procedure to apply our deconvolution tools in a way that can yield consistent results for most deep-sky images.


Step 1: PSF Estimation

Our deconvolution tools (including the upcoming TGVRestoration tool) don't implement blind deconvolution algorithms. This means that an accurate estimate of the PSF of the image is crucial to perform a meaningful deconvolution. Of course, I don't need to say that these processes, PSF measurement and deconvolution, only make sense with linear data.

DynamicPSF is the tool of choice for PSF estimation in PixInsight:


I have selected a total of 52 stars. As you know, we normally don't need hundreds of stars to get a good PSF estimate. Just a few tens of carefully chosen stars (unsaturated, neither too bright nor too dim stars) are sufficient. From the 52 stars I have selected the best 40, after sorting the list by mean absolute difference.


Then I have obtained the PSF estimate as a new image with DynamicPSF's Export synthetic PSF option.
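The idea behind DynamicPSF, fitting an analytic profile to a set of well-chosen stars and synthesizing a PSF from the fitted parameters, can be sketched with SciPy. This is a simplified, hedged illustration using a circular Gaussian only (DynamicPSF also supports elliptical and Moffat profiles); the function names are mine, not PixInsight's:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amp, x0, y0, sigma, offset):
    """Circular 2D Gaussian model, flattened for curve_fit."""
    x, y = coords
    return (offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2)
                                  / (2 * sigma**2))).ravel()

def fit_star_sigma(cutout):
    """Fit a circular Gaussian to a single star cutout; return fitted sigma."""
    n = cutout.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    p0 = (cutout.max() - cutout.min(), n / 2, n / 2, 2.0, cutout.min())
    popt, _ = curve_fit(gaussian2d, (x, y), cutout.ravel(), p0=p0)
    return abs(popt[3])

def synthetic_psf(sigmas, size=15):
    """Median of per-star sigmas -> normalized synthetic Gaussian PSF."""
    sigma = np.median(sigmas)
    y, x = np.mgrid[0:size, 0:size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()
```

Using the median over a few tens of unsaturated stars (as recommended above) makes the estimate robust against outliers such as double stars or cosmic-ray hits.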



Step 2: Linear Mask

Deconvolution must only be applied to high-signal areas of the image. This clearly excludes the sky background and dim structures such as IFNs in this case. These marginal data components can be protected with a suitable mask. Normally this step is not necessary with our Deconvolution tool, since regularized algorithms are already very efficient at preserving nonsignificant structures, preventing noise amplification on these regions. However, I prefer to include a mask generation task in this case to provide you with a more complete example.

Typically, background protection masks are generated by applying nonlinear transformations, such as a histogram transformation with a non-neutral midtones balance, or a gamma stretch. This generates a nonlinear stretched version of the image that tends to protect dark regions. Actually, this is a conceptual error: If we want to protect image structures as a function of the noise-to-signal ratio, we need a mask where pixel values are a function of the signal, but with a nonlinear mask generated as described above, this cannot be guaranteed in general.

A linear mask, that is, a mask where the linearity of the original data is preserved, is an accurate and easy-to-build protection mask for this purpose. Linearity means that only linear operations must be applied to a duplicate of the image. We begin with a straight multiplication:


The constant 60 acts as an amplification factor. The brightest areas of the image become completely saturated (white), which means that they will be fully deconvolved. The next step is a linear histogram clip at the shadows:


and a convolution with a small Gaussian filter. This makes the mask more robust to local noise variations. The smoothing applied must not be too strong, or the mask will become inaccurate (for example, dim stars can become wrongly protected if the mask is too smooth).


Finally, the mask has to be activated for the target image:


As I have said, a protection mask is normally not necessary with our Deconvolution tool, since regularization of deconvolution algorithms already does the same job very efficiently. So this step is optional. Linear masks are however very efficient for noise reduction of linear images. In fact, I have implemented an integrated linear mask generation feature in the MultiscaleLinearTransform and MultiscaleMedianTransform tools that we have recently released. But let's stay on topic.
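The three linear-mask operations above (multiplication, shadows clip, small Gaussian blur) can be sketched as a single function. This is an illustrative NumPy/SciPy translation, not PixInsight code; the gain of 60 is the value used for this particular image, and the other defaults are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_mask(image, gain=60.0, shadow_clip=0.001, blur_sigma=2.0):
    """Linear protection mask: amplify, clip shadows, then smooth.
    All operations before the final blur are linear in the pixel values."""
    # Amplification: bright areas saturate to white (fully deconvolved).
    mask = np.clip(image * gain, 0.0, 1.0)
    # Linear histogram clip at the shadows (rescale [shadow_clip, 1] -> [0, 1]).
    mask = np.clip((mask - shadow_clip) / (1.0 - shadow_clip), 0.0, 1.0)
    # Small Gaussian blur for robustness to local noise variations.
    return gaussian_filter(mask, blur_sigma)
```

As noted above, `blur_sigma` must stay small: too much smoothing makes the mask inaccurate and can wrongly protect dim stars.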


Step 3: Local Deringing Support

A local deringing support is a special image used by the Deconvolution tool to drive a deringing routine that works at each deconvolution iteration to limit the growth of ringing artifacts. Although a deringing support image looks and is built like a mask, it is important to point out that it is not a mask and does not work as such.

For a deep-sky image, a local deringing support image can be built just as a star mask. In this case, instead of the StarMask tool (which is currently subject to a deep revision), I'll implement a step-by-step procedure based on multiscale analysis. We begin with a stretched duplicate of the image. I have just transferred STF parameters to HistogramTransformation.


The next step is a strong instance of HDRMultiscaleTransform. The purpose of this process is to flatten the image, so jump discontinuities (e.g., stars) can be isolated more easily in subsequent steps.


The starlet transform can be used to remove all large scale structures and high-frequency noise. This is achieved by disabling the residual and first layers, respectively.


A histogram stretch will intensify deringing support structures:


followed by a convolution to make them larger:


Finally, a sequence of histogram stretches and convolutions allows us to achieve the degree of protection required:


Basically, we want to protect bright stars and other high-contrast, small-scale structures, which are the structures where ringing becomes particularly problematic. To control deringing support generation, we can duplicate the image and activate the deringing support as a mask.
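The whole deringing-support pipeline (flatten, drop the noise layer and the large-scale residual, stretch, grow) can be crudely approximated with differences of Gaussians. This is only a rough stand-in for the HDRMultiscaleTransform and starlet steps described above, with parameter values that are my own assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deringing_support(stretched, noise_sigma=1.0, large_sigma=8.0,
                      boost=4.0, grow_sigma=2.0):
    """Isolate bright, small-scale structures (stars) from a stretched image.
    Crude difference-of-Gaussians stand-in for the multiscale procedure."""
    # Suppress pixel-scale noise (analogous to disabling the first layer).
    smooth = gaussian_filter(stretched, noise_sigma)
    # Remove large-scale structures (analogous to disabling the residual).
    small_scale = smooth - gaussian_filter(smooth, large_sigma)
    # Histogram stretch to intensify support structures.
    support = np.clip(small_scale * boost, 0.0, 1.0)
    # Convolution to grow the structures.
    return gaussian_filter(support, grow_sigma)
```

Repeating the stretch/convolve pair, as in the tutorial, lets you tune how far the protection extends around each star.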



Step 4: Deconvolution

We have now everything we need for deconvolution: a linear image with a reasonable amount of signal, a good local deringing support and, as an option, a linear mask that will provide additional protection to low-signal regions.

This is a preview on M82 before deconvolution:


and after deconvolution:


Note that ScreenTransferFunction has to be used to reveal significant structures within the brightest areas of the image, where we want to control how deconvolution is doing its work. This is the same comparison for M81, before deconvolution:


and after deconvolution:


The best way to learn how the most critical deconvolution parameters work is by comparing the results with and without them. This is a more stretched view of the deconvolved M81 preview with regularization:


and without regularization:


Note the noise intensification when we disable deconvolution regularization. In this case I have lowered the regularization threshold for the second wavelet layer; perhaps this has been an error.

This is what happens if we disable deringing:


and this is the result without local deringing:


Global deringing always degrades the result of deconvolution to some degree, so we must always try to find the smallest value of the global dark parameter that prevents ringing. If we use a good local deringing support, we can use an even smaller global dark value, because local deringing is very effective at preventing ringing around the brightest stars. The optimal values, as always, depend on the image.

Also take into account that a very small amount of ringing can actually be beneficial because it increases acutance, which leads to a higher visual perception of detail.
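One plausible way to picture how a global dark limit and a local deringing support interact, purely illustrative, this is not PixInsight's actual deringing routine, is a per-iteration clamp: the estimate may undershoot a pre-deconvolution reference by at most `global_dark`, and not at all where the support is fully active:

```python
import numpy as np

def dering(estimate, reference, global_dark=0.01, support=None):
    """Illustrative per-iteration deringing clamp (NOT PI's exact algorithm).
    Limits undershoot below the reference; support in [0, 1] tightens the
    limit locally, so support=1 (bright stars) allows no undershoot at all."""
    floor = reference - global_dark
    if support is not None:
        floor = floor + support * global_dark  # support=1 -> floor = reference
    return np.maximum(estimate, floor)
```

This picture matches the behavior described above: a smaller global dark degrades the result less, and a good local support lets you keep it small because the stars, where dark rings are worst, are protected separately.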


Step 5: Nonlinear Stretch

The next processing steps should include some noise reduction on the linear image after deconvolution. For this purpose MultiscaleLinearTransform, MultiscaleMedianTransform (especially the new median-wavelet algorithm) and TGVDenoise are the tools of choice. I'll skip these tasks and will go directly to the nonlinear stretch step. In this case I have just transferred STF to HistogramTransformation.


Step 6: HDR Compression

Some HDR compression will allow us to reveal the result of deconvolution on high signal areas. The HDRMultiscaleTransform tool yields a very nice result for this M81/M82 image:


This is a 1:1 view on M82:


and M81:



I hope this brief tutorial will help you to achieve better results with our deconvolution tools from now on. Let me know if you want more information on the applied processes, or more detailed descriptions of some processing steps.
 
Juan

That's a very useful decon walkthrough. One question: I realise the local support image is not a mask, but should it be inverted (white background) prior to being used in the Deconvolution tool?

Chris
 
That's just great, Juan.  I'll be studying it further in detail and I'm sure it will help a lot of folks.
Thanks very much.
-Jeff
 
chris.bailey said:
Juan

That's a very useful decon walkthrough. One question: I realise the local support image is not a mask, but should it be inverted (white background) prior to being used in the Deconvolution tool?

Chris

I'm not Juan, but I think that the right answer is NO, you do not have to invert the local deringing support: such a "mask" tells the deconvolution process how much deringing has to be applied locally.
You need much more deringing around the stars, so stars should be white on a dark background, where local deringing is not needed and only global deringing applies.

Is that correct, Juan?

bye

Edoardo
 
Very interesting deconv example, Juan, thank you.

Two things called my attention. First, the "unusual" way of creating a star mask for deringing support, which raises the question: what should we expect from the new/revised StarMask module?
Second: I typically build a support mask to cover only the bigger/brightest stars, but I see in your example that pretty much all stars are supported. On the positive side, I see that this prevents rings around them; on the negative side, they are (apparently) not reduced in size. In my experience with deconvolution, medium and small stars are nicely reduced (FWHM cut in half) if not supported, and without dark rings around them. In some cases, when such stars are embedded in a region with an even and relatively bright background, it is harder to come up with the proper deringing parameters if they are unsupported, and there is always a tradeoff.

Thoughts/comments?

Ignacio
 
Thanks Ignacio. The StarMask tool has three well-known problems:

- It is very slow.
- It requires a lot of trial/error work.
- It is not previewable.

These limitations make StarMask difficult to use in many practical cases, even with modern hardware. This obviously has to be addressed with a revision of the tool. If I remember correctly, I wrote StarMask at least seven years ago, so a complete redesign/reimplementation is in order.

Processing-wise, stars and other high-contrast, small-scale structures can be considered as singularities where most image processing algorithms fail or are not applicable. Consequently, having efficient tools to isolate stars is of the highest importance. We already have multiscale analysis tools that, if used creatively, allow for very efficient generation of star masks, and I just wanted to show a practical example in this tutorial. I probably was too exhaustive in the description of these techniques, and hence the generated deringing support is excessive. As you point out, the key here is to find a compromise between ringing suppression and deconvolution efficiency.
 