Comparison of noise reduction algorithms: The double gradient synthetic image

Juan Conejero

PixInsight Staff
Hi all,

Since we are discussing noise reduction, let me put together a more formal example to compare all the noise reduction algorithms that we have implemented in PixInsight. This time I'm going to use a well-known difficult problem: the double gradient synthetic image.

double_gradient.png

The double gradient consists of two concentric squares filled with opposite linear gradients. It can be generated very easily in PixInsight with the following PixelMath expression, executed globally to generate a new image, or locally on an image of the desired dimensions:

Code:
iif( X() < 0.25 || X() > 0.75 || Y() < 0.25 || Y() > 0.75, X(), 1 - 2*(X() - 0.25) )

When synthetic noise is added to this image, the result is an ill-posed problem for noise reduction, and is particularly well suited to show the weaknesses---and, by comparison, the strong points---of different noise reduction algorithms. Here is the double gradient with 25% uniform noise added with the NoiseGenerator tool in PixInsight. This is the initial image that I have used in this test:

double_gradient_noisy.png
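For anyone who wants to reproduce this starting point outside PixInsight, here is a minimal NumPy sketch of the same idea. The function names are hypothetical and the noise amplitude convention is an assumption; NoiseGenerator's uniform noise parameterization may differ slightly.

Code:
import numpy as np

def double_gradient(size=512):
    # Outer region: left-to-right gradient; inner square: the opposite gradient.
    x = np.linspace(0.0, 1.0, size, endpoint=False)
    X, Y = np.meshgrid(x, x)
    inner = 1.0 - 2.0 * (X - 0.25)
    inside = (X >= 0.25) & (X <= 0.75) & (Y >= 0.25) & (Y <= 0.75)
    return np.where(inside, inner, X)

def add_uniform_noise(img, amount=0.25, seed=0):
    # Zero-mean uniform noise scaled by 'amount' (an assumed convention).
    rng = np.random.default_rng(seed)
    return np.clip(img + amount * rng.uniform(-0.5, 0.5, img.shape), 0.0, 1.0)

noisy = add_uniform_noise(double_gradient(), amount=0.25)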

I have tried all the noise reduction tools available on the PixInsight platform, doing my best by fine-tuning parameters to achieve the best possible result in each case. Here are the denoised images, enlarged 2:1 without interpolation:

TGVDenoise
TGVD.png

ACDNR
ACDNR.png

MultiscaleMedianTransform
MMT.png

GREYCstoration
GREYCs.png

ATrousWaveletTransform
ATWT.png

And here is my interpretation of these results:

* TGVDenoise is the absolute winner. It has been able to recover the original gradients almost perfectly with minimal generation of artifacts. Its superior result is clear and admits no discussion. (A rough sketch of the TV/TGV idea follows this list.)

* ACDNR has been a nice surprise for me. I designed and implemented the ACDNR algorithm back in 2005. I know its weak points very well (it has many), but honestly, this result has been kind of like a good old friend shouting "hey, I'm still here!" in my face :)

* The multiscale median transform (MMT) is a powerful tool for denoising and ringing-free sharpening of linear and nonlinear images, as we have demonstrated many times (see for example here and here), but it has two main limitations: our implementation of MMT is not particularly good at reproducing sharp corners because it uses circular structuring elements to preserve isotropy, and median filters are not very good at reproducing smooth gradients (a rough sketch of the median decomposition follows this list). The next generation of MMT that we'll implement during the 1.8 cycle combines the à trous wavelet transform for smooth regions with the MMT for significant structures, taking the best from each algorithm, so it will pass this test much better.

* The GREYCstoration algorithm yields a decent result, but it generates significant artifacts around the edges of the inner square (also at the right border of the image), and does not remove the noise on smooth regions as efficiently as the preceding tools. I spent a long time trying to improve this result, but this is the best one I was able to achieve. Perhaps somebody with more practice (I admit I don't use this tool very often) would be able to get something better.

* The à trous ("with holes") wavelet transform, more recently also known as the starlet transform, is a fundamental processing workhorse of great efficiency where isotropy and smoothness are two characteristic properties of the data, as happens with most deep-sky astronomical images. The inability of ATWT to isolate strong small-scale structures---something that MMT can do much better---becomes evident in the double gradient problem: when enough wavelet coefficients are removed to yield a smooth result, the edges of the inner square cannot be preserved because they penetrate the whole transform like a gunshot. (A sketch of the à trous decomposition with k-sigma thresholding also follows below.)
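First, a minimal sketch of plain first-order TV (ROF) denoising by gradient descent, just to illustrate the family of methods TGVDenoise belongs to. This is not the actual TGVDenoise algorithm: TGV adds a second-order term precisely to avoid the staircasing that plain TV produces on linear gradients like these, which is part of why it handles this test so well. All names and parameters are illustrative.

Code:
import numpy as np

def tv_denoise(noisy, lam=0.15, n_iter=300, step=0.2, eps=1e-3):
    """Plain first-order TV (ROF) denoising by gradient descent on a
    smoothed TV term. Second-order TGV (as in TGVDenoise) additionally
    penalizes second derivatives, avoiding staircasing on gradients."""
    u = noisy.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])        # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - noisy) - lam * div)            # gradient step on the ROF energy
    return np.clip(u, 0.0, 1.0)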
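Next, a rough sketch of the multiscale median decomposition idea. It uses square median windows from SciPy rather than the circular structuring elements of the real MMT implementation, and the scale sequence is an assumption.

Code:
import numpy as np
from scipy.ndimage import median_filter

def mmt_layers(img, n_layers=4):
    """Multiscale median decomposition: each layer is the difference between
    successive median-filtered versions of the image."""
    layers, smooth = [], img.astype(float)
    for j in range(n_layers):
        size = 2 ** (j + 1) + 1              # 3, 5, 9, 17 px windows (assumed scales)
        s = median_filter(smooth, size=size)
        layers.append(smooth - s)            # detail layer at scale j
        smooth = s
    return layers, smooth                    # sum(layers) + smooth reconstructs img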
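And a compact sketch of the à trous (starlet) decomposition with a B3-spline kernel, plus a naive hard k-sigma thresholding of the wavelet layers. It is only meant to show the mechanism; the ATrousWaveletTransform tool offers much finer control, and the noise estimate and threshold used here are assumptions.

Code:
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], float) / 16.0       # B3-spline scaling function

def atrous_layers(img, n_layers=4):
    layers, smooth = [], img.astype(float)
    for j in range(n_layers):
        k = np.zeros(4 * 2 ** j + 1)
        k[::2 ** j] = B3                           # insert "holes" between kernel taps
        s = convolve1d(convolve1d(smooth, k, axis=0, mode='reflect'),
                       k, axis=1, mode='reflect')
        layers.append(smooth - s)                  # wavelet layer at scale j
        smooth = s
    return layers, smooth                          # sum(layers) + smooth reconstructs img

def atwt_ksigma_denoise(img, k=3.0, n_layers=4):
    layers, residual = atrous_layers(img, n_layers)
    out = residual
    for w in layers:
        sigma = np.median(np.abs(w)) / 0.6745      # robust (MAD) noise estimate
        out += np.where(np.abs(w) > k * sigma, w, 0.0)
    return out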

I have uploaded a PixInsight project that you can use to reproduce these results. If you manage to improve what I have done with some of the algorithms tested here, please let me know and post your parameters here.
 
It seems that high-contrast areas (sharp edges) are hard to denoise and not really indicative of how well an algorithm is suited for astrophotography, because they don't occur in a typical nebula or galaxy image.

Would it be possible to use a softer source image, add noise and then see how well each method does?

A great test though and impressive results, congrats!
 
Like Sander, I tested some tweaking of your settings.
For sure TGV is the best, followed by ACDNR!

But, although I got somewhat better results by leaving a little noise, as I like in astronomy images, I don't think the double gradient square image is similar to the challenge of a deep-sky astronomy image.

For example, on LINEAR images, noise reduction is best now with TGV, followed by ATWT (using only the k-sigma noise thresholding option).
For NON-LINEAR images, for sure TGV, then ACDNR.

 
Anyhow, I'm very glad to hear that TGV is performing so well. :)
I hope that our future developments will increase its power to even higher levels. Right now we are working on a multiscale approach that reconstructs the image by adding details as it processes the residual with varying strength/local support. The magic is in the automatic calculation of a local support based on local variances. I hope it performs well here too.
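(Purely to illustrate the local-variance idea mentioned above---this is a hypothetical sketch, not the implementation being developed---a per-pixel support map could look like this:)

Code:
import numpy as np
from scipy.ndimage import uniform_filter

def variance_support(img, size=7):
    """Hypothetical sketch: local variance via box filters, normalized to [0, 1]
    as a per-pixel support/strength map (not the actual TGVDenoise method)."""
    m = uniform_filter(img.astype(float), size)
    m2 = uniform_filter(img.astype(float) ** 2, size)
    var = np.maximum(m2 - m * m, 0.0)
    return var / (var.max() + 1e-12)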
 
Nocturnal said:
It seems that high-contrast areas (sharp edges) are hard to denoise and not really indicative of how well an algorithm is suited for astrophotography, because they don't occur in a typical nebula or galaxy image.

Would it be possible to use a softer source image, add noise and then see how well each method does?

Here is a completely different noise reduction problem, this time with a synthetic image generated with the SimplexNoise tool in PixInsight:

simplex.png

Simplex noise is a texture generation algorithm created by Ken Perlin in 2001. It is similar to Perlin noise, but much faster. I implemented a barebones simplex noise generation tool in PixInsight back in 2007. If you are interested in this topic, this slide show by the author is very helpful for understanding how all of this works, along with some historical background. By all means, exploring Ken Perlin's website is obligatory if you are interested in anything related to computer graphics.

Despite the fact that PixInsight's implementation of simplex noise is very basic, one can do things like this in a couple of minutes with the SimplexNoise and CurvesTransformation tools, plus the Spherize script:

planet-1.jpg

Returning to the subject of noise reduction, this is the simplex noise sample above with a mix of Gaussian and Poisson noise added with the NoiseGenerator tool, enlarged 2:1 without interpolation:

simplex_noisy.png

The noise has been added masked with the image itself. The result is an attempt to simulate the distribution of noise in a typical deep-sky image, with the purpose of testing the noise reduction algorithms on a smooth target. This simulation lacks small-scale image structures such as stars and ionization front edges, but my purpose is to provide a complementary test to the first one with the double gradient image.
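Here is a rough NumPy sketch of this kind of image-masked Gaussian + Poisson noise addition. The function name, the flux scale, and the Gaussian sigma are assumptions for illustration; NoiseGenerator applied through a mask is similar in spirit, but its exact parameterization differs.

Code:
import numpy as np

def add_masked_noise(img, gauss_sigma=0.05, flux_scale=500.0, seed=0):
    """Sketch: Gaussian + Poisson noise modulated ('masked') by the image
    itself, so brighter pixels receive proportionally more noise."""
    rng = np.random.default_rng(seed)
    poisson = rng.poisson(img * flux_scale) / flux_scale - img   # signal-dependent term
    gauss = rng.normal(0.0, gauss_sigma, img.shape)              # additive term
    return np.clip(img + img * (poisson + gauss), 0.0, 1.0)

Here are the results: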

TGVDenoise
simplex_TGVD.png

ATrousWaveletTransform
simplex_ATWT.png

GREYCstoration
simplex_GREYCs.png

MultiscaleMedianTransform
simplex_MMT.png

ACDNR
simplex_ACDNR.png

Again, all tools have been applied without masks (not even ACDNR's built-in mask), trying to achieve the best possible result in each case.

The clear winner is again TGVDenoise. You may have to download the images and inspect them zoomed 4:1 to properly compare the results. As expected, ATWT performs extremely well for smooth targets. GREYCstoration also yields a very good result, but the strongest points of MMT and ACDNR definitely don't shine on smooth images without any edges like this one.

As happens with the double gradient image, this is a difficult target that tends to expose the weakest points of the most specialized algorithms. Some of these algorithms perform well for the double gradient and poorly for the simplex noise image (ACDNR), and vice versa (ATWT, GREYCstoration). I hope this will give you a more complete picture of the noise reduction tools that we have currently in PixInsight.

The bottom line is that TGVDenoise seems to outperform everything else that we have implemented in a variety of contexts. Does this mean that TGVDenoise will replace all of the other noise reduction tools? Well, not actually, and I have an example that shows this.

If you want to repeat this test yourself, I have uploaded the corresponding PixInsight project.
 
This is a terrific test Juan, thanks for considering my comments on the first one. I think you'll agree the results are more subtle than in the first test, where it was easy to pick out a winner because some of the losers really trashed the straight lines. I like your approach of adding image-masked noise, as it indeed emulates flux noise. You could also add a fixed noise component to emulate read-out noise, to ensure that even black areas are not noise-free. I doubt it would make a substantial difference in the test, but hey, we're specialists here :)
 