Noise Reduction Challenge

Carlos Milovic

Well-known member
As some of you know, we are developing new noise reduction tools. As part of some early experiments, I created a synthetic image composed of two linear gradients. To this, Gaussian noise has been added to simulate a real, noisy image. Attached are the original image and three noisy images with different degrees of noise.
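For anyone who wants to regenerate similar test data, here is a minimal NumPy/Pillow sketch. The image size, gradient layout, and noise sigmas are my assumptions (the "gauss10/20/40" file names suggest sigmas of 10, 20 and 40 on the 8-bit scale); the attached PNGs are the authoritative data.

```python
import numpy as np
from PIL import Image

SIZE = 512  # assumed size; the attached PNGs are the reference

# Two linear gradients: a horizontal ramp on the top half and a
# vertical ramp on the bottom half (the exact layout is a guess).
top = np.tile(np.linspace(0.0, 255.0, SIZE), (SIZE // 2, 1))
bottom = np.tile(np.linspace(0.0, 255.0, SIZE // 2)[:, None], (1, SIZE))
clean = np.vstack([top, bottom])
Image.fromarray(clean.astype(np.uint8), mode="L").save("bigradient.png")

# Additive Gaussian noise at three levels, clipped to the 8-bit range.
rng = np.random.default_rng(0)
for sigma in (10, 20, 40):
    noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0, 255)
    Image.fromarray(noisy.astype(np.uint8), mode="L").save(
        f"bigradient_gauss{sigma}.png")
```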
So, the challenge is this: use any tools you want to process the images (either inside PI or with other software) and publish your best results. If you upload results obtained with different tools (for example, comparing your best results with GREYCstoration and ACDNR), that would be greatly appreciated. Please accompany your results with a detailed description of the steps (if there is more than one), all the parameters you adjusted, the approximate execution time, and some comments of yours about the sensitivity of the parameters and ease of use.

Results should be evaluated by three criteria:
- Edge preservation
- Presence of artefacts (Gibbs effects, spurious pixels, staircase effects, etc.).
- Smoothness of the gradients.

Images may be rescaled for display or comparison after noise reduction. If you want to compute an error measurement, use either the mean quadratic difference between the original and the denoised image, or the mean absolute difference.
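For those computing the metric outside PI, a minimal sketch (NumPy/Pillow; the function name and the normalization to [0,1] are my choices):

```python
import numpy as np
from PIL import Image

def error_metrics(original_path, denoised_path):
    """Mean quadratic and mean absolute difference between two images."""
    a = np.asarray(Image.open(original_path).convert("L"), np.float64) / 255.0
    b = np.asarray(Image.open(denoised_path).convert("L"), np.float64) / 255.0
    mse = np.mean((a - b) ** 2)    # mean quadratic difference
    mae = np.mean(np.abs(a - b))   # mean absolute difference
    return mse, mae

# Example: error_metrics("bigradient.png", "my_denoised_result.png")
```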


Have fun! :)
I'll upload the results of the new algorithm in a few days.

 

Attachments

  • bigradient.png
  • bigradient_gauss10.png
  • bigradient_gauss20.png
  • bigradient_gauss40.png
Very nice results, Enzo! I'm especially surprised by the performance of ACDNR in the harder example. Was it difficult to fine-tune the parameters?
 
Not much (what required more trial and error were GREYCstoration and MMT). Images with stars, color, fine structures and various noise types are more complicated, I think. I also suspect that MMT can achieve better results.

Greetings,

Enzo.
 
Well, since there are no more inputs, I'll show the results I've got with the new algorithm. It is based on Total Generalized Variation. Simply put, it is a diffusion problem (i.e. two fluids interacting) that is set to evolve until a steady state is reached. This physical evolution has some restrictions to preserve edges and to generate smooth surfaces where the gradients involved are low, thus avoiding staircase artefacts.
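For reference, the usual second-order TGV denoising model from the literature (Bredies, Kunisch and Pock) can be written as follows; whether this is exactly the variant implemented here is my assumption:

```latex
% f: noisy image, u: denoised result, w: auxiliary vector field,
% E(w): symmetrized gradient of w; lambda, alpha_1, alpha_0: weights.
\min_{u,\,w}\; \frac{\lambda}{2}\,\|u - f\|_2^2
  + \alpha_1 \int_\Omega |\nabla u - w|\,\mathrm{d}x
  + \alpha_0 \int_\Omega |\mathcal{E}(w)|\,\mathrm{d}x
```

The two coupled variables u and w are the "two fluids": where w tracks the gradient of u, the first regularization term vanishes and the second keeps the gradient field smooth (no staircase), while at edges w can stay small and the model behaves like plain TV.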

The nice thing about this algorithm is that it is also useful to regularize deconvolutions and other inverse problems, so I may include it in other processes as well.
The PixInsight implementation is going to wait for a while... I'm still experimenting with the algorithm and designing new ways to improve it. In particular, I'm looking for a statistical method that spatially modifies the strength of the algorithm (this may make the use of masks unnecessary).
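One common way to write such a spatially adaptive strength (my reading of the idea, not necessarily the final formulation) is to replace the scalar fidelity weight by a per-pixel map estimated from local noise statistics:

```latex
% lambda(x): per-pixel fidelity weight estimated from local statistics.
\min_{u}\; \int_\Omega \frac{\lambda(x)}{2}\,\bigl(u(x) - f(x)\bigr)^2\,\mathrm{d}x
  + \mathrm{TGV}^2_\alpha(u)
```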


I'm going to post another challenge, with real data, to compare algorithms. Also, it would be great if someone attacked the previous challenge with another approach.
 

Attachments

  • bigradient_gauss20_tgv.png
  • bigradient_gauss40_tgv2.png
Carlos Milovic said:
Well, since there are no more inputs,
Time, time, time!!! I don't know where to find it!

Here is my approach, always with more than one tool in each case. It took me about five minutes each for gauss10 and gauss20, and half an hour for gauss40.

Regards, Alejandro
 

Attachments

  • Gauss10_20_40_NoiseReduction.xpsm
  • bigradient_gauss40_NoiseReduction.png
  • bigradient_gauss20_NoiseReduction.png
  • bigradient_gauss10_NoiseReduction.png
New challenge, now with real-world data:

This is a crop of CCD data, green filter, already stretched but with no further processing. It corresponds to an integrated and calibrated master light.

Good luck!


PS: Alejandro, clever approach. I played with your icons. It seems that a soft threshold yields slightly better results in ATWT (see the note below). Also, the first two processes of your chain worked quite well with this new challenge, but the third failed completely.
PS2: As before, I'll upload the results of the new algorithm in a couple of days.
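For context, the soft threshold mentioned in the PS shrinks every wavelet coefficient towards zero instead of cutting it off, which tends to leave fewer spurious pixels:

```latex
% Hard vs. soft thresholding of a wavelet coefficient c with threshold t:
T_{\mathrm{hard}}(c) = \begin{cases} c, & |c| \ge t \\ 0, & |c| < t \end{cases}
\qquad
T_{\mathrm{soft}}(c) = \operatorname{sign}(c)\,\max\bigl(|c| - t,\, 0\bigr)
```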
 

Attachments

  • m20.tar.gz
Hi Carlos

Here is your original image:
i1.png


Here is my processed version. I wanted to preserve maximum detail, so I used MMT with your original image as a mask:

i2.png


Capture.png

Blinking the two images is an interesting way to compare them.

cheers
 
Hi Carlos

I started with CBR to minimize the lines, and then used Deconvolution and ATWT, in both cases with a mask. I also removed a black pixel with CloneStamp.
Attached is the process.

Rotate 90°, apply CBR, and rotate back again.
Screenshot from 2012-06-16 18:07:34.png



Repair black pixel
Screenshot from 2012-06-16 18:40:40.png


Generate a star mask to use in Deconvolution to protect the stars
Screenshot from 2012-06-16 18:09:38.png



Clone the image, apply LHE and ATWT, and use it as a mask
Screenshot from 2012-06-16 18:11:24.png



Noise reduction with ATWT, protecting the stars with the mask
Screenshot from 2012-06-16 18:13:42.png



Final
m20_greennoise_G_procesed.png


Regards, Alejandro.
 

Attachments

  • M20_greennoise_G.xpsm
Hi guys

Sorry for the delay. Here is the result with the current implementation of the new noise reduction algorithm. No masks, and only two parameters to fine-tune. I hope you like it.

Now I'm working on an adaptive version of the algorithm that optimizes the search for the data consistency parameter. In practice, this will leave only one critical parameter to fine-tune.
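A standard way to automate a data consistency weight is the discrepancy principle: pick the weight so that the residual matches the expected noise energy (whether this is the statistical criterion used here is my assumption):

```latex
% N: number of pixels, sigma: estimated noise standard deviation.
\|u_\lambda - f\|_2^2 \;\approx\; N\,\sigma^2
```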
Another path I'm following is using this algorithm to regularize deconvolutions. In the next post I'll show you an example with MRI data.
 

Attachments

  • m20_tgvdenoise.tif
Here are the results of using TGV as a regularization constraint in a deconvolution.

This is an angiography image; the original is shown first. If we simulate a compressed sensing capture in Fourier space (that means taking fewer samples... something like having pixels not exposed to light), the result looks like a complex blurring pattern. In this example, I simulated an 8x subsampling (randomly keeping only 1/8th of the entire Fourier space). Finally, the reconstruction obtained with TGV is shown.
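A minimal sketch of the simulated capture described above: keep a random 1/8th of the Fourier coefficients and zero the rest (the sampling pattern, seed, and output file name are my choices, not the author's):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("angio.png").convert("L"), np.float64)

# Random mask keeping ~1/8 of the Fourier coefficients.
rng = np.random.default_rng(1)
mask = rng.random(img.shape) < 1.0 / 8.0

F = np.fft.fft2(img)
F_sub = np.where(mask, F, 0.0)

# The zero-filled inverse transform shows the "complex blurring pattern";
# a TGV-regularized reconstruction would then be run against F_sub.
zero_filled = np.abs(np.fft.ifft2(F_sub))
Image.fromarray(np.clip(zero_filled, 0, 255).astype(np.uint8)).save(
    "angio_x8_zerofill.png")
```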
 

Attachments

  • angio_tgv.png
  • angio_x8.png
  • angio.png
Thanks Carlos
Looks very promising  8)
For sure PI will rock at the top with these new functions!


Note that there is a difference in denoising taste between the USA and Europe! :police:
US users tend to completely denoise their images, whereas Europeans tend to preserve a little touch of noise to make the image more realistic.
It is not a criticism, just a way of seeing an image. Noise can be nice if well processed ;)
But Juan tends to be on the US side of denoising >:D >:D >:D >:D
So, sometimes, it is hard to follow a processing tutorial exactly. Some renditions are too processed, with no noise at all. So where you apply a coefficient of 100%, I must go between 20 and 50% O:)
In my example I tried to go in the US direction :angel:
 
Well, in fact, we started the "US way" quite a long time ago... so it is really the "Hispano-American way". ;)

Talking seriously, I was very impressed by the first examples I saw with this algorithm. And it is quite new: the first publication was in 2009, while the papers I'm basing most of my implementation on are from February and April of this year. So we can rightly say that we are at the crest of the wave.
 
Yes, I think this new algorithm is very powerful.
I read some papers about TGV (Total Generalized Variation, 2nd and 3rd order).
 
:D :D :D I said I read the papers, I didn't say I understood them :D :D :D :D


Normally, 3rd order should be better?

http://math.uni-graz.at/kunisch/papers/paper.pdf

page 28: While the second order model tries to approximate the image based on affine functions, the third order model additionally allows for quadratic functions, which is clearly better in this case.
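The quoted behaviour comes from a known property of TGV: order-k TGV vanishes exactly on polynomials of degree below k, so TGV^2 favours piecewise affine images while TGV^3 also admits quadratic pieces:

```latex
\mathrm{TGV}^k_\alpha(u) = 0 \quad\Longleftrightarrow\quad
u \ \text{is a polynomial of degree} < k
```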
 
That paper is one of the hardest... I found that the latest one, from K. Bredies, was much more understandable. In fact, I'm working with the latest two papers, and the one applied to MRI.

It seems that I overlooked those results :p Now that you have pointed them out, I can't see a "clear" advantage of TGV3 over TGV2, especially if we assume that the extra order will slow down the execution time by at least 10% (based on reported differences between TV, which is TGV1, and TGV2). Indeed, TGV3 seems a bit smoother, but I think that the same (or better) results may be achieved with the spatially dependent data fidelity term that I'm working on.
 
Hi,

Your last example is impressive. It seems that the SNR is not very high, isn't it? If so, it's far better than the solutions we already have.

BTW, actually I think there are more important factors than noise defining the different astrophotography schools.

http://www.astro-photographer.org


Best regards,
Vicent.
 
Hey Carlos
My reply may be somewhat late, but I got this result using only GREYCstoration and an inverse lightness mask.


Regards
Geert
 

Attachments

  • m20_greennoise_G_gvs.jpg
  • Photo1.jpg