PixInsight Forum
PixInsight => Image Processing Challenges => Topic started by: Carlos Milovic on 2012 June 11 17:58:00

As some of you know, we are developing new noise reduction tools. As part of some early experiments, I created a synthetic image composed of two linear gradients. To this, Gaussian noise has been added to simulate a real, noisy image. Attached are the original image and three noisy images with different degrees of noise.
So, the challenge is this: use all the tools you want to process the images (either inside PI or with other software) and publish your best results. If you upload results obtained with different tools (for example, comparing your best results with GREYCstoration and ACDNR), that would be greatly appreciated. Please accompany your results with a detailed description of the steps (if there is more than one), all the parameters that you adjusted, the approximate execution time, and some comments of yours about the sensitivity of the parameters and ease of use.
Results should be evaluated by three criteria:
 Edge preservation
 Presence of artefacts (Gibbs effects, spurious pixels, staircase effects, etc.).
 Smoothness of the gradients.
Images may be rescaled for display or comparison after noise reduction. If you want to compute an error measurement, use either the mean quadratic difference between the original and the denoised image, or the mean absolute difference.
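For reference, the two suggested error measurements can be sketched in a few lines of NumPy (the function name and the synthetic data are illustrative, not part of the challenge files):

```python
import numpy as np

def error_metrics(original, denoised):
    """Compare a denoised image against the noise-free original."""
    diff = original.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)       # mean quadratic difference
    mae = np.mean(np.abs(diff))    # mean absolute difference
    return mse, mae

# Tiny example on synthetic data, in the spirit of the challenge:
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 10), (10, 1))   # a linear gradient
noisy = clean + rng.normal(0.0, 0.1, clean.shape)     # additive Gaussian noise
mse, mae = error_metrics(clean, noisy)
```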
Have fun! :)
I'll upload the results of the new algorithm in a few days.

Here we go! :D

Very nice results, Enzo! I'm especially surprised by the performance of ACDNR in the harder example. Was it difficult to fine-tune the parameters?

Not really (what required more trial and error were GREYCstoration and MMT). Images with stars, color, fine structures and various noise types are more complicated, I think. I also suspect that MMT can achieve better results.
Greetings,
Enzo.

Well, since there are no more inputs, I'll show the results I've got with the new algorithm. It is based on Total Generalized Variation (TGV). Simply put, it is a diffusion problem (i.e., two fluids interacting) that is set to evolve until a steady state is reached. This physical evolution has some restrictions to preserve edges, and to generate smooth surfaces where the gradients involved are low, thus avoiding staircase artefacts.
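The TGV code itself isn't shown in this thread, but the diffusion picture can be illustrated with its simpler ancestor: a smoothed total variation (TV) gradient flow evolved toward a steady state. This is only a sketch; all parameter values are made up, and real TGV adds a second-order term that this toy version lacks (which is exactly what suppresses the staircase artefacts).

```python
import numpy as np

def tv_denoise(noisy, lam=1.0, tau=0.02, iters=500, eps=1e-2):
    """Evolve u by a smoothed TV gradient flow until (near) steady state.
    Approximately minimizes: sum sqrt(|grad u|^2 + eps) + (lam/2) sum (u - f)^2."""
    u = noisy.astype(np.float64)
    for _ in range(iters):
        # Forward differences with replicated (Neumann) boundary.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag          # edge-normalized flux
        # Divergence of the flux via backward differences.
        div = (np.diff(px, axis=1, prepend=0.0)
               + np.diff(py, axis=0, prepend=0.0))
        # Diffuse along edges while staying faithful to the data.
        u = u + tau * (div - lam * (u - noisy))
    return u
```

The `lam` term pulls the evolving image back toward the observed data, while the divergence term diffuses along, but not across, edges; the steady state balances the two.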
The nice thing about this algorithm is that it is also useful for regularizing deconvolutions and other inverse problems, so I may include it in other processes as well.
The PixInsight implementation is going to wait for a while... I'm still experimenting with the algorithm and designing new ways to improve it. In particular, I'm looking for a statistical method that spatially modulates the strength of the algorithm (this may make masks unnecessary).
I'm going to post another challenge, with real data, to compare algorithms. Also, it would be great if someone attacked the previous challenge with another approach.

Well, since there are no more inputs,
Time, time, time!!! I don't know where to find it!
Here is my approach, using more than one tool in each case. It took me about five minutes for gauss10 and gauss20, and half an hour for gauss40.
Saludos!, Alejandro

Very nice! I'm impressed by your last result. I'm gonna check your icons tonight.

New challenge, now with real world data:
This is a crop of CCD data, green filter, already stretched but with no further processing. It corresponds to an integrated and calibrated master light.
Good luck!
PS: Alejandro, clever approach. I played with your icons. It seems that a soft threshold yields slightly better results in ATWT. Also, the first two processes of your chain worked quite well on this new challenge, but the third failed completely.
PS2: As before, I'll upload the results of the new algorithm in a couple of days.

Hi Carlos
Here is your original image:
(http://www.astroccd.eu/PIXINSIGHT/i1.png)
Here is my processed version.
I wanted to preserve maximum detail, so I used MMT with your original image as a mask:
(http://www.astroccd.eu/PIXINSIGHT/i2.png)
(http://www.astroccd.eu/PIXINSIGHT/Capture.png)
Blinking the two images is an interesting way to compare them.
cheers

Hi Carlos
I started with CBR to minimize the lines, and then used deconvolution and ATWT, in both cases with a mask. I also removed a black pixel with CloneStamp.
The process icons are attached.
Rotate 90°, apply CBR and rotate back again.
(http://3.bp.blogspot.com/NibB2NLKx6Y/T9z5MKj_SMI/AAAAAAAAAZ0/7o5GN_LkBxQ/s1600/Screenshot+from+20120616+18:07:34.png)
Repair black pixel
(http://4.bp.blogspot.com/ryqlnC_QMSw/T9z9fXl8YJI/AAAAAAAAAaQ/RynijY5gYtQ/s1600/Screenshot+from+20120616+18:40:40.png)
Generate starmask to use in deconvolution to protect stars
(http://2.bp.blogspot.com/TBI7BRvurIU/T9z4V7oMmnI/AAAAAAAAAZM/1azT7V0PBT0/s1600/Screenshot+from+20120616+18:09:38.png)
Clone the image, apply LHE and ATWT and use as mask
(http://3.bp.blogspot.com/dNNCCBARlxI/T9z5YGv2hI/AAAAAAAAAZ8/jRR9XoQsr5w/s1600/Screenshot+from+20120616+18:11:24.png)
Noise reduction with ATWT protecting stars with mask
(http://4.bp.blogspot.com/QlOR30Azfw4/T9z4coPcZYI/AAAAAAAAAZU/MP9BHJhduLY/s1600/Screenshot+from+20120616+18:13:42.png)
Final
(http://1.bp.blogspot.com/J4iGYplA8nI/T9z52u8IB4I/AAAAAAAAAaE/8dxXX90dYkE/s1600/m20_greennoise_G_procesed.png)
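As a side note, the rotate/fix/rotate-back trick in the first step can be sketched in NumPy. This is a hypothetical banding reduction that only subtracts per-row median offsets; the real CBR script is more careful, but the rotation idea is the same:

```python
import numpy as np

def reduce_banding(img):
    """Remove horizontal banding: subtract each row's median offset,
    re-centering on the global median to preserve the mean level."""
    row_offset = np.median(img, axis=1, keepdims=True)
    return img - (row_offset - np.median(row_offset))

def reduce_vertical_banding(img):
    """Banding runs vertically: rotate 90 deg, fix rows, rotate back."""
    return np.rot90(reduce_banding(np.rot90(img)), -1)
```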
Saludos. Alejandro.

Hi guys
Sorry for the delay. Here are the results with the current implementation of the new noise reduction algorithm. No masks, and only two parameters to fine-tune. I hope you like it.
Now I'm working on an adaptive version of the algorithm that optimizes the search for the data-consistency parameter. In practice, this will leave only one critical parameter to fine-tune.
Another path I'm following is using this algorithm to regularize deconvolutions. In the next post I'll show you an example with MRI data.

Here are the results of using TGV as a regularization constraint in a deconvolution.
This is an angiography image; I'm showing the original first. If we simulate a compressed sensing capture in Fourier space (that means taking fewer samples... it would be something like having pixels not exposed to light), the result looks like a complex blurring pattern. In this example, I simulated an 8x subsampling (randomly keeping only 1/8th of the entire Fourier space). Finally, the reconstruction made with TGV is shown.
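The 8x Fourier subsampling described above is easy to simulate with an FFT mask. A sketch only: uniform random sampling is assumed here, whereas MRI work typically uses variable-density patterns, and the zero-filled inverse transform stands in for the "complex blurring pattern" before any TGV reconstruction.

```python
import numpy as np

def simulate_cs_capture(image, fraction=1 / 8, seed=0):
    """Keep a random subset of Fourier coefficients, zero the rest;
    the zero-filled inverse FFT shows the resulting aliasing pattern."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    mask = rng.random(image.shape) < fraction   # ~1/8 of samples kept
    undersampled = np.where(mask, F, 0)
    zero_filled = np.real(np.fft.ifft2(undersampled))
    return zero_filled, mask
```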

Thanks Carlos
Looks very promising 8)
For sure PI will rock at the top with these new functions!
Note that there is a difference in denoising taste between the USA and Europe! :police:
US users tend to completely denoise their images, whereas Europeans tend to preserve a little touch of noise to make the image more realistic.
It is not a criticism, just a way of seeing an image. Noise can be nice if well processed ;)
But Juan tends to be on the US side of denoising >:D >:D >:D >:D
So, sometimes, it is hard to follow a processing tutorial exactly. Some renditions are overprocessed, with no noise left at all. So where you apply a coefficient of 100%, I must go between 20 and 50% O:)
In my example I tried to go in the US direction :angel:

Well, in fact, we started the "US way" quite a long time ago... so it is the "Hispanoamerican way". ;)
Talking seriously, I was very impressed by the first examples I saw with this algorithm. And it is quite new: the first publication was in 2009, while the papers I'm basing most of my implementation on are from February and April of this year. So we can rightly say that we are at the crest of the wave.

Yes, I think this new algorithm is very powerful.
I read some papers about TGV (Total Generalized Variation, 2nd and 3rd order )

Did you find any advantage in the third-order TGV? I have not seen examples with it.

:D :D :D I said I read the papers, I didn't say I understood them :D :D :D :D
Normally, the 3rd order should be better?
http://math.unigraz.at/kunisch/papers/paper.pdf
page 28: "While the second order model tries to approximate the image based on affine functions, the third order model additionally allows for quadratic functions, which is clearly better in this case."

That paper is one of the hardest... I found that the latest one, from K. Bredies, was much more understandable. In fact, I'm working from the latest two papers, plus the one applied to MRI.
It seems that I overlooked those results :P Now that you've pointed them out, I still can't see a "clear" advantage of TGV3 over TGV2, especially if we assume that the extra order will slow down execution by at least 10% (based on the reported differences between TV, which is TGV1, and TGV2). Indeed TGV3 seems a bit smoother, but I think that the same (or better) results may be achieved with the spatially dependent data fidelity term that I'm working on.

Hi,
Your last example is impressive. It seems that the SNR is not very high, is it? If so, this is far better than the solutions we already have.
BTW, I actually think there are more important factors than noise defining the different astrophotography schools.
http://www.astrophotographer.org
Best regards,
Vicent.

Hey Carlos
My reply may be somewhat late, but
I got this result using only GREYCstoration and an inverse lightness mask.
Regards
Geert

Hi Carlos
Is there any news about this new algorithm?
Best
Philippe

Hi Philippe
Sorry, I had not noticed your reply :P
The current status of the algorithm is this:
 There is a TGV denoise process module in Juan's hands, with a few problems in the code, and waiting for a major optimization. Once this is done, I'll start implementing TGV-based deconvolution.
 I wrote a Matlab toolkit that implements basic TGV regularization for denoising, deconvolution and compressed sensing problems (the latter for Magnetic Resonance Imaging). It is not publicly available, but I may share it for development purposes.
 There is also Matlab code for a spatially dependent TGV implementation. Not as nicely packaged as the previous one, but it works.
 I'm implementing the SATGV algorithm in Matlab... it is a spatially dependent variant that automatically updates the data fidelity term using a multiscale scheme. I still lack one fundamental piece of information there, and there are some bugs in the code... but this should be ready in the short term.
So, as soon as Juan finishes the 1.8 core application and turns to the processing modules, we'll reactivate this project. I have a lot of code to translate, and new things to try.

Carlos, is the method noise for TGV as expected? I am wondering if the visible structure in the residuals is excessive. Apply an STF to see it better.
Thanks,
Mike

Hi Mike
Yes, the method is working as expected. If you inspect the residual, you'll find that the structures that are a bit blurred are smaller than, or comparable to, the noise level. This cannot be avoided, since TGV works like diffusion between two fluids: we may control the direction the "water" flows, but there is always a tradeoff between noise removal and loss of detail. I think I may get better results with the new adaptive algorithm, which should preserve more detail at the edges.
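For anyone wanting to try this check at home: the "method noise" is just the residual between input and output, and the STF inspection Mike suggests amounts to a rescale for display. A sketch only; the 3x3 box blur here is merely a stand-in denoiser, not TGV.

```python
import numpy as np

def method_noise(noisy, denoised):
    """Residual removed by the denoiser; for a good method it should
    look like pure noise, with little recognizable image structure."""
    return noisy - denoised

def stretch_for_display(residual):
    """Crude screen stretch: rescale the residual to [0, 1] for viewing."""
    lo, hi = residual.min(), residual.max()
    return (residual - lo) / (hi - lo) if hi > lo else np.zeros_like(residual)

def box_blur(img):
    """3x3 mean filter, a stand-in denoiser for the demonstration."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
```

Usage: `view = stretch_for_display(method_noise(noisy, box_blur(noisy)))`, then display `view` to judge how much real structure the denoiser removed.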

Good news here from the development team :)
We have a working TGVDenoise process that is right now under testing and optimization.
First results are here, with the bigradient image, 20% and 40% Gaussian noise:

It looks very good! :) Congrats!!!
Saludos, Alejandro.

Looking forward to testing this new algorithm!!! Great work! Thanks!