
Noise Reduction Challenge


Carlos Milovic:
As some of you know, we are developing new noise reduction tools. As part of some early experiments, I created a synthetic image composed of two linear gradients. To this, Gaussian noise has been added to simulate a real, noisy image. Attached are the original image and three noisy images with different degrees of noise.
So, the challenge is this: use all the tools you want to process the images (either inside PI or with other software) and publish your best results. If you upload results obtained with different tools (for example, comparing your best results with GREYCstoration and ACDNR), that would be greatly appreciated. Please accompany your results with a detailed description of the steps (if there is more than one), all the parameters you adjusted, the approximate execution time, and some comments of your own about the sensitivity of the parameters and ease of use.
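
For reference, here is a minimal sketch of a synthetic test image in this spirit; the exact gradient layout, image size and noise level below are assumptions, not the attached data:

# Illustrative only: two linear gradients stacked vertically, plus additive
# Gaussian noise clipped to [0, 1]. The real test images may be laid out differently.
import numpy as np

def make_test_image(size=512, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    ramp = np.linspace(0.0, 1.0, size)
    top = np.tile(ramp, (size // 2, 1))                             # horizontal gradient
    bottom = np.tile(ramp[:, None], (1, size))[:size - size // 2]   # vertical gradient
    clean = np.vstack([top, bottom])
    noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)
    return clean, noisy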

Results should be evaluated by three criteria:
- Edge preservation
- Presence of artefacts (Gibbs effects, spurious pixels, staircase effects, etc.).
- Smoothness of the gradients.

Images may be rescaled for display or comparison after noise reduction. If you want to compute an error measurement, use either the mean squared (quadratic) difference between the original and the denoised image, or the mean absolute difference.
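
A minimal sketch of these two measures, assuming both images are loaded as floating-point NumPy arrays in the same range:

import numpy as np

def mse(original, denoised):
    # Mean squared ("mean quadratic") difference.
    return float(np.mean((original - denoised) ** 2))

def mae(original, denoised):
    # Mean absolute difference.
    return float(np.mean(np.abs(original - denoised)))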


Have fun! :)
I'll upload the results of the new algorithm in a few days.

Enzo De Bernardini:
Here we go! :D

Carlos Milovic:
Very nice results, Enzo! I'm especially surprised by the performance of ACDNR in the harder example. Was it difficult to fine-tune the parameters?

Enzo De Bernardini:
Not much (what required more trial and error was GREYCstoration and MMT). Images with stars, color, fine structures and various noise types are more complicated, I think. I also suspect that MMT can achieve better results.

Greetings,

Enzo.

Carlos Milovic:
Well, since there are no more entries, I'll show the results I got with the new algorithm. It is based on Total Generalized Variation (TGV). Simply put, it is a diffusion problem (think of two interacting fluids) that evolves until a steady state is reached. This physical evolution is constrained to preserve edges and to generate smooth surfaces where the gradients involved are low, thus avoiding staircase artefacts.
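
For context, the standard second-order TGV denoising model (Bredies, Kunisch & Pock, 2010), which may differ in detail from the implementation described here, reads:

\min_u \tfrac{1}{2}\|u - f\|_2^2 + \mathrm{TGV}_\alpha^2(u),
\qquad
\mathrm{TGV}_\alpha^2(u) = \min_w \, \alpha_1 \int |\nabla u - w| \, dx + \alpha_0 \int |\mathcal{E}(w)| \, dx,

where f is the noisy image, u the denoised result, w an auxiliary vector field, and \mathcal{E}(w) = \tfrac{1}{2}(\nabla w + \nabla w^{T}) its symmetrized gradient. The first-order term preserves edges much like plain Total Variation; the second-order term lets u vary linearly (as in these gradient images), which is what suppresses the staircase artefacts.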

The nice thing about this algorithm is that it is also useful for regularizing deconvolution and other inverse problems, so I may include it in other processes as well.
The PixInsight implementation is going to have to wait for a while... I'm still experimenting with the algorithm and designing new ways to improve it. In particular, I'm looking for a statistical method that spatially modulates the strength of the algorithm (this may make masks unnecessary).
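
As an illustration (these specifics are assumptions, not the planned implementation): the same regularizer plugs into deconvolution by swapping the data term, and the spatially adaptive idea amounts to letting the weight vary per pixel,

\min_u \tfrac{1}{2}\|K u - f\|_2^2 + \mathrm{TGV}_{\alpha(x)}^2(u),

where K is convolution with the PSF and \alpha(x) is a local strength, e.g. driven by local noise statistics, playing the role that a mask would otherwise play.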


I'm going to post another challenge, with real data, to compare algorithms. Also, it would be great if someone tackled this challenge with another approach.
