Hi Herbert,
These data are provided for the regularized deconvolution algorithms (Richardson-Lucy and Van Cittert). At each iteration, the following data are provided for the
residual image. The residual is the difference between the original image and the current deconvolved image (that is, the image resulting from the previous iteration):
sigma: Standard deviation of the residual.
Delta_sigma: The quotient (sigma_0 - sigma)/sigma, where sigma_0 is the standard deviation of the previous residual (that is, the value of sigma in the previous iteration).
sigma_s: An estimate of the standard deviation of the noise in the residual image.
n: The fraction of significant structures in the [0,1] range, where 1 corresponds to the total number of pixels in the image. At each iteration, the regularized algorithms divide the image into noise and significant structures. Significant structures are preserved in the solution, while the noise is removed or attenuated.
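For concreteness, here is a minimal sketch (in Python/NumPy, not necessarily what the tool itself uses) of how these four diagnostics could be computed from a residual image. The MAD-based noise estimate and the 3*sigma_s significance threshold are my own stand-ins; the actual algorithms use their own noise evaluation and structure detection.

```python
import numpy as np

def residual_diagnostics(residual, prev_sigma):
    """Per-iteration diagnostics for a deconvolution residual.

    Returns (sigma, Delta_sigma, sigma_s, n) as defined in the text.
    The noise estimate and significance threshold are illustrative
    stand-ins, not the tool's actual method.
    """
    sigma = residual.std()
    # Delta_sigma = (sigma_0 - sigma) / sigma, with sigma_0 from the previous iteration.
    delta_sigma = (prev_sigma - sigma) / sigma
    # Robust noise estimate: median absolute deviation scaled to Gaussian sigma.
    sigma_s = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    # Fraction of pixels flagged as significant structure (here: above 3*sigma_s).
    n = np.mean(np.abs(residual) > 3.0 * sigma_s)
    return sigma, delta_sigma, sigma_s, n
```

For pure Gaussian noise, n stays near zero, since almost no pixels exceed the 3*sigma_s threshold; residual structure from an under-deconvolved image raises it.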
These data can be used to assess how the algorithms converge. Regularized algorithms are designed to converge to a solution where the deconvolved image cannot be further improved. If the algorithm converges, the sigma and Delta_sigma values decrease at each iteration. The noise estimate (sigma_s) and the fraction of significant data (n) should stabilize after a sufficiently large number of iterations. With too many iterations the algorithms may start to diverge, which you can detect as negative Delta_sigma values. Sometimes this happens after a relatively small number of iterations. Sometimes the algorithms start converging after a short initial period of divergence.
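As a rough illustration of that stopping criterion, one could scan the sigma values logged at each iteration and flag the first one with a negative Delta_sigma. The `skip` parameter is a hypothetical knob for ignoring a short initial divergent period, as described above:

```python
def first_divergent_iteration(sigmas, skip=0):
    """Return the index of the first iteration whose Delta_sigma is negative
    (i.e. sigma increased relative to the previous residual), or None if the
    sequence converges throughout. Iterations up to `skip` are ignored, to
    tolerate a short initial period of divergence.
    """
    for i in range(1, len(sigmas)):
        # Delta_sigma at iteration i, per the definition in the text.
        delta_sigma = (sigmas[i - 1] - sigmas[i]) / sigmas[i]
        if delta_sigma < 0 and i > skip:
            return i
    return None
```

In practice one would stop iterating at (or shortly before) the returned index and keep the last converging solution.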