Thank you, bitli!
After preprocessing with the starlet transform the image has virtually no noise...
Before using the StarMask tool, I used MLT to apply a wavelet transform with the first and residual layers removed. What I mean by the sentence above is that after removing these wavelet layers, the preprocessed image has almost no noise (in fact, the background is nearly black).
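To make the idea concrete, here is a minimal sketch of a starlet (à trous) decomposition in which the first detail layer (mostly noise) and the residual layer are dropped. The function names are hypothetical, not MLT's actual implementation; MLT offers many more options (noise thresholding, per-layer biases, etc.).

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_layers(image, n_layers=4):
    """Decompose an image into starlet (a trous) detail layers plus a residual.

    At each scale j, the image is smoothed with a B3-spline kernel whose taps
    are separated by 2**j - 1 zeros ("holes"); the detail layer is the
    difference between successive smoothings."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline
    layers = []
    current = image.astype(float)
    for j in range(n_layers):
        k = np.zeros(4 * 2**j + 1)
        k[:: 2**j] = kernel  # insert holes between taps at scale j
        smoothed = convolve1d(
            convolve1d(current, k, axis=0, mode='reflect'),
            k, axis=1, mode='reflect')
        layers.append(current - smoothed)  # detail at scale j
        current = smoothed
    return layers, current  # detail layers + large-scale residual

def remove_first_and_residual(image, n_layers=4):
    """Rebuild the image without layer 1 (noise) and without the residual."""
    layers, _residual = starlet_layers(image, n_layers)
    return sum(layers[1:])
```

The decomposition is exact: the sum of all detail layers plus the residual reconstructs the original image, which is why removing individual layers behaves predictably.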
...so we can use this parameter effectively to control inclusion of dim stars in the mask.
StarMask's threshold parameter is intended to exclude noise from the generated star mask. However, since there is no significant noise after preprocessing with wavelets in this case, the threshold parameter can be used to filter out faint stars, which are treated as "noise" in this context.
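A toy analogue of this behavior, assuming a noise-free image where every detected structure is a star: raising the threshold then excludes progressively fainter stars rather than noise. This is an illustrative sketch, not StarMask's actual detection algorithm.

```python
import numpy as np
from scipy import ndimage

def star_mask(image, threshold=0.1):
    """Binary mask keeping only structures whose peak exceeds `threshold`.

    In a noise-free image, the threshold acts as a faint-star filter:
    stars with peak values below it are dropped from the mask."""
    labeled, n = ndimage.label(image > 0)
    mask = np.zeros(image.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labeled == i
        if image[region].max() >= threshold:
            mask |= region
    return mask
```

For example, with a bright star at 0.8 and a faint one at 0.05, a threshold of 0.1 keeps only the bright star in the mask.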
Sorry for the confusion. I am speaking of a duplicate of the image used to generate the star mask, not of the image being processed. I'll think of a way to make this part of the tutorial clearer.
In this example, do you apply denoising after the deconvolution, or was denoising already applied before? My current understanding is that the order should, in theory, not matter, since we apply denoising to the low-SNR parts and deconvolution to the high-SNR parts, assuming reasonably good images (which I rarely have).
Actually, regularized deconvolution does precisely that: it separates the data into signal and noise components at each deconvolution iteration, and deconvolves only the signal component. I would never apply noise reduction before deconvolution. With the regularized algorithms that we have implemented (and also with the next generation of restoration tools that we'll release), the noise is not a problem, as you can see in the tutorial. Moreover, deconvolution should always work with the original data, before any noise reduction, since the regularization algorithms make some assumptions about the noise distribution that won't hold after noise reduction.
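The per-iteration signal/noise separation can be sketched as follows. This is a toy Richardson-Lucy loop with a crude regularization step (smooth component kept as signal, fine detail soft-thresholded as noise); it illustrates the idea only, and is not PixInsight's actual implementation, which uses wavelet-based regularization.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def regularized_rl(observed, psf, iterations=20, noise_sigma=0.01):
    """Richardson-Lucy deconvolution with a simple per-iteration
    regularization: after each multiplicative update, the fine-scale
    component of the estimate is soft-thresholded, so only the signal
    component keeps being deconvolved."""
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        # standard Richardson-Lucy multiplicative update
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode='same')
        # crude signal/noise separation: smooth part = signal,
        # fine detail below noise_sigma is shrunk away as "noise"
        smooth = gaussian_filter(estimate, 1.0)
        detail = estimate - smooth
        detail = np.sign(detail) * np.maximum(np.abs(detail) - noise_sigma, 0)
        estimate = np.maximum(smooth + detail, 0)
    return estimate
```

Because the noise component is shrunk at every iteration instead of being removed beforehand, the algorithm's statistical assumptions about the noise in the observed data remain valid throughout the restoration.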