This deserved a new topic, so here it is.
DISCLAIMER: This is a development module. Before installing it, make a backup of your current (official release) TGV module (in PixInsight/bin). Compatibility of process icons and projects is not guaranteed. Also, many things could change in future releases, including the GUI and internal behaviour.
The download links:
Windows:
http://www.astrophoto.cl/dev/TGV-pxm.dll
Linux:
http://www.astrophoto.cl/dev/TGV-pxm.so
Installation options:
a) Replace the current TGV module in PixInsight/bin with the files above. NOT RECOMMENDED
b) In PixInsight, go to Manage Modules and uninstall TGV. Close PI and launch it again. Copy the new TGV module to a new folder (for example, create PixInsight/development). Install the module from that folder.
DISCLAIMER: As said before, we do not guarantee compatibility with previous icons or projects that contain instances of TGVDenoise. The internal behaviour of this new process is a bit different, and some parameters have been renamed or changed. Also, more parameters have been added.
The current TGV module has 2 processes (we'll add a third, TGVRestoration, for deconvolution, in the coming days). Below you'll find a brief explanation of them:
---------------------------------------------------------------------------------
- TGVDenoise
This process represents the next "generation" of TGVDN: more powerful and flexible. The user interface has changed a lot from previous releases. There are now three groups of parameters: those related to the Regularization process (the noise reduction itself), the Image Model (how the noise is distributed), and the parameters that control the iterations. Let's review these parameters.
+ Image Model: We'll begin by describing the parameters of this section for a simple reason: you should set these parameters at the start and not worry too much about fine-tuning them. In most cases, you'll just need to adjust the regularization parameters later.
Noise model: Here you have three options: Gaussian, L1 Norm and Poisson. This option governs the inner engine, i.e. how the algorithm estimates the degree to which it can trust the pixel values. The Gaussian option assumes that there is a single, additive source of noise. L1 Norm assumes that only a few pixels are too far away from the real value. Poisson, on the other hand, assumes that we are dealing with linear images with dominant photon noise (and thus, noise has greater amplitude for brighter samples). In practice, Gaussian should be a better choice for non-linear images, while Poisson should be better for linear ones. L1 Norm does a fairly good job in both cases.
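For background, these three options correspond to the standard data-fidelity terms used in variational denoising (written here in generic LaTeX notation; this is orientation only, the post does not document the module's exact internal weighting):

    Gaussian:  D(u, f) = \tfrac{1}{2} \int_\Omega (u - f)^2 \, dx
    L1 Norm:   D(u, f) = \int_\Omega |u - f| \, dx
    Poisson:   D(u, f) = \int_\Omega (u - f \log u) \, dx    (generalized Kullback-Leibler divergence, up to a constant)

where u is the solution and f is the observed image.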
Additive noise: This value should be equal to the standard deviation of the additive noise. You may measure it from a homogeneous zone of the image, ideally the background sky. If you are using the Poisson model, this could be the readout noise (in the normalized range).
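If you prefer to check that measurement outside PixInsight, a minimal sketch (plain NumPy, with a synthetic background patch standing in for a real background preview) would be:

    import numpy as np

    # Illustrative only: a flat sky patch with known Gaussian noise stands in for
    # a background preview taken from your real image.
    rng = np.random.default_rng(0)
    background = 0.05 + rng.normal(0.0, 0.002, size=(200, 200))

    additive_noise = float(np.std(background))                 # value to enter as "Additive noise"
    print(f"Additive noise estimate: {additive_noise:.5f}")    # ~0.002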
SNR strength: This is probably the most counter-intuitive parameter in this process. It introduces a non-linear term in the detection of edges. Basically, it tries to model the photon noise by varying the amplitude of the standard deviation depending on the brightness of the object. For a single frame with unit gain, this value should be 1.0 (the ideal Poisson process). For a stack of many images, you should lower the value. For single images that have been rescaled, it should be higher than 1.0. To disable it, set it to zero (or the lowest possible value). Please consider that this non-linear term is used for every noise model available. The Noise model option above tells the algorithm how to trust the data; the Additive noise and SNR strength parameters tell it the amplitude of the expected noise at a given pixel. In practice, you'll see that the SNR strength parameter allows you to target more specifically whether you want to smooth the shadows or the highlights more.
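To visualize how the two parameters interact, here is a rough, purely illustrative noise-amplitude model (this is not the module's internal formula, just the usual shot-plus-read-noise idea the parameters are built on):

    import numpy as np

    def expected_noise(x, additive_noise=0.002, snr_strength=1.0):
        # Illustrative model in arbitrary units: a constant additive term plus a
        # brightness-dependent (Poisson-like) term scaled by snr_strength.
        # snr_strength = 0 reduces it to a purely additive (Gaussian) model.
        return np.sqrt(additive_noise**2 + snr_strength * np.maximum(x, 0.0))

    # Brighter samples are expected to be noisier whenever snr_strength > 0:
    print(expected_noise(np.array([0.0, 0.01, 0.1, 0.5])))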
+ Regularization parameters:
Weight: This is equivalent to the old Strength parameter. Weight is a better name, because it weights the regularization process (the noise reduction) against the data fidelity. Once you have set the Image model parameters, this should be the most critical parameter to fine-tune.
Edge protection: Although the name is the same as in the previous release, it now behaves in a different way. This is a measure, in standard deviations, of where to let the diffusion process happen, or, in other words, where to set the edge detection. In practice, you may fine-tune this parameter in the 1 to 3 range. Our advice is to use a fixed value and just work with the Weight parameter.
Sharpness: This is similar to the old Smoothness parameter. We changed the name because it was somewhat misleading. In reality, this term controls the balance between the first and second order changes in the diffusion process. What this means is that it controls how flat or oscillating the small gradients should be. This, in turn, also affects the sharpness of some borders. So, if you want to increase the sharpness of strong borders and promote flat surfaces, you should increase this value. To allow more oscillations, which in turn may generate broader (maybe fuzzier) borders, decrease it. In practice this parameter should be between 0.5 and 2.0.
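For reference, the second-order TGV regularizer from the literature makes this first/second order balance explicit (how exactly Sharpness maps to the two internal weights is an implementation detail, so take this only as background):

    TGV_\alpha^2(u) = \min_w \; \alpha_1 \int_\Omega |\nabla u - w| \, dx + \alpha_0 \int_\Omega |\mathcal{E}(w)| \, dx

where \mathcal{E}(w) is the symmetrized gradient of the auxiliary vector field w. The balance between the first order term (weighted by \alpha_1, favouring piecewise-flat surfaces and sharp borders) and the second order term (weighted by \alpha_0, allowing smooth variations) is what a parameter like Sharpness controls.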
+ Iterations parameters
They are self-explanatory and behave in the same way as in the previous release. As always, use 100 to 300 iterations to quickly try parameters, but for true convergence of the algorithm more than 1000 are recommended. More iterations do not mean that the noise reduction is going to be stronger. You may think of all the TGV processes as a simulation of a fluid diffusion process; the number of iterations is the time (in seconds) you are allowing the experiment to go on.
To get a better understanding of this, we have included a new option, "Preview window", that will show the current state of the solution at every iteration step.
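To illustrate the "diffusion time" analogy, here is a schematic iteration loop in Python (a plain isotropic diffusion step stands in for the actual TGV updates, so this is only a sketch of the idea):

    import numpy as np

    def run_iterations(u0, step, n_iterations, preview=None):
        # Each iteration advances the simulated diffusion by one small time step;
        # more iterations means more simulated time, not a stronger setting.
        u = u0.copy()
        for it in range(n_iterations):
            # Stand-in update: isotropic diffusion via a 5-point Laplacian.
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u += step * lap
            if preview is not None:
                preview(it, u)          # analogous to the new "Preview window" option
        return u

    rng = np.random.default_rng(1)
    noisy = 0.5 + rng.normal(0.0, 0.05, size=(64, 64))
    smoothed = run_iterations(noisy, step=0.2, n_iterations=300)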
----------------------------------------------------------------
- TGVInpaint
This process replaces the contents of pixels with information that comes from nearby pixels. It is similar to what DefectMap and CosmeticCorrection perform, but with some advantages: the diffusion process is more aware of the gradients of the image, and thus is better at preserving some structures (although texture is not replicated). For this process to work, it is vital to provide a "mask" that indicates which pixels to replace. In this process, the mask should be black for the pixels to replace, and white for the ones you want to keep untouched.
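To be explicit about that mask convention (black = replace, white = keep), here is an illustrative way to build such a mask as an array; the defect coordinates are made up for the example:

    import numpy as np

    height, width = 1000, 1200
    mask = np.ones((height, width), dtype=np.float32)   # white everywhere: keep untouched

    dead_columns = [517]                                 # made-up example values
    hot_pixels = [(120, 344), (803, 77)]                 # (row, column) pairs, also made up
    mask[:, dead_columns] = 0.0                          # black: pixels to be inpainted
    for row, col in hot_pixels:
        mask[row, col] = 0.0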
The "Precondition" parameter is a filter that is applied before the diffusion process, to help it calculate the gradients of the image. Without it, the process may still work, but should need a more carefull fine-tunning of the parameters, and in some cases, introduce wild oscillations that prevents the algorithm to reach a good convergence.
The Edge diffusion parameter is a threshold that sets the amplitude of the smallest true edges of the image. This is the same as the old Edge protection parameter in the previous release of TGVDenoise.
The Noise reduction parameter works like the Weight parameter in TGVDenoise, but only for those pixels that are marked with white in the mask. By default, noise reduction is disabled (set to 0.0).
The number of iterations in this process could be low, to preserve more of the oscillations (and a bit of texture) in your inpainted areas. Increase it to create smoother regions.
In the case of noisy images, you may consider complementing this process with NoiseGenerator, to emulate the texture of the inpainted regions. You may use the same mask provided here, but inverted, for NoiseGenerator.
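As a rough sketch of that last idea, the following adds synthetic Gaussian noise only where the inverted mask selects the inpainted areas; in practice you would match sigma to a measurement of the surrounding background:

    import numpy as np

    def add_noise_in_inpainted_areas(image, inpaint_mask, sigma, seed=0):
        # inpaint_mask follows the TGVInpaint convention: 0 = replaced pixel, 1 = untouched.
        # Inverting it restricts the synthetic noise to the inpainted regions only.
        rng = np.random.default_rng(seed)
        inverted = 1.0 - inpaint_mask
        noise = rng.normal(0.0, sigma, size=image.shape)
        return np.clip(image + inverted * noise, 0.0, 1.0)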
---------------------------------------------------------
- TGVRestoration
This process is the application of the TGV regularizer to deconvolution problems. Most of the parameters work in the same way as in TGVDenoise (see the section Filter Parameters), with a few clarifications:
Weight: In the TGVDenoise process this parameter is related to the strength of the noise reduction: if it is too high, the noise reduction is too strong; if it is too low, no changes are noticeable. In TGVRestoration it is still related to the strength of the noise reduction, but very low values do not return the original image. Instead, the result is a pure, non-regularized deconvolution.
Noise model: Here we have 4 options.
a) Gaussian: This model should work better with planetary images, and other high SNR images.
b) Poisson: This implements the data fidelity term using the Chambolle/Pock primal-dual algorithm. It uses an additive, non-linear term to modify the image at each iteration step.
c) EM - two stages: This is a variation of the Expectation-Maximization algorithm for the Total Variation regularizer, using TGV. The Expectation step is similar to an iteration of the Richardson-Lucy deconvolution, while the Maximization step is an iteration of the TGV noise reduction. Both steps are additive (a schematic sketch of this alternation follows below, after the list of options).
d) Regularized RL: This is a more direct variation of the Richardson-Lucy deconvolution, regularized by a TV function. In this case the regularizer acts as a multiplicative factor.
The latter three options should work fine for deep-sky astronomical images. How TGV has been incorporated differs among them, as does the deringing algorithm used in each case.
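As promised above, here is a schematic sketch of the two-stage alternation (a textbook Richardson-Lucy update plus a trivial smoothing stand-in for the TGV step; the module's actual updates and their additive formulation are different):

    import numpy as np
    from scipy.signal import fftconvolve

    def em_two_stage(observed, psf, n_iterations, denoise_weight=0.1):
        # Schematic only: alternate a deconvolution step with a regularization step.
        psf_flipped = psf[::-1, ::-1]
        u = np.full_like(observed, observed.mean())
        for _ in range(n_iterations):
            # "Expectation"-like step: one Richardson-Lucy update.
            blurred = fftconvolve(u, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-8)
            u = u * fftconvolve(ratio, psf_flipped, mode="same")
            # "Maximization"-like step: one regularization (denoising) step,
            # here a small push toward a locally averaged image as a TGV stand-in.
            smooth = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
            u = (1.0 - denoise_weight) * u + denoise_weight * smooth
        return u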
Deringing: This algorithm is different from the one found in the Deconvolution process. Here it acts as a limit on the changes that are allowed in the deconvolution step (not in the noise reduction step). The deringing local support is usually tied to the Global deringing parameter, with the exception of the Regularized RL algorithm, where they act in a different way. The deringing algorithms implemented here are experimental, so we'd like to hear about your experiences.
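One simple way to picture "limiting the changes allowed in the deconvolution step" is to clamp the per-iteration update, optionally relaxed by a local support image; this is only an illustration of the idea, not the actual deringing algorithms in the module:

    import numpy as np

    def limited_update(u_old, u_new, max_change=0.01, local_support=None):
        # Clamp how much each pixel may change in a single deconvolution step.
        # A local support image (e.g. a star mask in [0,1]) can relax the limit
        # where stronger changes are acceptable.
        limit = np.full_like(u_old, max_change)
        if local_support is not None:
            limit = limit * (1.0 + local_support)
        delta = np.clip(u_new - u_old, -limit, limit)
        return u_old + delta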
---------------------------------------------------------
I hope you like them. Since they are development processes, we are open to suggestions, criticism, and bug reports.
