The À Trous Discrete Wavelet Transform In PixInsight
PixInsight LE now includes the full version of the ATrousWaveletTransform process, just as implemented in the standard edition of the application as of the date of release. ATrousWaveletTransform is an extremely rich and flexible processing tool that you can use to perform a wide variety of noise reduction and detail enhancement tasks. The à trous (with holes) discrete wavelet transform algorithm is an elegant and powerful tool for multiscale (multiresolution) analysis of images. For references and background information on this and many other exciting, state-of-the-art image processing techniques, visit Jean-Luc Starck's website.

With ATrousWaveletTransform you can perform a hierarchical decomposition of an image into a series of scale layers, also known as wavelet planes. Each layer contains only structures within a given range of characteristic dimensional scales in the space of a scaling function. The decomposition proceeds through a number of detail layers defined at growing characteristic scales, plus a final residual layer, which contains all remaining unresolved structures. By isolating significant image structures within specific detail layers, detail enhancement can be carried out with high accuracy. Similarly, if noise occurs at specific dimensional scales in the image, as is usually the case, isolating it into the appropriate detail layers lets us reduce or remove it without affecting significant structures.

ATrousWaveletTransform comprises two main sets of parameters, defining the layered decomposition process and the scaling function used for wavelet transforms, respectively. The scaling sequence determines which range of characteristic scales is assigned to each detail layer; in PixInsight LE 1.0 it can be either linear or dyadic.
Scale ranges are expressed in pixels. In general, however, scale ranges in wavelet decompositions shouldn't be interpreted literally, as if a given wavelet layer contained image features measuring exactly the number of pixels indicated. In wavelet processing, scales are relative to the particular scaling function used to perform the wavelet transform. Wavelet scaling functions play an essential role that we describe later in this document.
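The decomposition described above can be sketched in a few lines of code. This is a minimal illustration, assuming dyadic scale growth and a separable B3-spline scaling function; the function and parameter names are illustrative, not PixInsight's internals:

```python
import numpy as np

def _smooth_axis(a, dil, axis):
    """Convolve 1-D slices along `axis` with reflected boundaries."""
    pad = len(dil) // 2
    return np.apply_along_axis(
        lambda s: np.convolve(np.pad(s, pad, mode="reflect"), dil, "valid"),
        axis, a)

def atrous_decompose(image, kernel_1d, n_layers):
    """Split `image` into `n_layers` detail layers plus a residual."""
    smooth = np.asarray(image, dtype=float)
    layers = []
    for j in range(n_layers):
        step = 2 ** j                      # dyadic scale growth
        dil = np.zeros(step * (len(kernel_1d) - 1) + 1)
        dil[::step] = kernel_1d            # insert the "holes"
        nxt = _smooth_axis(_smooth_axis(smooth, dil, 0), dil, 1)
        layers.append(smooth - nxt)        # detail at scale ~2**j pixels
        smooth = nxt
    return layers, smooth                  # residual = last smoothed image

rng = np.random.default_rng(0)
img = rng.random((64, 64))
b3 = np.array([1, 4, 6, 4, 1]) / 16.0      # separable B3-spline kernel row
layers, residual = atrous_decompose(img, b3, 4)
restored = sum(layers) + residual          # inverse transform: a plain sum
print(np.allclose(restored, img))          # True
```

Note that the inverse transform is simply the sum of all detail layers plus the residual; this additivity is what makes editing individual layers (disabling, biasing, thresholding) so straightforward.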
This is the total number of generated detail layers. It does not include the final residual layer, which is always generated. In PixInsight LE 1.0 you can work with up to twelve wavelet detail layers, which allows you to handle structures at very large dimensional scales. Modifying large-scale structures can be very effective when processing many deep-sky images.
Please note that the amount of memory required to perform an à trous wavelet transform increases rapidly as the size of characteristic scales grows. For example, with dyadic sequencing and 12 detail layers, you can work with scales of 2048 pixels. With our present implementation, for a 3000×3000 pixel image and a 3×3 scaling function, for example, such a decomposition is just at the efficiency limit on a system with 1 GB of RAM (i.e., just before requiring additional virtual memory on disk, which slows things down dramatically). Furthermore, take into account that to effectively apply transformations at large dimensional scales, the target image must be sufficiently large. For example, you couldn't meaningfully preview a transformation defined for a scale of 512 pixels on a preview just 300 pixels wide. Our present interface implementation does not prevent you from running out of scale with a wavelet transform. However, when this happens the processed image usually shows enough artifacts to leave no doubt that something is wrong.
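The memory growth mentioned above can be sketched with simple arithmetic. The plane count and 4-byte sample size below are illustrative assumptions, not measurements of the actual implementation:

```python
# Rough memory estimate for an à trous decomposition: each detail layer,
# the residual, and a working smoothed copy held as floating-point planes.
# The plane count and bytes_per_sample are illustrative assumptions.
def atrous_memory_bytes(width, height, n_layers, bytes_per_sample=4):
    planes = n_layers + 2      # n detail layers + residual + working copy
    return width * height * planes * bytes_per_sample

gib = atrous_memory_bytes(3000, 3000, 12) / 2**30
print(f"{gib:.2f} GiB")        # ~0.47 GiB of raw pixel storage alone
```

Add the source image, any masks, and interface overhead, and it is easy to see how such a decomposition approaches the 1 GB figure cited above.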
The following parameters are available on a per-layer basis. Individual layers can be disabled. A disabled layer does not enter the inverse wavelet transform used to generate the final resulting image. By appropriately disabling small-scale detail layers, effective noise reduction can be achieved when the disabled layers don't include significant structures. Another application of disabled layers is to isolate specific characteristic scales in order to extract particular image structures of interest. For example, this can be used effectively to extract stellar objects for the purpose of building star-protection masks.
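Disabling a layer simply means leaving it out of the inverse transform. A minimal 1-D sketch (the helper names are hypothetical; the decomposition is the same à trous scheme described earlier):

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0      # B3-spline scaling function row

def decompose_1d(signal, n_layers):
    """Minimal 1-D à trous decomposition with dyadic scale growth."""
    smooth, layers = np.asarray(signal, float), []
    for j in range(n_layers):
        dil = np.zeros(2 ** j * 4 + 1)
        dil[::2 ** j] = B3                 # dilated ("with holes") kernel
        pad = len(dil) // 2
        nxt = np.convolve(np.pad(smooth, pad, mode="reflect"), dil, "valid")
        layers.append(smooth - nxt)
        smooth = nxt
    return layers, smooth

def reconstruct(layers, residual, enabled):
    # A disabled layer does not enter the inverse transform at all.
    return residual + sum(l for l, on in zip(layers, enabled) if on)

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 6, 256)) + 0.05 * rng.standard_normal(256)
layers, res = decompose_1d(sig, 4)

# Drop the first (noisiest, smallest-scale) layer: basic noise reduction.
denoised = reconstruct(layers, res, enabled=[False, True, True, True])
# Keep only the small-scale layers: the idea behind extracting stars
# for a star-protection mask.
small_scale = reconstruct(layers, np.zeros_like(res),
                          enabled=[True, True, False, False])
```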
This is a real number ranging from −5 to +10. The bias parameter defines a linear, multiplicative factor for a specific layer. Negative bias values decrease the relative weight of the layer in the final processed image; positive bias values give more relevance to the structures contained in the layer.
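As a sketch, a per-layer bias can be modeled as a weight applied to the layer's contribution during reconstruction. Mapping the bias value b to the factor (1 + b) is an assumption made here for illustration; the exact mapping PixInsight uses is not specified in this document:

```python
import numpy as np

def reconstruct_biased(layers, residual, biases):
    """Inverse transform with a per-layer multiplicative weight.

    The (1 + bias) mapping is an illustrative assumption: bias = 0 leaves
    a layer unchanged, bias < 0 attenuates it, bias > 0 boosts it."""
    out = np.asarray(residual, float).copy()
    for layer, bias in zip(layers, biases):
        out += (1.0 + bias) * layer
    return out

rng = np.random.default_rng(2)
layers = [0.1 * rng.standard_normal((8, 8)) for _ in range(3)]
residual = rng.random((8, 8))

plain = reconstruct_biased(layers, residual, [0.0, 0.0, 0.0])
boosted = reconstruct_biased(layers, residual, [-0.3, 0.0, 0.8])
```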
ATrousWaveletTransform implements a multiscale noise reduction mechanism. For each detail layer, specific sets of noise reduction and detail enhancement parameters can be defined and applied simultaneously. This gives great flexibility and accuracy in dealing with both aspects of image processing. On the one hand, noise can be surgically suppressed or reduced, because it can be isolated within the particular dimensional scales where it mainly occurs. Another great advantage is that noise reduction and detail enhancement parameter sets can be mutually optimized, since you actually see how each set interacts with the other.
Considered as a whole, when you use the ATrousWaveletTransform process for detail enhancement, what you are applying is essentially a high-pass filtering process. High-pass filters suffer from the Gibbs effect, which arises because a finite number of frequencies are used to represent a signal discontinuity in the frequency domain. In images, the Gibbs effect appears as dark artifacts generated around bright image features, and bright artifacts around dark features. This is the well-known ringing problem. Ringing is an extremely annoying and hard-to-solve issue in image processing. You have probably experienced this problem as black rings appearing around bright stars after unsharp masking or deconvolution. However, ringing doesn't occur only around stars. In fact, you'll get ringing to some degree wherever a significant edge appears in your image and you enhance it, including borders of nebulae, galaxy arms, and planetary details, for example. In all of these cases, ringing actually generates spurious artifacts as a result of limitations inherent to the numerical processing resources employed. Whether some ringing effects are admissible or not for a particular image is a matter of taste and common sense.

Our ATrousWaveletTransform implementation includes an efficient procedure to fix the ringing problem on a per-layer basis. It can be used for enhancement of any kind of image, including deep-sky and planetary, and can be fully controlled with just a couple of parameters in a highly interactive way:
The problem here is that if you suppress ringing too aggressively, your detail enhancement efforts may be of little or no use, or you may even get somewhat artificial results. On the other hand, too low a deringing amount, or too high a deringing threshold, will not fix ringing properly. As a general rule of thumb, start with intermediate values: amount = 0.5 and threshold = 0.05 (the default value), and see what happens. If nothing happens, try reducing the threshold; for example, halve its previous value (0.025). You may need a very low threshold setting for some images. When you find a threshold value where the deringing benefits become obvious, try tuning the deringing amount parameter. Don't rule out having to readjust your layer biases once deringing starts to play a role.
With PixInsight's ATrousWaveletTransform process, you can specify the scaling function used for decomposition with the à trous discrete wavelet transform algorithm. The scaling function plays an essential role in advanced wavelet multiscale image processing. By appropriately tuning the shape and values of the scaling function, you gain full control over how finely the different dimensional scales are separated. In general, a smooth, slowly varying scaling function works well to isolate large scales, but it may not provide enough resolution to decompose images at smaller characteristic scales. Conversely, a sharp, peaked scaling function may be very good at isolating small-scale image features such as high-frequency noise, faint stars, or tiny planetary and lunar details, but it will quite likely be useless at larger scales, such as the global shape of a galaxy or large Milky Way structures.

A scaling function in PixInsight is defined as a kernel filter, that is, a square grid where discrete filter values are specified as single numeric elements. The à trous algorithm requires the scaling function to be a nonzero low-pass filter. However, PixInsight's interface does not impose specific limits in this or other regards, with the purpose of giving you full freedom for experimentation and fun. In PixInsight LE 1.0, à trous wavelet scaling functions are defined as odd-sized square kernels. Filter elements are real numbers. The most usual scaling functions are defined as 3×3 or 5×5 kernels.
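The two kinds of kernels just contrasted can be written down directly. The B3-spline kernel below is a classic choice in à trous wavelet processing; the 3×3 kernel is a common sharper alternative:

```python
import numpy as np

# The separable B3-spline row [1, 4, 6, 4, 1] / 16 gives a smooth,
# slowly varying 5x5 scaling function, well suited to larger scales.
b3_row = np.array([1, 4, 6, 4, 1]) / 16.0
b3_5x5 = np.outer(b3_row, b3_row)

# A sharper, more peaked 3x3 kernel favors small-scale separation.
linear_row = np.array([1, 2, 1]) / 4.0
linear_3x3 = np.outer(linear_row, linear_row)

# Both are nonzero low-pass filters: odd-sized square kernels whose
# elements sum to 1, so they preserve the mean image brightness.
print(b3_5x5.shape, round(b3_5x5.sum(), 6))        # (5, 5) 1.0
print(linear_3x3.shape, round(linear_3x3.sum(), 6))  # (3, 3) 1.0
```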
Noise thresholding is a brute-force, naive technique to reduce noise in images. It works by setting to zero (or to a conveniently low value) all pixels falling below a limit given in terms of a previously measured noise level. This thresholding noise level is derived from sigma (standard deviation) values through an iterative calculation, assuming a Gaussian noise distribution over the whole image. However, if we apply noise thresholding to individual, small-scale detail layers after a wavelet decomposition, the procedure becomes much less naive. The reason is that sigma-based noise level estimates are much more likely to characterize true noise if we constrain their calculation to a small range of high-frequency image structures. When activated, noise thresholding is applied to the first four detail layers. This technique will work properly (well, just as intended, actually) if you select the dyadic layering sequence.
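A minimal sketch of iterative k-sigma estimation and hard thresholding on one detail layer, assuming roughly Gaussian noise. The constants (k = 3, the tolerance, the iteration cap) are illustrative choices, not PixInsight's actual values:

```python
import numpy as np

def ksigma_threshold(layer, k=3.0, tol=1e-4, max_iter=20):
    """Estimate the noise sigma iteratively, then zero small coefficients.

    Illustrative constants; assumes approximately Gaussian noise."""
    data, sigma = layer.ravel(), layer.std()
    for _ in range(max_iter):
        # Reject likely "signal" pixels and re-estimate sigma from the rest.
        clipped = data[np.abs(data) < k * sigma]
        new_sigma = clipped.std()
        if abs(new_sigma - sigma) < tol * sigma:
            break
        sigma = new_sigma
    out = layer.copy()
    out[np.abs(out) < k * sigma] = 0.0   # hard threshold at the noise level
    return out, sigma

rng = np.random.default_rng(3)
layer = 0.01 * rng.standard_normal((128, 128))   # a pure-noise detail layer
layer[64, 64] = 0.5                              # one significant structure
cleaned, sigma = ksigma_threshold(layer)
print(np.count_nonzero(cleaned))   # only a handful of pixels survive
```

The significant structure at (64, 64) survives intact, while nearly all noise coefficients are zeroed; this is exactly why restricting the sigma estimate to a small-scale layer makes the estimate more trustworthy.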
In general, the layered multiscale noise reduction techniques included in our implementation offer much more efficient and flexible noise reduction than noise thresholding. However, this technique has other applications falling outside the noise reduction field, especially in mask generation.
When a nonzero bias parameter is applied to any layer in the ATrousWaveletTransform process, it increments or decrements (depending on the bias sign) the relative importance of that layer (i.e., of the features defined within that layer) in the final processed image. This biasing procedure is a quite intuitive and efficient way of enhancing image structures at selected scales. However, a side effect of the biasing process is that some bright or dark image features may easily become saturated as pure white or black, respectively. Of course, this may be objectionable in most cases. Why do some image features get saturated? Because in order to improve detail while keeping the overall brightness of the image unchanged, contrast must necessarily be increased. This forces some areas to reach the upper or lower limits of the available dynamic range. How extensive these saturated areas are depends on which dimensional scales have been enhanced. Dynamic range extension addresses this: it works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result.
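The rescaling idea can be sketched as follows. Instead of clipping a biased result that overshoots [0,1], the actual (optionally extended) range of the data is mapped back into [0,1]. The function name and the extension parameters are illustrative, not PixInsight's:

```python
import numpy as np

def rescale_extended(image, low_extension=0.0, high_extension=0.0):
    """Map the data range, extended below 0 and above 1, back into [0,1].

    Illustrative sketch: with zero extensions, whatever range the biased
    result actually occupies is rescaled so nothing is clipped."""
    low = min(image.min(), 0.0) - low_extension
    high = max(image.max(), 1.0) + high_extension
    return (image - low) / (high - low)

# A biased result overshooting the [0,1] standard range on both ends:
out = np.array([-0.2, 0.0, 0.5, 1.3])
print(rescale_extended(out))   # all values back inside [0,1], unclipped
```

Saturated features are thereby recovered as very bright or very dark, but still distinguishable, values, at the cost of a slight overall contrast reduction.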