The À Trous Discrete Wavelet Transform In PixInsight




Global Layering Parameters

Scaling Sequence

Linear Sequence

Dyadic Sequence

Number of Layers

Working with extremely large characteristic scales

Individual Layer Parameters

Enabled/Disabled State

Bias

Noise Reduction Parameters

Noise Reduction Amount

Number of Noise Reduction Iterations

Smoothing Filter Kernel Size

Deringing Parameters

Deringing Amount

Deringing Threshold

Wavelet Scaling Function

Noise Thresholding Parameters

Noise Threshold

Thresholding Amount

Dynamic Range Extension Parameters

Low Dynamic Range Extension

High Dynamic Range Extension


PixInsight LE now includes the full version of the ATrousWaveletTransform process, just as implemented in the standard edition of the application as of the date of release. ATrousWaveletTransform is an extremely rich and flexible processing tool that you can use to perform a wide variety of noise reduction and detail enhancement tasks.

The à trous (with holes) algorithm of discrete wavelet transform is an elegant and powerful tool for multiscale (multiresolution) analysis of images. For references and background information on this and many other exciting, state-of-the-art image processing techniques, visit Jean-Luc Starck's website.

With ATrousWaveletTransform you can perform a hierarchical decomposition of an image into a series of scale layers, also known as wavelet planes. Each layer contains only structures within a given range of characteristic dimensional scales in the space of a scaling function. The decomposition is carried out across a number of detail layers defined at growing characteristic scales, plus a final residual layer, which contains all remaining unresolved structures.

By isolating significant image structures within specific detail layers, detail enhancement can be carried out with high accuracy. Similarly, if noise occurs at some specific dimensional scales in the image, as is usual in most cases, by isolating it into appropriate detail layers we can reduce or remove it without affecting significant structures.

ATrousWaveletTransform comprises two main sets of parameters to define the layered decomposition process and the scaling function used for wavelet transforms, respectively.


Global Layering Parameters


Scaling Sequence

This defines which range of characteristic scales is assigned to each detail layer. In PixInsight LE 1.0, the scaling sequence parameter can be either linear or dyadic:


Linear Sequence

When selected as Linear, the Scaling Sequence parameter is the constant difference in pixels between characteristic scales of two successive detail layers. Linear sequencing can be defined from one to sixteen pixels. For example, when Linear 1 is selected, detail layers are generated for the scaling sequence 1, 2, 3, ... Similarly, Linear 5 would generate the sequence 1, 6, 11, ...


Dyadic Sequence

Detail layers are generated for a growing scaling sequence of powers of two: scales of 1, 2, 4, 8... pixels. For example, the fourth layer contains structures with characteristic scales between 5 and 8 pixels. This sequencing style should be selected if noise thresholding is being used.

Scale ranges are expressed in pixels. In general, however, scale ranges in wavelet decompositions shouldn't be interpreted literally, as if a given wavelet layer contained image features measuring exactly the number of pixels indicated. In wavelet processing, scales are relative to the particular scaling function used to perform a wavelet transform. Wavelet scaling functions play an essential role that we describe later in this document.
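For readers who want to see the algorithm itself, here is a minimal sketch in Python (numpy and scipy) of an à trous decomposition with a dyadic scaling sequence, using the common B3 spline scaling function. It illustrates the general scheme only and is not PixInsight's internal implementation.

    import numpy as np
    from scipy.ndimage import convolve

    # Separable 1-D B3 spline scaling function; its outer product gives
    # the usual 5x5 low-pass kernel.
    B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    KERNEL = np.outer(B3, B3)

    def atrous_dilate(kernel, step):
        """Insert 'holes' (zeros) between kernel elements so the kernel
        acts at a larger characteristic scale."""
        if step == 1:
            return kernel
        n = kernel.shape[0]
        dilated = np.zeros(((n - 1) * step + 1,) * 2)
        dilated[::step, ::step] = kernel
        return dilated

    def atrous_decompose(image, n_layers=6):
        """Dyadic a trous decomposition: detail layers for scales
        1, 2, 4, ... pixels, plus the final residual layer."""
        layers = []
        smooth = np.asarray(image, dtype=float)
        for j in range(n_layers):
            next_smooth = convolve(smooth, atrous_dilate(KERNEL, 2 ** j),
                                   mode='reflect')
            layers.append(smooth - next_smooth)   # detail layer j
            smooth = next_smooth
        layers.append(smooth)                     # residual layer
        return layers

Summing all detail layers plus the residual reproduces the original image exactly, which is what makes the per-layer processing described in the rest of this document possible.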

[figure: original image | scale of 1 pixel | scale of 2 pixels | scale of 4 pixels | scale of 8 pixels | scale of 16 pixels | scale of 32 pixels | residual layer]

An example of multiscale decomposition with the à trous discrete wavelet transform algorithm. Six detail layers have been generated with a dyadic scaling sequence. The original image is shown first. Detail layers at increasing characteristic scales contain larger image structures. At the end of the sequence is the residual layer, which contains all of the remaining unresolved image structures.



Number of Layers

This is the total number of generated detail layers. This number does not include the final residual layer, which is always generated. In PixInsight LE 1.0, you can work with up to twelve wavelet detail layers, which allows you to handle structures at very large dimensional scales. Modifying large-scale structures is often useful when processing deep-sky images.


Working with extremely large characteristic scales

Please note that the amount of memory required to perform an à trous wavelet transform increases rapidly as the size of characteristic scales grows. For example, with dyadic sequencing and 12 detail layers, you can work with scales of 2048 pixels. With our present implementation, for a 3000×3000 pixel image and a 3×3 scaling function, for example, such a decomposition is just at the efficiency limit on a system with 1 GB of RAM (i.e., just before requiring additional virtual memory on disk, which slows things down dramatically).

Furthermore, take into account that to effectively apply transformations at large dimensional scales, the target image must be sufficiently large. For example, you probably couldn't preview a transformation defined for a scale of 512 pixels on a preview just 300 pixels wide.

Our present interface implementation does not prevent you from running out of scale with a wavelet transform. However, when this happens the processed image usually shows enough artifacts to leave no doubt that something is wrong.

Running a wavelet transform out of scale occurs when trying to modify a nonexistent dimensional scale. Below is the result of enhancing the scale of 512 pixels on an image of just 245×400 pixels.

[figure: original image | scale of 512 pixels enhanced]



Individual Layer Parameters

The following parameters are available on a per-layer basis:


Enabled/Disabled State

Individual layers can be disabled. A disabled layer does not enter the reverse wavelet transform used to generate the final resulting image. By appropriately disabling small-scale detail layers, an effective noise reduction can be achieved when the disabled layers don't include significant structures.

Another application of disabled layers is to isolate specific characteristic scales in order to extract some particular image structures of interest. For example, this can be effectively used to extract stellar objects with the purpose of building star-protection masks.
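Since the original image is simply the sum of all detail layers plus the residual, disabling a layer amounts to leaving it out of that sum. Below is a minimal sketch continuing from the atrous_decompose example above; the enabled list is a hypothetical stand-in for the per-layer checkboxes in the interface.

    def reconstruct(layers, enabled=None):
        """Inverse transform: sum the enabled detail layers plus the
        residual layer (always included here)."""
        if enabled is None:
            enabled = [True] * (len(layers) - 1)
        result = layers[-1].copy()                 # residual layer
        for layer, on in zip(layers[:-1], enabled):
            if on:
                result = result + layer
        return result

    # High-frequency noise reduction by disabling the two smallest scales:
    # result = reconstruct(layers, [False, False, True, True, True, True])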

[mouseover: original image]
[mouseover: detail layers of 1 and 2 pixels disabled]
[mouseover: previous image after SGBNR noise reduction]
[mouseover: previous image after histograms adjustment]

An example of high-frequency noise reduction by layer disabling. The image, shown here zoomed 2:1, is a small crop of the average integration of four one-hour exposures on Kodak E200 film pushed +2.

By selecting an appropriate wavelet scaling function, and disabling detail layers corresponding to the smallest characteristic scales, high-frequency noise has been removed and significant structures have been preserved. Subsequent noise reduction at larger scales can be done efficiently with the SGBNR algorithm.


Bias

This is a real number ranging from –5 to +10. The bias parameter value defines a linear, multiplicative factor for a specific layer. Negative biases decrease the relative weight of the layer in the final processed image. Positive bias values give more relevance to the structures contained in the layer.
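The bias can be thought of as a per-layer weight applied during the inverse transform. The exact mapping from the interface's bias value to a multiplier is internal to PixInsight and is not reproduced here; the sketch below only illustrates the idea of weighting detail layers before summation, and the weights list is hypothetical.

    # Hypothetical per-layer weights: attenuate the 1-pixel layer slightly
    # and boost the 2-pixel layer; a weight of 1.0 leaves a layer unchanged.
    weights = [0.8, 2.0, 1.0, 1.0, 1.0, 1.0]

    def reconstruct_weighted(layers, weights):
        """Weighted inverse transform: multiply each detail layer by its
        weight before adding it to the residual layer."""
        result = layers[-1].copy()
        for layer, w in zip(layers[:-1], weights):
            result = result + w * layer
        return result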

[figure: original image | scale of 1 pixel biased +2.0 | scale of 1 pixel biased +2.0 and scale of 2 pixels biased +0.5]

Using the bias parameter to improve small-scale structures.



Noise Reduction Parameters

ATrousWaveletTransform implements a multiscale noise reduction mechanism. For each detail layer, specific sets of noise reduction and detail enhancement parameters can be defined and applied simultaneously.

This gives great flexibility and accuracy in dealing with both aspects of image processing. On one hand, noise can be surgically suppressed or reduced, because it can be isolated within the particular dimensional scales where it mainly occurs. Another great advantage is that noise reduction and detail enhancement parameter sets can be mutually optimized, since you actually see how each set interacts with the other.


Noise Reduction Amount

When this parameter is nonzero, a special smoothing process is applied to the layer's contents after biasing. The noise reduction amount parameter controls how much of this smoothing is used. As a general rule, consider applying two or more iterations of a less aggressive noise reduction instead of a single iteration with a high amount value. See the Number of Noise Reduction Iterations parameter.


Number of Noise Reduction Iterations

Multiscale noise reduction can be defined as a recursive filtering process when the Noise Reduction Amount parameter is less than one and more than one iteration is selected. This parameter governs how many smoothing iterations are applied. Extensive experimentation is always advisable, but recursive filtering with two, three or four iterations and a relatively low amount value is generally preferable to trying to achieve the whole noise reduction goal with a single, brute-force iteration.


Smoothing Filter Kernel Size

This is an odd number defining the side length, in pixels, of the square kernel used for per-layer filtering. When using this parameter, bear in mind that larger kernel sizes don't necessarily mean stronger or more efficient noise reduction. This is because the filter kernel size here relates to the scale of the noise rather than to smoothing strength. Kernel sizes of 3 and 5 pixels work well for most images.
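As a sketch of how these three parameters interact, here is a simple recursive box-filter smoothing of a single detail layer, written in Python with numpy and scipy. It illustrates the general scheme only; PixInsight's actual smoothing filter may differ.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def smooth_layer(layer, amount=0.33, iterations=3, kernel_size=3):
        """Per-layer noise reduction: blend the layer with a box-smoothed
        copy of itself, repeating for the requested number of iterations.
        amount = 0 leaves the layer untouched; amount = 1 replaces it with
        the smoothed copy at each iteration."""
        out = np.asarray(layer, dtype=float)
        for _ in range(iterations):
            smoothed = uniform_filter(out, size=kernel_size, mode='reflect')
            out = (1.0 - amount) * out + amount * smoothed
        return out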

[mouseover: original image]
[mouseover: small scales enhanced, no noise reduction]
[mouseover: same enhancement plus noise reduction]

A multiscale noise reduction example. Both detail enhancement and noise reduction tasks have been efficiently carried out in a single wavelet processing operation. This is possible because the corresponding parameter sets can be fine-tuned and mutually adapted with the interface resources provided by PixInsight.

The following noise reduction parameters have been used for the scales of 1 and 2 pixels:

• Amount = 0.33
• 3 iterations
• 3×3 smoothing filter



Deringing Parameters

Considered as a whole, when you use the ATrousWaveletTransform process for detail enhancement, what you are applying is essentially a high-pass filtering process. High-pass filters suffer from the Gibbs effect, which arises because a finite number of frequency components is used to represent a discontinuity in the signal.

On images, the Gibbs effect appears as dark artifacts generated around bright image features, and bright artifacts around dark features. This is the well-known ringing problem. Ringing is an extremely annoying and hard-to-solve issue in image processing. You have probably experienced this problem as black rings appearing around bright stars after unsharp masking or deconvolution.

However, ringing doesn't occur only around stars. In fact, you'll get ringing to some degree wherever a significant edge appears in your image and you enhance it, including borders of nebulae, galaxy arms, and planetary details, for example. In all of these cases, ringing generates erroneous artifacts as a result of limitations inherent in the numerical processing techniques employed. Whether some ringing effects are admissible or not for a particular image is a matter of taste and common sense.

Our ATrousWaveletTransform implementation includes an efficient procedure to fix the ringing problem on a per-layer basis. It can be used for enhancement of any kind of images, including deep-sky and planetary, and can be fully controlled with just a couple of parameters in a highly interactive way:


Deringing Amount

This is a real number in the range from zero to one. A zero deringing amount disables the deringing feature. A value of one applies deringing at its maximum strength.


Deringing Threshold

With this parameter, you tell ATrousWaveletTransform when to start applying the deringing algorithm. A zero threshold value is the most aggressive setting and allows no ringing effect at all. Nonzero deringing threshold values are more permissive. This is a critical parameter. One must try to find a reasonable balance that allows good contrast enhancement without noticeable ringing artifacts.

The problem here is that if you avoid ringing too aggressively, your detail enhancement efforts may be of little or no use, or you may even get somewhat artificial results. On the other hand, too low a deringing amount, or too high a deringing threshold, will not fix ringing properly.

As a general rule of thumb, start with intermediate values: amount = 0.5 and threshold = 0.05 (the default value), and see what happens. If nothing happens, try reducing the threshold; for example, halve its previous value (0.025). Some images may need a very low threshold setting. When you find a threshold value where the deringing benefits become obvious, try tuning the deringing amount parameter. Don't rule out having to readjust your layer biases once deringing comes into play.

[mouseover: original image]
[mouseover: enhanced, no deringing]
[mouseover: same enhancement plus deringing]
[mouseover: previous image + noise reduction with SGBNR + histogram stretching]

A deringing example. Without applying the deringing algorithm a lot of dark artifacts appear. Note that the ringing problem is not only present around bright stars, where it is indeed conspicuous, but also on extended areas surrounding nonstellar objects and bright regions of galaxy arms. PixInsight's implementation of à trous wavelet processing fixes this problem, allowing for full ring-free detail enhancement of deep-sky images.

Original M101 image by Volker Wendel and Bernd Flach-Wilken



Wavelet Scaling Function

With PixInsight's ATrousWaveletTransform process, you can specify the scaling function used for decomposition with the à trous discrete wavelet transform algorithm. The scaling function plays an essential role in advanced wavelet multiscale image processing. By appropriately tuning the shape and levels of the scaling function, you gain full control over how finely the different dimensional scales are separated.

In general, a smooth, slowly varying scaling function works well to isolate large scales, but it may not provide enough resolution to decompose images at smaller characteristic scales. Conversely, a sharply peaked scaling function may be very good at isolating small-scale image features such as high-frequency noise, faint stars, or tiny planetary and lunar details, but quite likely it will be useless for working at larger scales, such as the global shape of a galaxy or large Milky Way structures.

A scaling function in PixInsight is defined as a kernel filter, that is, a square grid where discrete filter values are specified as single numeric elements. The à trous algorithm requires the scaling function to be a nonzero low-pass filter. However, PixInsight's interface does not impose specific limits in this and other regards with the purpose of giving you full freedom for experimentation and fun.

In PixInsight LE 1.0, à trous wavelet scaling functions are defined as odd-sized square kernels. Filter elements are real numbers. Most usual scaling functions are defined as 3×3 or 5×5 kernels.
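As a quick check of the values tabulated below, the two smoother default kernels are separable: each is the outer product of a 1-D low-pass filter with itself. Here is a short illustration written in Python with numpy.

    import numpy as np

    b3_1d  = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3 spline
    lin_1d = np.array([1, 2, 1], dtype=float) / 4.0          # linear interpolation

    b3_spline_5x5 = np.outer(b3_1d, b3_1d)    # reproduces the 5x5 B3 Spline table
    linear_3x3    = np.outer(lin_1d, lin_1d)  # reproduces the 3x3 Linear Interpolation table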

The scale of five pixels, as defined by the à trous discrete wavelet transform algorithm, using different scaling functions and a linear scaling sequence with a step of one pixel.

The functions used in this example are the default set of predefined wavelet scaling functions that PixInsight LE 1.0 generates automatically upon installation. For each example below, the corresponding kernel filter has also been included.

On the bottom row, from left to right, scaling functions are in decreasing order of smoothness. B3 Spline is a very smooth function, well suited for isolation of large image structures. Linear Interpolation is a good compromise to work at both small and large characteristic scales. Low-Scale is a sharply peaked function that does a good job isolating very small structures.

As this example shows, dimensional scales in wavelet transforms are relative to a particular scaling function, and in general they should not be taken literally, as if they referred to the actual sizes in pixels of existing image features.

[figure: original image | 5×5 B3 Spline | 3×3 Linear Interpolation | 3×3 Low-Scale]

5×5 B3 Spline:

    1/256   1/64    3/128   1/64    1/256
    1/64    1/16    3/32    1/16    1/64
    3/128   3/32    9/64    3/32    3/128
    1/64    1/16    3/32    1/16    1/64
    1/256   1/64    3/128   1/64    1/256

3×3 Linear Interpolation:

    1/16    1/8     1/16
    1/8     1/4     1/8
    1/16    1/8     1/16

3×3 Low-Scale:

    1/16    1/8     1/16
    1/8     10      1/8
    1/16    1/8     1/16



Noise Thresholding Parameters

Noise thresholding is a brute force, naive technique to reduce noise in images. It works by setting to zero (or to a conveniently low value) all pixels falling below a limit given in terms of a previously measured noise level. This thresholding noise level is derived from sigma (standard deviation) values through an iterative calculation, assuming a Gaussian distribution of noise over the whole image.

However, if we apply noise thresholding to some individual, small-scale detail layers after wavelet decomposition, the procedure becomes not so naive. The reason for this enhancement in cleverness is that sigma-based noise level estimates are much more likely to characterize true noise if we constrain their calculation to a small range of high-frequency image structures.

When activated, noise thresholding is applied to the first four detail layers. This technique will work properly (well, just as intended, actually) if you select the dyadic layering sequence.


Noise Threshold

This parameter is given in sigma units, that is, in terms of the standard deviation measured on each detail layer. Usual values are from 1 to 3 sigma, but you can play in the range from 0.1 to 10 sigma. Of course, higher threshold values will remove more image structures.


Thresholding Amount

The amount parameter modulates noise thresholding. It works by multiplying all pixels below the specified threshold value by (1 - amount). For example, an amount value of 1 multiplies by zero, and amount = 0.25 multiplies by 0.75. A zero amount disables noise thresholding.
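Below is a minimal sketch of sigma-based thresholding applied to one detail layer, using the plain standard deviation as the noise estimate; PixInsight derives its estimate through an iterative calculation, which is not reproduced here.

    import numpy as np

    def threshold_layer(layer, k=3.0, amount=1.0):
        """Multiply wavelet coefficients whose magnitude falls below
        k * sigma by (1 - amount); amount = 1 sets them to zero."""
        out = np.asarray(layer, dtype=float).copy()
        sigma = out.std()
        below = np.abs(out) < k * sigma
        out[below] *= (1.0 - amount)
        return out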

In general, the layered multiscale noise reduction techniques included in our implementation offer much more efficient and flexible noise reduction than noise thresholding. However, this technique has other applications falling outside the noise reduction field, especially in mask generation.

[mouseover: original image]
[mouseover: 5 sigma noise thresholding + suppression of large-scale wavelet layers]
[mouseover: final mask image after additional blurring, stretching, and inversion]

An example of noise thresholding applied to mask generation.

A severe noise threshold of 5 sigma, and the suppression of large-scale detail layers from the à trous wavelet transform, have been applied to isolate stellar objects.

Large structures, such as the whole M82 galaxy, have been completely removed by layer disabling. Noise thresholding has been used to remove very small structures while keeping small-scale detail layers enabled.

Original M82 image by Volker Wendel and Bernd Flach-Wilken



Dynamic Range Extension Parameters

When a nonzero bias parameter is applied to any layer in the ATrousWaveletTransform process, it increments or decrements (depending on the bias sign) the relative importance of that layer (i.e., of the features defined within that layer) in the final processed image.

This biasing procedure is a quite intuitive and efficient way of enhancing image structures at selected scales. However, a side effect of the biasing process is that some bright or dark image features may easily become saturated as pure white or black, respectively. Of course, this can be objectionable in most cases.

Why do some image features get saturated? Because in order to enhance details while keeping the overall brightness of the image, contrast must necessarily be increased. This forces some areas to reach the upper or lower limits of the available dynamic range. How extensive these saturated areas are depends on which dimensional scales have been enhanced.

Dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result.


Low Dynamic Range Extension
High Dynamic Range Extension

You can control both the low and high range extension values independently. If you increase the high range extension parameter, the final image will be globally darker, but fewer white-saturated pixels will occur. Conversely, if you increase the low range extension parameter, the final image will be brighter, but it will have fewer black-saturated pixels.

Any of these parameters can be set to zero (the default setting) to disable extension at the corresponding end of the dynamic range.
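One plausible reading of this rescaling, written as a sketch in Python with numpy (the exact formula used by PixInsight is not documented here): with a low extension l and a high extension h, processed values nominally spanning [-l, 1 + h] are mapped linearly back to [0, 1] instead of being clipped.

    import numpy as np

    def extend_dynamic_range(image, low=0.0, high=0.0):
        """Rescale values from the extended range [-low, 1 + high] to [0, 1].
        With low = high = 0 this reduces to a plain clip."""
        out = (np.asarray(image, dtype=float) + low) / (1.0 + low + high)
        return np.clip(out, 0.0, 1.0)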

[mouseover: original image]
[mouseover: small-scale enhancement, no dynamic range extension]
[mouseover: same enhancement, high dynamic range extension = 0.4]
[mouseover: same as before, after midtones adjustment with HistogramsTransform]

In this example, enhancement of small-scale structures causes saturation of many bright areas. As a result of that saturation, some details have been lost in the highlights. High dynamic range extension fixes this problem. A subsequent midtones adjustment can be used to match the overall brightness to the original, if necessary.


