Hi Harry, Sander and Rob,
Let's take it slowly and start with bias: it can be a negative or positive number. What am I doing by setting it for each layer?
OK, let's keep this à trous thing at bay.
First let's review some important facts about wavelets and wavelet transforms:
- A wavelet layer can be understood as a description of a range of image structures of similar size. We call this a scale, but the word size is more descriptive and sufficiently accurate for our purposes.
- A wavelet transform is a decomposition of an image into several wavelet layers. Such a decomposition allows you to isolate different image structures as a function of their relative sizes. You can think of a decomposition as a classification of image structures by their sizes. In the à trous wavelet transform algorithm, layers are defined following a dyadic scaling sequence: 1 pixel, 2 pixels, 4 pixels, ..., 2^(n-1) pixels.
- A wavelet transform is composed of a set of detail layers and a final residual layer. The residual layer contains all image structures larger than the largest structures in the last detail layer; what remains in the residual layer are only large-scale image features. Usually, when a sufficient number of detail layers is generated, the residual layer just defines the overall illumination pattern of the image being decomposed.
- A very important property of the à trous wavelet transform algorithm is that the inverse transform is just the sum of all the detail layers plus the residual layer. If you decompose an image and then sum all the layers, what you get is exactly the initial image (neglecting small roundoff errors). You can remove one or more detail layers, that is, exclude them from the inverse transform by not summing them. When you do that, the excluded layers also remove their contained structures from the image. We usually do this to isolate certain image features that we are interested in for some particular task. For example, you can implement a good star mask generation procedure very easily by removing the residual and first layers. Here's an example:
http://pixinsight.com/examples/M45-sonnenstein/en.html
Look at Figures 10, 11 and 12, and read the text around them.
- Another very important property of à trous is that the transformation is redundant: each layer is an image whose dimensions are equal to those of the decomposed image. This is important because, among other nice things, it allows us to work very accurately with the same coordinate system over all wavelet layers.
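The facts above can be sketched in a few lines of Python. This is a toy illustration, not PixInsight's actual code: it assumes the B3 spline kernel commonly used for the à trous (starlet) transform, and uses circular boundaries for brevity.

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3 spline smoothing kernel

def smooth(img, step):
    """Separable convolution with the B3 kernel 'with holes' (à trous):
    at scale j the taps are spaced step = 2**j pixels apart."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip(range(-2, 3), B3):
            acc += w * np.roll(out, k * step, axis=axis)
        out = acc
    return out

def atrous_decompose(img, n_layers):
    """Decompose img into n_layers detail layers plus a residual layer."""
    c = img.astype(float)
    details = []
    for j in range(n_layers):
        c_next = smooth(c, 2 ** j)   # dyadic sequence: 1, 2, 4, ... pixels
        details.append(c - c_next)   # structures near scale 2**j pixels
        c = c_next
    return details, c                # c is the residual layer
```

Because each detail layer is just the difference of two successive smoothings, the sum of all detail layers plus the residual telescopes back to the original image exactly, and every layer has the same dimensions as the input (the redundancy mentioned above).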
Now that you know the important facts, let's see what we can do with all this stuff. Quick answer: everything. The multiscale paradigm is a completely new way of understanding image processing as a whole. With wavelets you can implement everything from sharpening to noise reduction, from structure detection to image registration, from texture classification to optical character recognition. Well, almost everything; there are specific tasks that require other, far more complex mathematical structures, such as curvelets, ridgelets and more -lets, but this is another story (which I want to explore in PixInsight in the future, time and health permitting).
Wavelets are almost omnipresent in PixInsight. You'll find them in many important tools such as ATrousWaveletTransform, HDRWaveletTransform, Deconvolution or StarMask, and others where their use is less obvious, such as StarAlignment (star detection and classification), ImageIntegration (noise evaluation), ImageCalibration (dark frame optimization) or LRGBCombination (chrominance noise reduction).
But let's return to the topic. As I've said you can disable one or more layers, which just removes them from the inverse transform. For example, the first wavelet layer usually contains most of the small-scale noise (or high-frequency noise) in the image. This happens because small-scale noise occurs mainly at the scale of one pixel, since it is composed of pixel-to-pixel variations. Hence you can disable the first wavelet layer to remove all the small-scale noise in a single operation. Unfortunately, you often can't do that so naively because sharp edges are also small-scale structures, and are also described at the scale of one pixel.
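To make this concrete: since the first detail layer is d1 = c0 - c1, an inverse transform without d1 leaves just c1, the first smoothed approximation. Here's a toy Python sketch (B3 spline kernel, circular boundaries; an illustration of the idea, not ATW's implementation):

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3 spline smoothing kernel

def smooth(img, step=1):
    """One à trous smoothing pass (separable, circular boundaries)."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip(range(-2, 3), B3):
            acc += w * np.roll(out, k * step, axis=axis)
        out = acc
    return out

rng = np.random.default_rng(1)
noisy = rng.random((64, 64))   # stand-in for a noisy linear image

# d1 = noisy - smooth(noisy); dropping d1 from the inverse transform
# leaves smooth(noisy): the pixel-to-pixel noise is removed in one step.
denoised = smooth(noisy)
```

Of course, this naive version also softens sharp edges, which live at the one-pixel scale too; that is exactly the caveat described above.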
The bias parameter of ATW gives you better control over individual wavelet layers. It allows you to enhance/sharpen (by increasing it) or reduce/blur (by decreasing it) all image structures within a particular wavelet layer. This allows you to change just a set of image features, isolated as a function of their sizes. For example, you usually don't want to increase bias for the first wavelet layer, as doing that would increase the amount of noise in the image horribly. You usually want to increase bias for the second and third layers because they contain little noise and most of the interesting features in the image.
Bias works as a multiplicative factor for a whole wavelet layer. However, instead of implementing it as a simple number that multiplies a layer, I have implemented it as a balance parameter: when bias is zero, no change occurs (as if the layer were multiplied by one). When bias is positive, all the structures in the layer are multiplied by a number greater than one, and hence they will have more weight in the inverse transform. When bias is negative, the layer is multiplied by a positive number less than one. With this system I can isolate you (the user) from some complexities of the internal implementation, and the balance concept is more intuitive IMO.
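As an illustration of this multiplicative behavior, here is a hypothetical mapping from a bias value to a layer multiplier. The actual function PixInsight uses internally may differ; this sketch only reproduces the sign conventions just described:

```python
import numpy as np

def layer_multiplier(bias):
    """Hypothetical bias -> multiplier mapping (sign conventions only):
    bias = 0 -> k = 1 (no change); bias > 0 -> k > 1 (enhance);
    bias < 0 -> 0 < k < 1 (attenuate)."""
    return 1.0 + bias if bias >= 0 else 1.0 / (1.0 - bias)

def apply_layer_bias(img, detail, bias):
    """Re-weight a single detail layer in the inverse transform:
    result = (img - detail) + k * detail = img + (k - 1) * detail."""
    return img + (layer_multiplier(bias) - 1.0) * detail
```

Note that only the chosen layer's contribution to the sum changes; all other layers and the residual pass through untouched.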
The Gibbs phenomenon, AKA ringing, is interesting from a physical and mathematical point of view, but it is one of the worst nightmares in practical image processing. When you increase bias, ringing artifacts will appear, inevitably, around all jump discontinuities in the image, such as bright stars, bright nebular features, etc. We have implemented a very efficient deringing mechanism for ATW to palliate this problem. Here is an example:
The original image:
http://forum-images.pixinsight.com/1.5-preview/ATW-deringing-1.jpg
A relatively strong bias applied to the second wavelet layer, no deringing:
http://forum-images.pixinsight.com/1.5-preview/ATW-deringing-2.jpg
Same bias, deringing enabled:
http://forum-images.pixinsight.com/1.5-preview/ATW-deringing-3.jpg
The same algorithm has been implemented in other tools, such as UnsharpMask and Deconvolution. Here is a nice example with deconvolution:
Original image:
http://forum-images.pixinsight.com/1.5.2-preview/DeconOriginal.jpg
Without deringing:
http://forum-images.pixinsight.com/1.5.2-preview/DeconNoDeringing.jpg
With deringing:
http://forum-images.pixinsight.com/1.5.2-preview/DeconWithDeringing.jpg
The noise reduction algorithm that I have implemented in ATW is also quite efficient. It has a unique feature that is also a big advantage: it is the only noise reduction algorithm that works with linear images. An example:
Raw (linear) RGB composite image, before noise reduction:
http://forum-images.pixinsight.com/1.5-preview/ATW-noise-reduction-1.jpg
After multiscale noise reduction with ATW:
http://forum-images.pixinsight.com/1.5-preview/ATW-noise-reduction-2.jpg
Something that will probably surprise you is that the above noise reduction has been applied without any protection mask. Pay special attention to the preservation of star colors and subtle details on the main galaxy.
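ATW's actual noise reduction is considerably more refined than this, but the core idea of working on per-layer detail coefficients, which applies equally well to linear data because no stretch is assumed, can be sketched with simple per-layer hard thresholding. This is a toy, assuming B3 spline smoothing and a MAD noise estimate, not the real algorithm:

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3 spline smoothing kernel

def smooth(img, step):
    """Separable à trous smoothing pass; taps spaced step pixels apart."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip(range(-2, 3), B3):
            acc += w * np.roll(out, k * step, axis=axis)
        out = acc
    return out

def denoise(img, n_layers=4, k_sigma=3.0):
    """Zero detail coefficients below k_sigma times a robust (MAD)
    per-layer noise estimate, then reconstruct by summation."""
    c = img.astype(float)
    result = np.zeros_like(c)
    for j in range(n_layers):
        c_next = smooth(c, 2 ** j)
        d = c - c_next                          # detail layer j+1
        sigma = 1.4826 * np.median(np.abs(d))   # MAD noise estimate
        d[np.abs(d) < k_sigma * sigma] = 0.0    # keep significant structures
        result += d
        c = c_next
    return result + c                           # add back the residual layer
```

Because each layer gets its own noise estimate, noise is removed at every scale where it is measurable, while coefficients that stand out above the noise, i.e. real structures, survive, which is why no protection mask is needed.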
There are many more fancy things that can be done with the ATW tool, but I think this is OK as a general introduction.