Dynamic Range and Local Contrast
Tutorial by Vicent Peris (PTeam/OAUV/CAHA)
Introduction
In this tutorial I describe some techniques I've been using on my latest images. The main goal of these techniques is to control the dynamic range and local contrast of an image of NGC 7023 (Iris nebula) acquired with the 1.23 meter Zeiss telescope at Calar Alto Observatory, in Southern Spain. This image is part of the Descubre/CAHA/OAUV/DSA collaboration. Data acquisition and processing have been carried out by Jack Harvey (SSRO), Steven Mazlin (SSRO) and Vicent Peris (OAUV).
Controlling the dynamic range of an image means equalizing the contrast of all the structures throughout all dimensional scales. When this is achieved, no structure shows an excess of contrast compared to the rest of the structures in the image. To implement this task we will mainly use four PixInsight tools: HDRWaveletTransform, LocalHistogramEqualization, GradientsHDRCompression, and the MaskedStretch script.
First Dynamic Range Compression
We start with an already stretched (nonlinear) image. The midtones of the image have been adjusted to make the details in the darker clouds clearly visible. As can be seen in Figure 1, this enhancement of the midtones has left the center of the nebula almost saturated.
The first step is obvious: to compress the dynamic range of the image with the HDRWaveletTransform tool (HDRWT). As we have a lot of fine detail, we use only five wavelet layers. We activate the luminance mask option to restrict application of the HDRWT algorithm to the most illuminated areas. The result can be seen in Figure 2.
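The masked application of a dynamic range compression can be sketched in a few lines of NumPy. This is not the HDRWT algorithm itself, which operates on wavelet layers; the sketch below only illustrates the general idea of flattening the large-scale component of an image and blending the result through a luminance mask, so that the most illuminated areas receive the strongest effect. All function names and parameter values here are illustrative, not taken from PixInsight.

```python
import numpy as np

def blur(img, passes=8):
    # repeated 3x3 box smoothing as a crude low-pass (large-scale) layer;
    # wrap-around edge handling is acceptable for this sketch
    out = img.astype(float)
    for _ in range(passes):
        out = sum(np.roll(np.roll(out, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def compress_large_scales(img, amount=0.7):
    # flatten the large-scale component toward its median, keep small-scale detail
    large = blur(img)
    small = img - large
    flat = amount * np.median(large) + (1 - amount) * large
    return np.clip(flat + small, 0.0, 1.0)

rng = np.random.default_rng(0)
img = np.clip(rng.random((64, 64)) * 0.2
              + np.linspace(0.0, 0.8, 64)[None, :], 0.0, 1.0)

lum_mask = blur(img)  # luminance mask: brighter areas get a stronger effect
hdr = img + lum_mask * (compress_large_scales(img) - img)
```

The last line is the generic masked-blend formula: where the mask is near zero the original pixel survives, and where it is near one the compressed pixel takes over.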
High-Contrast, Small-Scale Structures
One problem inherent to all techniques employing kernel filters as scaling functions is that they cannot model high-contrast structures at very small dimensional scales. As a result of this limitation, the dynamic range of these structures remains uncompressed. This problem becomes evident for the stars, as you can see in Figure 3.
The MaskedStretch script has been written by David Serrano, with contributions from Andrés del Pozo, based on an original algorithm designed by Carlos Sonnenstein. The algorithm performs successive masked histogram transformations until it reaches a prescribed average background value, which can be specified as the target median parameter. The applied masks are generated dynamically at each iteration. One of the advantages of this algorithm is that it doesn't rely on any scaling function. As a result, it does an extremely good job at compressing the dynamic range of the high-contrast, small-scale structures that can't be modeled with multiscale techniques.
For the NGC 7023 image, we run the MaskedStretch script (MS) on the linear (unstretched) image and set its target median parameter to the median value of the stretched image after HDRWT (0.43 in this case). In this way we can achieve similar brightness and contrast for the whole image with both techniques. It is important to point out that we don't select any mask blurring method in the MS script, as that would alter the stars on the mask and hence would reduce the effectiveness of this technique for dynamic range compression. The applied settings can be seen in Figure 4, and the resulting image in figures 5 and 6.
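The core idea of the algorithm, successive masked midtones stretches iterated until the image median reaches the target, can be sketched as follows. This is an illustrative reconstruction, not the actual script by David Serrano: the choice of the midtones transfer function as the per-iteration stretch and the `midtones` strength value are assumptions made for the sketch.

```python
import numpy as np

def mtf(x, m):
    # midtones transfer function: mtf(0)=0, mtf(m)=0.5, mtf(1)=1;
    # m < 0.5 brightens the image
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def masked_stretch(img, target_median=0.43, midtones=0.35, max_iter=200):
    out = img.astype(float)
    for _ in range(max_iter):
        if np.median(out) >= target_median:
            break
        mask = out                       # regenerated at each iteration, unblurred
        stretched = mtf(out, midtones)   # gentle nonlinear stretch
        # bright pixels (mask near 1) are protected; shadows get the stretch
        out = mask * out + (1.0 - mask) * stretched
    return out

rng = np.random.default_rng(1)
linear = rng.random((32, 32)) * 0.1      # stand-in for the linear image
result = masked_stretch(linear)
```

Because the mask protects pixels in proportion to their current brightness, the bright cores are stretched far less than the shadows, which is exactly why the stars keep a compressed dynamic range.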
The overall small-scale contrast of the image processed with the MaskedStretch algorithm is very low, as evidenced by Figure 5. We only want to use this result on the regions that can actually benefit from it: those regions where HDRWT does not work properly. So if we use a mask to combine the MaskedStretch result with the nonlinear image after HDRWT, we can get the best of both techniques.
To build this combining mask we need to know the answer to the following question: where are the largest differences between the HDRWT and MaskedStretch processed images? In other words, we are going to calculate the difference between both images, so the mask will protect the areas where both images are similar. This can be done with the PixelMath tool. We operate with the lightness components (CIE L*) of both images. The result of MaskedStretch will be subtracted from the HDRWT image. The resulting mask can be seen in figures 7 and 8.
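This difference mask can be sketched with NumPy. To keep the sketch self-contained, a Rec. 709 luma stands in for the CIE L* component (the real conversion goes through CIE XYZ); the images here are synthetic placeholders.

```python
import numpy as np

def lightness(rgb):
    # Rec. 709 luma as a self-contained stand-in for CIE L*
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

rng = np.random.default_rng(2)
hdrwt = rng.random((32, 32, 3))                                 # HDRWT result
ms = np.clip(hdrwt - rng.random((32, 32, 3)) * 0.3, 0.0, 1.0)   # MS result

diff = lightness(hdrwt) - lightness(ms)              # large where they disagree
mask = (diff - diff.min()) / (np.ptp(diff) + 1e-12)  # rescale to [0, 1]
```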
This mask also selects a lot of structures in the core of the nebula. Combining the images in this way would lead to a loss of contrast in these structures. To prevent this problem we are going to modify the mask. As we are interested only in small structures, the large-scale structures can be erased very easily with the ATrousWaveletTransform tool by disabling the residual (R) layer (Figure 9).
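Disabling the residual layer amounts to subtracting the low-pass remainder from the mask, keeping only the detail layers. In the sketch below a box blur stands in for the ATWT residual; this is an approximation of the idea, not the à trous algorithm itself.

```python
import numpy as np

def blur(img, passes=8):
    # repeated 3x3 box smoothing as a stand-in for the ATWT residual (R) layer
    out = img.astype(float)
    for _ in range(passes):
        out = sum(np.roll(np.roll(out, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

rng = np.random.default_rng(3)
# a mask with small-scale "stars" on top of a smooth large-scale ramp
mask = np.clip(np.linspace(0.0, 0.6, 64)[None, :]
               + (rng.random((64, 64)) > 0.98) * 0.4, 0.0, 1.0)

residual = blur(mask)                            # the disabled residual layer
small_only = np.clip(mask - residual, 0.0, 1.0)  # only small structures survive
```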
The final mask adjustments are done with the MorphologicalTransformation and ATrousWaveletTransform tools. We first apply a dilation filter in order to extend the mask to cover the external areas of the stars. Finally, we smooth the mask a little bit by removing the first wavelet layer. These processes are described in figures 10 and 11.
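Both adjustments can be sketched with plain NumPy: a 3x3 grey-level dilation grows the mask outward, and a smoothing pass stands in for removing the first wavelet layer. The structuring element size and the number of dilations are illustrative choices.

```python
import numpy as np

def dilate(img):
    # 3x3 grey-level dilation: each pixel takes the neighborhood maximum,
    # which grows the bright mask structures outward by one pixel
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.max(shifts, axis=0)

def smooth(img):
    # one 3x3 averaging pass, standing in for removing the first wavelet layer
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

rng = np.random.default_rng(4)
star_mask = (rng.random((64, 64)) > 0.99).astype(float)

grown = dilate(dilate(star_mask))   # extend the mask over the star edges
final_mask = smooth(grown)          # soften the hard boundary
```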
Once we have finished the mask, we are going to use it to fix the stars. This process consists of transferring the small scales of the image processed with MaskedStretch to the HDRWT processed image. First we create working duplicates of both images. Since these duplicates will keep only the medium- and large-scale structures, we'll assign them the MS_LS and HDRWT_LS identifiers for clarity. Now we remove the first five wavelet layers of MS_LS and HDRWT_LS (Figure 12). This procedure results in the four images that you can see in Figure 13.
To transfer the small scales of the image processed with MaskedStretch (MS) to the HDRWT image, we use the following PixelMath expression (Figure 14):
MS - MS_LS + HDRWT_LS
Speaking in multiscale terms, the above expression replaces the large-scale components in the MS image with the corresponding components of the HDRWT image. This process must be masked with the mask we have built in the preceding paragraphs. This mask has very low pixel values (0.1 to 0.2 on the star edges), so we can apply the same PixelMath instance iteratively to control the intensity of the process. This allows us to adjust the desired amount of dynamic range compression on these small-scale structures. In Figure 15 we can see how the stars recover their natural Gaussian profiles through successive PixelMath iterations.
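The whole step can be sketched as follows. A box blur stands in for "remove the first five wavelet layers", and the masked, iterated application follows the usual masked-blend rule (new = old·(1 − mask) + expression·mask); image contents and mask values are synthetic.

```python
import numpy as np

def blur(img, passes=8):
    # low-pass stand-in for removing the first five wavelet layers
    out = img.astype(float)
    for _ in range(passes):
        out = sum(np.roll(np.roll(out, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

rng = np.random.default_rng(5)
HDRWT = rng.random((48, 48))
MS = rng.random((48, 48))
mask = rng.random((48, 48)) * 0.2        # weak star mask (values 0 to 0.2)

MS_LS = blur(MS)
HDRWT_LS = blur(HDRWT)
combined = MS - MS_LS + HDRWT_LS         # the PixelMath expression

out = HDRWT.copy()
for _ in range(3):                       # iterate to strengthen the effect
    out = (1.0 - mask) * out + mask * combined
```

After n iterations the result is combined + (1 − mask)^n · (HDRWT − combined), so each pass moves the masked pixels a little closer to the combined image, which is why iterating gives fine control over the strength.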
Local Enhancement
After applying the HDRWT algorithm the structures in the core of the nebula have been recovered, but this area still has a very low local contrast. When this happens, the LocalHistogramEqualization tool does a very good job. LocalHistogramEqualization is an implementation of the Contrast Limited Adaptive Histogram Equalization algorithm (CLAHE), written by Czech software developer and PixInsight user Zbynek Vrastil.
Before applying LocalHistogramEqualization (LHE), we need to protect the stars to preserve their shapes. We are going to mask the LHE process through a modified version of the star mask we created in the preceding section. To modify the mask we use the MorphologicalTransformation, ATrousWaveletTransform, HistogramsTransformation and CurvesTransformation tools. This is well exemplified in Figure 17.
Once we have adjusted the mask we can apply LHE to the image. We load the mask and invert it, so the stars remain protected. In the LHE process, we have used the default kernel radius and amount parameters. The contrast limit parameter has been set to 1.7.
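Since LHE implements CLAHE, the underlying idea can be sketched in NumPy: clip each tile's histogram at a contrast limit, redistribute the excess counts, equalize through the resulting CDF, and apply the whole thing through an inverted star mask. This is a heavily simplified version; real CLAHE (and the LHE tool) also blends neighboring tile mappings bilinearly to hide tile boundaries, which is omitted here. Tile count and bin count are illustrative.

```python
import numpy as np

def equalize_tile(tile, clip_limit=1.7, bins=64):
    # clip the histogram at clip_limit times the uniform count, redistribute
    # the excess evenly, then map pixels through the resulting CDF
    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
    limit = clip_limit * tile.size / bins
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess / bins
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.clip((tile * bins).astype(int), 0, bins - 1)
    return cdf[idx]

def local_equalize(img, tiles=4, clip_limit=1.7):
    # per-tile equalization, without the bilinear tile blending of real CLAHE
    out = np.empty_like(img, dtype=float)
    ys = np.linspace(0, img.shape[0], tiles + 1).astype(int)
    xs = np.linspace(0, img.shape[1], tiles + 1).astype(int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            out[y0:y1, x0:x1] = equalize_tile(img[y0:y1, x0:x1], clip_limit)
    return out

rng = np.random.default_rng(6)
img = np.clip(rng.random((64, 64)) * 0.3 + 0.5, 0.0, 1.0)  # flat, low contrast
star_mask = (rng.random((64, 64)) > 0.99).astype(float)

lhe = local_equalize(img)
result = img + (1.0 - star_mask) * (lhe - img)   # stars stay protected
```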
Another problem arises now: lack of color saturation. This is a downside of the techniques that enhance the image locally, as they decrease the color saturation at larger scales in favor of enhancing hues at smaller scales. This can be corrected easily with a simple curve applied to the S component with the CurvesTransformation tool (see figures 19 and 20).
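The effect of such a saturation curve can be approximated with a simple chroma scaling about each pixel's mean. This is only a stand-in for a curve on the HSV/HSI S channel in CurvesTransformation, chosen to keep the sketch vectorized and self-contained; the boost amount is an arbitrary example value.

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((32, 32, 3)) * 0.5 + 0.25       # mildly desaturated RGB

def boost_saturation(rgb, amount=1.3):
    # scale each pixel's chroma about its per-pixel mean: a crude stand-in
    # for an S-channel saturation curve
    mean = rgb.mean(axis=2, keepdims=True)
    return np.clip(mean + (rgb - mean) * amount, 0.0, 1.0)

saturated = boost_saturation(img)
```

Note that this scaling preserves each pixel's brightness (its channel mean) while increasing the spread between channels, which is roughly what a gentle S-channel curve does.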
Finally, at this stage it is always very useful to apply an S-shaped curve to the combined RGB/K channel with CurvesTransformation. This enhances contrast on the brightest areas. We apply the curve through a luminance mask, as described in figures 21, 22 and 23.
Deringing
Both HDRWaveletTransform and LocalHistogramEqualization can generate ringing artifacts around very high contrast structures at medium and large dimensional scales. For the image in this tutorial, this is clearly visible in the dark lanes around the central star, as shown in Figure 24.
Although HDRWT has its own deringing parameters, we are going to fix these ringing artifacts manually. One way to correct these artifacts is with PixelMath. To know how much ringing the image has, we can calculate the ratio of the pixel values between the original image and the one processed with HDRWT and LHE. In Figure 25 we can see the applied PixelMath instance.
As shown in Figure 25, we use the PixelMath expression:
raw/(proc + 0.05)
to calculate the ratio between the linear (raw) and processed (proc) images for each pixel. We add a small pedestal of 0.05 in order to prevent divisions by values very close to zero. The Rescale result option is enabled and we send the output to a new image with the "dr" identifier. The result is a ringing map, that is, an image showing where ringing artifacts occur, which you can see in figures 26 and 27.
Before applying the ringing map to the image we can improve its efficiency by increasing the isolation between the problematic structures and the rest of the structures in the image. This can be easily done with CurvesTransformation, as shown in Figure 28.
Now we can correct these structures with PixelMath. We add the modified ringing map, multiplied by a scaling factor (0.15 in this case), to the already processed image. See figures 29 and 30.
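The whole deringing sequence can be sketched with NumPy on synthetic data. The ratio and pedestal follow the PixelMath expression above; the cubic power is only a stand-in for the isolating curve applied with CurvesTransformation, and the 0.15 scaling factor is the example value from the text.

```python
import numpy as np

rng = np.random.default_rng(8)
raw = rng.random((32, 32)) * 0.2                   # stand-in linear image
# 'proc' stands in for the HDRWT+LHE image, with a few dark ringing pixels
proc = np.clip(raw * 4.0 - (rng.random((32, 32)) > 0.97) * 0.3, 0.0, 1.0)

dr = raw / (proc + 0.05)                           # pedestal avoids /~0
dr = (dr - dr.min()) / (np.ptp(dr) + 1e-12)        # the "Rescale result" option
dr = dr ** 3            # curve to isolate the strongest (ringing) structures
corrected = np.clip(proc + 0.15 * dr, 0.0, 1.0)    # add the scaled map back
```

The map is largest exactly where the processed image is much darker than the raw data warrants, so adding a small multiple of it lifts the over-darkened lanes while barely touching the rest.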
The result, as shown in Figure 30, still has very dark structures, but this will be further corrected in the next section of this article.
Gradient Domain Operations
To conclude the dynamic range related processing techniques, the last step is carried out by operating in the gradient domain. This step will decrease the contrast of some structures around the center of the nebula, which are still problematic because they have too much contrast in comparison with the rest of the visual information in the image. It will also help in fixing the remaining ringing artifacts completely. In this section we are going to use the GradientsHDRCompression tool (GHC), written by Georg Viehoever. Gradient operations, when used for dynamic range compression, tend to be tricky and should be applied with caution. One drawback is that they tend to generate opaque stars. This problem can be seen in Figure 31.
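The principle behind gradient-domain compression can be shown in one dimension: take derivatives, clamp the strongest gradients, and reintegrate. This is only a toy illustration; the actual GHC tool works in two dimensions, where reintegrating a modified gradient field requires solving a Poisson equation, and the clamp threshold here is arbitrary.

```python
import numpy as np

# a 1D "image" with a high-contrast edge between two smooth ramps
signal = np.concatenate([np.linspace(0.0, 0.1, 50),
                         np.linspace(0.9, 1.0, 50)])

g = np.diff(signal)
g_compressed = np.clip(g, -0.05, 0.05)   # clamp the strongest gradients

# reintegrate the modified gradient field and rescale to [0, 1]
out = np.concatenate([[signal[0]], signal[0] + np.cumsum(g_compressed)])
out = (out - out.min()) / (np.ptp(out) + 1e-12)
```

The weak gradients (the fine detail on the ramps) survive almost unchanged after rescaling, while the single dominant edge is strongly compressed, which is exactly the behavior we want around the nebula core.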
Consequently, this process must be applied with the protection of a star mask. We are going to use the same modified star mask we applied before, again inverted in order to protect the stars.
Now if we look at the entire image, we can see that it has been darkened globally. We are going to raise the midtones of the image a bit with HistogramTransformation, but we have to apply this again through the inverted star mask in order to modify only the processed areas of the image.
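This masked midtones raise can be sketched with the midtones transfer function that underlies HistogramTransformation, applied through an inverted star mask. The midtones balance value and the mask contents here are illustrative.

```python
import numpy as np

def mtf(x, m):
    # midtones transfer function; m < 0.5 raises the midtones
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

rng = np.random.default_rng(9)
img = rng.random((32, 32)) * 0.6                    # globally darkened image
star_mask = (rng.random((32, 32)) > 0.98).astype(float)

# inverted star mask: stars (mask = 1) are fully protected
brightened = img + (1.0 - star_mask) * (mtf(img, 0.4) - img)
```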
Final Processing Steps
After applying dynamic range processing techniques, especially those that affect small-scale structures, we usually need to recover the sharpness of the image. This can be done with the ATrousWaveletTransform tool, by applying positive biases to the 1- and 2-pixel scales. Here we also need a lightness mask to protect the image from excessive noise intensification in the lower signal-to-noise ratio areas.
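A positive bias on the smallest scales amounts to adding back an amplified copy of the small-scale detail layer, masked by lightness so that the dark, noisy areas receive less sharpening. In the sketch below a single 3x3 smoothing pass stands in for the small wavelet scales, and the image itself serves as the lightness mask; the bias value is an example.

```python
import numpy as np

def blur(img):
    # one 3x3 smoothing pass; img - blur(img) approximates the
    # smallest wavelet scales
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

rng = np.random.default_rng(10)
img = rng.random((48, 48))

small = img - blur(img)                       # small-scale detail layer
sharp = np.clip(img + 0.5 * small, 0.0, 1.0)  # positive bias on small scales
result = img + img * (sharp - img)            # img doubles as a lightness mask
```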
Final Thoughts
PixInsight has a flexible toolset for controlling the dynamic range of your images. Usually the HDRWaveletTransform tool gives very good results for this task. However, to achieve a superior result we need to go beyond this tool. In this article we have used several tools, putting into practice very different approaches to the same problem. The key to these procedures is to apply each tool when and where it is needed.
It is equally important to understand that the application of these techniques will ease most subsequent processing steps. From a sharpening process to a simple curve, all of these processes will be more powerful and efficient if the contrast of our image has previously been reduced across a wide range of dimensional scales.