PixInsight LE Tutorial
Multiscale Processing of Deep-Sky
Images with PixInsight: M33 POSS II Image
By Vicent Peris (PTeam)
Initial Chromatic Balance Adjustment
Defining a Custom RGB Working Space
Extracting the Luminance
Extracting Large-Scale Structures
Enhancing Large-Scale Structures
Extracting Small-Scale Structures
Matching Illumination over the Entire Image
Enhancing Small-Scale Structures
Merging the Large-Scale and Small-Scale Images
Analysis of Results
Final Processed Image
Astronomical images are complex and delicate objects. Not infrequently, changes made to improve an image in one aspect tend to degrade it in others. In many cases, techniques based on divide-and-conquer strategies are the most efficient, because they allow us to treat particular image structures and facets individually, without affecting the rest. In this tutorial, we'll describe such a procedure applied to process the luminance of a region extracted from the Palomar Sky Survey, centered on Messier 33.
Here we propose a solution based on the à trous ("with holes") discrete wavelet transform algorithm (J.-L. Starck, F. Murtagh, Astronomical Image and Data Analysis, Springer-Verlag, 2002). Basically, our work will consist of dividing the image into two parts: one containing the large structures, and a second one with the relatively small structures. With the help of this procedure, we'll be able to extract the maximum possible amount of information from each facet of the image and, at the same time, our expressive capabilities will be greatly improved. Below is a sample of the final result.
Figure 1 shows the result of combining the original B, V and R POSS plates. As can be seen, the chromatic balance of the resulting image is extremely wrong, mainly because the V plates were much less exposed than the B and R plates in the POSS. Considering that photographic emulsions have wildly nonlinear responses, a complex transfer curves adjustment is required to achieve a correct chromatic balance. Figure 2 shows the applied curves and their result, which will be our starting image from which to extract and process the luminance.
Before extracting the luminance, we must define an appropriate RGB working space (RGBWS). This preliminary step is very important. In PixInsight, we can define a custom RGBWS with optimized luminance coefficients that will give us as much information as possible in the luminance. Color-management oriented RGB spaces, such as sRGB or Adobe RGB, are not efficient choices for luminance/chrominance separation of deep-sky astronomical images. This is because these spaces give too much importance to the green channel in the calculation of the luminance. For example, in the sRGB space, the green luminance coefficient is about ten times greater than the blue coefficient, and more than three times greater than the red coefficient. This does not correspond to the true importance of these colors in terms of the information that they carry in deep-sky images, such as our present M33 image.
With the RGB Working Space Parameters process in PixInsight, we'll define a custom RGBWS where the three RGB colors have equal weights for the calculation of luminance pixel values (Figure 3).
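To make the coefficient comparison concrete, here is a minimal sketch in Python with NumPy. It is an illustration of relative channel weights only, not PixInsight's actual CIE L* computation; the names are ours:

```python
import numpy as np

# Rec. 709 / sRGB luminance coefficients vs. the equal weights we define here.
SRGB_COEFFS = np.array([0.2126, 0.7152, 0.0722])   # R, G, B
EQUAL_COEFFS = np.array([1.0, 1.0, 1.0]) / 3.0

def luminance(rgb, coeffs):
    """Weighted sum of the R, G and B channels (stored along the last axis)."""
    return np.asarray(rgb, dtype=float) @ coeffs

# A pure blue pixel: nearly invisible to sRGB luminance,
# fully represented with equal weights.
blue = [0.0, 0.0, 1.0]
print(luminance(blue, SRGB_COEFFS))   # -> 0.0722
print(luminance(blue, EQUAL_COEFFS))  # -> ~0.333
```

With equal weights, faint blue or red structures contribute to the extracted luminance just as strongly as green ones, which is the point of the custom RGBWS.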
Once the new RGBWS has been assigned to the image, we can extract the luminance with the Extract Channels process (Process > Color Spaces > Extract Channels). First we must select the desired color space, CIE L*a*b* in this case. We uncheck the a* and b* chrominance channels since we don't need to extract them; only the Luminance check box must be checked. We can also assign a custom image identifier to the extracted luminance by unchecking the Auto Identifiers check box. In our case, we used the "Luminance" identifier, as seen in Figure 4, along with the resulting luminance.
Now we are ready to start processing this image. Our procedure will consist of three phases: processing the large-scale structures, processing the small-scale structures, and merging both results into the final image.
Our work on large structures will be relatively simple in this case. We start by isolating the large-scale image components. This can be achieved very easily by removing all wavelet planes (or layers) below the scale of 64 pixels. The À Trous Wavelets process of PixInsight allows us to do this by disabling the corresponding layers, as shown in Figure 5.
In Figure 5, note that we have selected the dyadic scaling sequence. With this sequence, successive wavelet layers have characteristic scales growing by powers of two: 1, 2, 4, 8, 16, and so on. Also note that we have increased the number of layers to seven, in order to reach the scale of 64 pixels. To disable a wavelet layer, just double-click the corresponding cell in the "Level" column to turn the green check mark into a red cross mark. When the inverse wavelet transform is performed (which is done automatically by the À Trous Wavelets process), disabled layers are simply discarded and not included in the processed result.
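The layer-removal step can be sketched in Python with NumPy. This is a simplified stand-in for the À Trous Wavelets tool, not its actual implementation; the B3 spline kernel and reflective edge padding are our assumptions:

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3 spline scaling kernel

def atrous_decompose(image, n_layers):
    """A trous ("with holes") decomposition: at step j the kernel is dilated
    by 2**j instead of downsampling the image. Returns the detail layers
    (dyadic scales 1, 2, 4, ...) and the large-scale smooth residual."""
    smooth = np.asarray(image, dtype=float)
    layers = []
    for j in range(n_layers):
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = B3                       # insert the "holes"
        blurred = smooth
        for axis in (0, 1):                  # separable convolution
            blurred = np.apply_along_axis(
                lambda r: np.convolve(np.pad(r, 2 * step, mode="reflect"),
                                      k, mode="same")[2 * step:-2 * step],
                axis, blurred)
        layers.append(smooth - blurred)      # detail at scale 2**j
        smooth = blurred
    return layers, smooth

def keep_large_scales(image, n_layers=7, min_scale=64):
    """Inverse transform with all layers below min_scale disabled:
    sum only the kept detail layers plus the smooth residual."""
    layers, residual = atrous_decompose(image, n_layers)
    kept = [l for j, l in enumerate(layers) if 2 ** j >= min_scale]
    return sum(kept) + residual
```

Because each layer is simply the difference between two successive smoothed versions, summing all layers plus the residual reconstructs the original exactly; dropping the small-scale layers leaves only the large structures.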
Now it's time to recall that we must always save all of our work as process icons in PixInsight. In this way, we'll be able to reproduce our processing steps later.
In PixInsight LE, the À Trous Wavelets tool requires a lot of memory when we define procedures at very large scales. It is quite easy to exhaust all of the available RAM when working at scales of 64 pixels and greater with large images. If this happens, an easy solution is to subsample the image by powers of two. This is possible because the wavelet transform is a linear operation with respect to dimensional scales. For example, if you resample your image to half its original size, the scale of 32 pixels in a wavelet transform will be equivalent to the scale of 64 pixels in the original. So you can use the Integer Resample process in PixInsight LE to subsample 1:2, apply a wavelet transform disabling all scales up to 32 pixels, and then oversample the image 2:1. The result will be equivalent to working at the original scale of 64 pixels. Of course, this is feasible because very large scales are extremely smooth functions, which can be strongly resampled without significant degradation. The wavelet transform processes implemented in the new PixInsight Standard application don't have any of these memory consumption problems.
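The memory workaround can be sketched as follows. Here `downsample2` averages 2x2 blocks in the spirit of a 1:2 Integer Resample, and `upsample2` replicates pixels for the 2:1 step; both are our simplified assumptions, not PixInsight's resampling code:

```python
import numpy as np

def downsample2(img):
    """1:2 integer resample: average each 2x2 block (even dimensions assumed)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """2:1 integer resample by simple pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# Workflow sketch: downsample 1:2, remove wavelet scales up to 32 px on the
# half-size image (equivalent to 64 px on the original, by linearity of the
# transform), then upsample 2:1 back to the original dimensions.
```

Since the surviving structures are very smooth at these scales, the replication artifacts of the crude 2:1 step are negligible in practice.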
Once we have extracted the large-scale structures, our next step is to enhance them. The extracted large-scale image is strongly dominated by the spiral structure of the M33 galaxy. In what follows we'll see how this spiral structure can be considerably enhanced in the final result.
We begin by increasing the relative weight of the scales of 128 and 256 pixels. The relative weight (or importance) of a wavelet scale can be modified with the Bias parameter of À Trous Wavelets. It is convenient to extend the effective dynamic range of the image during this wavelet transform (Dynamic Range Extension parameters), to prevent histogram clipping. In Figure 6 you can see the applied À Trous Wavelets process and the resulting image.
It may seem that the previous procedure has not been very effective. However, this is an illusion that disappears as soon as we clip the unused portions of the dynamic range with Histograms (Figure 7). The Auto Clipping feature can be useful here. The large unused portions tell us that the Dynamic Range Extension parameter values we used were too large. Better (lower) values could have been used, but this doesn't affect our result.
As you can see, we have achieved a substantial improvement of the spiral arms with just a small +0.2 bias in À Trous Wavelets. However, we can do much better. A problem that we have now is the excessive difference in brightness between the internal and external parts of the spiral structure, which makes enhancing local contrast a very difficult task.
To fix this problem, we can resort to more traditional methods, so to speak. To a copy of the unprocessed large-scale image (that is, the image shown in Figure 5), we apply a wavelet transform that suppresses all scales up to 256 pixels. This basically yields the global illumination profile of the galaxy. After clipping the histogram to rescale pixel values to the entire dynamic range, we obtain the image shown in Figure 8. This image will be used as a mask for the image resulting from Figure 7. With this mask active, we can decrease the midtones for the brightest areas only (Figure 9), which reduces the global contrast. Finally, we can increase the brightness of the whole galaxy by means of a curves adjustment, this time with the mask disabled, as shown in Figure 10.
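A masked midtones adjustment of this kind can be sketched like this. The `mtf` function below is the standard midtones transfer function used in histogram stretching; the linear mask blend is our assumption about how masked processing combines the processed and original pixels:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: keeps 0 and 1 fixed and maps the
    midtones balance m to 0.5 (m < 0.5 brightens, m > 0.5 darkens)."""
    x = np.asarray(x, dtype=float)
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def apply_through_mask(image, mask, transform):
    """Masked processing: the transform acts fully where the mask is
    white (1) and leaves pixels untouched where it is black (0)."""
    return mask * transform(image) + (1.0 - mask) * image
```

With the illumination-profile mask active, raising m above 0.5 darkens the midtones only over the bright core, which is exactly the contrast equalization performed in Figure 9.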
As can be seen in Figure 10, the inner regions of M33 are still lacking contrast. We'll increase contrast for the structures in these inner regions only, by enhancing large-scale wavelet layers through the same mask that we developed in the previous step (Figure 8). This time we cannot make use of the Dynamic Range Extension parameters of À Trous Wavelets to prevent histogram clipping. This is because the process will be applied through a mask, so the dynamic range extension mechanism would act only in regions where the mask is white, which is not what we want. Fortunately, we can expand the dynamic range with Histograms in PixInsight, as shown in Figure 11. Note the Low and High parameter values of -0.2 and +1.5, respectively. This histogram adjustment is applied unmasked (that is, with the mask disabled).
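This unmasked expansion amounts to a linear remapping of the interval [-0.2, +1.5] onto [0, 1], which pushes the existing data into the interior of the range and leaves headroom for the subsequent wavelet boost. A minimal sketch, with the parameter names assumed from the Histograms dialog:

```python
import numpy as np

def expand_range(img, low=-0.2, high=1.5):
    """Histograms-style dynamic range expansion: map [low, high] -> [0, 1].
    With low < 0 and high > 1, former black (0) and white (1) move inward,
    so later boosts cannot clip the histogram at either end."""
    return (np.asarray(img, dtype=float) - low) / (high - low)
```

For example, former black (0) lands at about 0.118 and former white (1) at about 0.706, leaving room on both sides.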
Now we can safely increase the bias for the wavelet layer at the scale of 128 pixels, with the mask enabled (the mask is shown in Figure 8), which will improve the contrast of the structures in the inner regions of M33. The result is shown in Figure 12. Once again, we'll have to rescale the pixel data to occupy the entire dynamic range, as Figure 13 shows. This is done with the mask disabled.
When we started our large-scale processing we isolated the largest structures in the image. This time we want to do just the opposite: isolate the small-scale components. For this purpose, we don't have to apply a new wavelet transform: since we already have the large-scale structures isolated as a single image, we can simply subtract it from the original luminance, which will yield a new image containing just the small-scale structures. This can be carried out with the Pixel Math process in PixInsight LE. Figure 14 shows the corresponding setup. This Pixel Math process must be applied to the original image.
Since the Create New Image option of Pixel Math has been activated, the result will be obtained as a new image. Note the SUB (Subtract) operator and the LargeScale image operand. This LargeScale image is the result that we obtained in Figure 5. It is essential to have the Rescale option active, since it guarantees that no data are lost, by rescaling the result after subtraction to the normalized range from 0 to 1. Figure 15 shows more clearly how this simple operation works.
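The SUB-with-Rescale operation can be sketched as follows (a stand-in for Pixel Math, not its actual implementation): the signed difference is mapped linearly back onto [0, 1], so negative values survive instead of being clipped:

```python
import numpy as np

def subtract_rescaled(original, large_scale):
    """original - large_scale, then linear rescale of the result to [0, 1],
    mimicking Pixel Math's SUB operator with the Rescale option active."""
    diff = np.asarray(original, dtype=float) - np.asarray(large_scale, dtype=float)
    lo, hi = diff.min(), diff.max()
    if hi == lo:                      # flat difference: nothing to rescale
        return np.zeros_like(diff)
    return (diff - lo) / (hi - lo)
```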
The surviving objects after this operation are stars, emission and dark nebulae, and in general all of the small structures present in the original image. This is just what we want: all small-scale structures isolated into a single image.
The next step is perhaps the most delicate part of the whole process. If you look carefully at the small-scale image (Figure 15, bottom), you'll notice that the central region, where the original image was brighter, has been left with a reduced illumination profile. This is a logical consequence of subtracting the large-scale components. Besides that, the structures in the core of M33 have reduced contrast with respect to the structures in the spiral arms, due to the applied gamma adjustments and to the nonlinear response of the photographic emulsion. Our next goal is to match contrast and illumination profile for the structures over the entire image.
To achieve that goal, a mask is required. This mask will be a copy of our large-scale image (Figure 5) after a histogram adjustment. In Figure 16 you can see the applied adjustment and the resulting mask. This mask will be activated on the small-scale image to apply a histogram transform, which consists of dragging the white clipping point to the left until we see that the illumination becomes uniform over the whole image. To perform this critical adjustment, the Real-Time Preview system of PixInsight LE is of great help, as seen in Figure 17.
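Moving the white clipping point through the large-scale mask can be sketched like this (Python/NumPy; the linear mask blend is our assumption about how PixInsight applies a masked process):

```python
import numpy as np

def white_point(img, w):
    """Histogram white-point move: map [0, w] onto [0, 1], clipping above w."""
    return np.clip(np.asarray(img, dtype=float) / w, 0.0, 1.0)

def masked_white_point(small_scale, mask, w):
    """Stretch only where the mask is white, i.e. where the original
    large-scale illumination was strongest, to equalize the profile."""
    return mask * white_point(small_scale, w) + (1.0 - mask) * small_scale
```

Lowering w brightens the masked (central) regions while leaving the outer arms untouched, which is what makes the illumination uniform in the real-time preview.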
With this histogram adjustment applied through a mask that represents the large-scale structures, we are changing just those regions that are more illuminated in the original image. In this way, the procedure tends to recover contrast in these bright regions. Figure 18 is a mouseover where the true effect of this step can be verified.
This step is extremely important. Without implementing it appropriately, we'll not achieve uniform contrast and illumination on the final image.
Now that we have the small-scale image, we can control the contrast at will throughout the entire image. This greatly simplifies our work and gives us tremendous processing power, something unthinkable when processing the image as a whole, without isolating structures at different scales. As an example, Figure 19 shows what a simple curves transform can do with our small-scale image. Note how we can improve contrast for the dark nebulae around the core and for the star clouds and emission nebulae in the outer spiral arms, all at the same time in a single operation.
For the present example we have not applied any additional processing to this M33 image. Of course we could process it much further. For instance, we could apply a mild deconvolution, improve more structures with wavelets, or perform a noise reduction.
Having processed both facets of the image separately, our last step is to recombine them to obtain the final result. We do this with Pixel Math in PixInsight LE.
The obvious route now seems to be a straight addition of the large-scale and small-scale images, rescaling the result to occupy the whole available dynamic range. However, this is not a good idea. Take into account that by processing the isolated structures separately, we have unbalanced them with respect to the original image in terms of relative brightness. In other words, at this point nothing guarantees that simply adding both images will yield a reasonable result.
What could help us recover the original overall aspect of the image in a consistent way? Fortunately, the answer is something that we have readily at hand: the original luminance. We are going to combine both processed images (the isolated large and small scales) over the original luminance. With the original data as a basis, we have an immense range of possibilities to yield a final processed image. As we'll see now, Pixel Math is a very creative tool in PixInsight, which will help us obtain a result that fully expresses what we want.
In Figure 20 we see the Pixel Math interface ready to produce our final image. This Pixel Math process must be applied to the original luminance. Note the active Rescale option, which is indispensable. Note also that we can assign a different relative weight to each operand image. In this case, we have doubled the weight of the small-scale image (note the K:+2.0 parameter value). Within reasonable limits, relative weights can be varied to meet the desired result, as a matter of personal preference.
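The combination is a weighted sum followed by a rescale; a minimal sketch (Python/NumPy), where the k weights stand in for the Pixel Math operand weights and we assume the base luminance enters with unit weight:

```python
import numpy as np

def combine(luminance, large, small, k_large=1.0, k_small=2.0):
    """Weighted addition of the processed scales onto the original
    luminance, rescaled to [0, 1] (the indispensable Rescale option)."""
    acc = (np.asarray(luminance, dtype=float)
           + k_large * np.asarray(large, dtype=float)
           + k_small * np.asarray(small, dtype=float))
    lo, hi = acc.min(), acc.max()
    return (acc - lo) / (hi - lo) if hi > lo else np.zeros_like(acc)
```

Varying k_large and k_small shifts the balance between the smooth spiral structure and the fine detail, which is where the personal-preference latitude mentioned above comes in.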
Figure 21 depicts the combining procedure and shows the obtained result.
A further refinement is to repeat a similar procedure with the help of a mask. Since the structures over the core are the most difficult to enhance, we'll perform a second combination on the resulting image of Figure 21. This new combination is identical to the previous one, but it is applied through the mask that we obtained in Figure 16. This will improve contrast in the brightest regions of the image. Figure 22 is a mouseover that shows the result.
From the galaxy core to the outer regions, the contrast gain is very significant for small-scale structures, as shown in Figures 23 and 24.
Compared to the original luminance, the large-scale enhancements are conspicuous in Figure 25.
Finally, here is a link to the final processed, color image.