MultiscaleMedianTransform + HDRMultiscaleTransform - M81/M82 by Harry Page

Juan Conejero

PixInsight Staff
Some time ago I published a short processing tutorial based on a very nice and deep image of the M81/M82 region by Harry Page, using mainly the ATrousWaveletTransform tool (for noise reduction) and HDRWaveletTransform (for dynamic range compression). To put the following example in perspective, I strongly recommend that you take a look at that tutorial, which I posted on this forum in June.

We'll now try to achieve the same goals with the new MultiscaleMedianTransform tool. We'll also use the new HDRMultiscaleTransform tool, the successor to HDRWaveletTransform.

Let's start by loading the original, linear image:


In this example we'll use a mask to apply noise reduction where it is most needed. The mask allows us to modulate noise reduction as a function of signal-to-noise ratio: the brightest, high-SNR areas will be protected, while the dark, low-SNR regions, such as the sky background and the transitions between the sky and the objects, will receive proportionally more noise reduction. In our example, the noise reduction mask is simply a stretched duplicate of the original image. In the following screenshot you can verify mask protection on a preview of interest.
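To make the idea concrete, here is a minimal sketch of what mask-modulated noise reduction amounts to conceptually. This is an illustration, not PixInsight's internal implementation: the simple median denoiser and the blending convention (mask value 1 = fully protected) are assumptions chosen to match the description above; in PixInsight itself the mask polarity is configurable and can be inverted.

```python
# Sketch: SNR-modulated noise reduction via a luminance mask.
# Illustrative only -- not PixInsight's internal implementation.
import numpy as np
from scipy.ndimage import median_filter

def masked_noise_reduction(image, mask, size=3):
    """Blend a denoised copy with the original, weighted by `mask`.

    mask == 1 -> pixel fully protected (original kept)
    mask == 0 -> full noise reduction applied
    """
    denoised = median_filter(image, size=size)
    return mask * image + (1.0 - mask) * denoised

# The mask is a stretched duplicate of the image itself, so bright
# (high-SNR) pixels are protected and the dark background is smoothed.
rng = np.random.default_rng(0)
image = np.clip(rng.normal(0.1, 0.02, (256, 256)), 0.0, 1.0)
mask = image / image.max()
result = masked_noise_reduction(image, mask)
```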


The mask has been stretched by transferring ScreenTransferFunction's AutoStretch settings to the HistogramTransformation tool. This is the same preview with mask visibility disabled:


Recall that we are working with the linear image; it is only the mask that we have stretched. The image is visible on the screen thanks to the ScreenTransferFunction tool (STF).

Now, without further preamble, let's see MultiscaleMedianTransform in action:


To evaluate the result, you should view and compare the above two screenshots at full resolution (click on the thumbnail images in this post). I would say the result is somewhat better than what we achieved with wavelets, and it has been easier and more controllable too. Both the ATWT and MMT tools are particularly well suited for noise reduction in images like this one.
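For readers curious about what the tool is doing under the hood, here is a small sketch of a multiscale median decomposition in the spirit of the MMT described by Starck et al. The dyadic window sizes and the square structuring element are assumptions for illustration; PixInsight's actual structuring elements and scale sequence may differ.

```python
# Sketch of a multiscale median decomposition (in the spirit of MMT).
import numpy as np
from scipy.ndimage import median_filter

def multiscale_median(image, n_layers=4):
    """Split `image` into detail layers plus a large-scale residual.

    Summing all layers and the residual reconstructs the image exactly,
    since each layer is the difference between successive median-smoothed
    versions of the image.
    """
    layers = []
    c = image.astype(np.float64)
    for j in range(n_layers):
        size = 2 * (2 ** j) + 1        # growing window: 3, 5, 9, 17, ...
        c_next = median_filter(c, size=size)
        layers.append(c - c_next)      # detail at dyadic scale 2**j
        c = c_next
    return layers, c                   # residual = large-scale content

def reconstruct(layers, residual, gains):
    # Noise reduction amounts to attenuating (or thresholding) the
    # small-scale layers before rebuilding the image.
    return sum(g * w for g, w in zip(gains, layers)) + residual
```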

We repeat the same test with a larger preview covering both main objects. This is the original image:


And this is after noise reduction with MMT:


After noise reduction, I want to introduce you to the new HDRMultiscaleTransform tool. This is the successor to the HDRWaveletTransform tool, which will no longer be available as a standard PixInsight tool (the HDRWaveletTransform process will remain available under the Compatibility category, however, to support existing projects and scripts).

HDRMultiscaleTransform is functionally identical to HDRWaveletTransform, except that it now lets you select between two different transformations to perform the dynamic range compression task: the à trous wavelet transform (as before) and the multiscale median transform. Other than this addition and a few minor bug fixes, the underlying HDR compression algorithm is the same one conceived and designed by PTeam member Vicent Peris several years ago. Don't worry, the good old things only change to be improved in PixInsight ;)
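The actual algorithm by Vicent Peris is not described in this thread, so the following is only a toy illustration of the general principle behind multiscale dynamic range compression: smooth the image up to some scale, compress that large-scale component, and add the untouched detail back in. Every name and parameter here is an assumption for illustration.

```python
# Toy multiscale dynamic range compression -- NOT the HDRMT algorithm,
# only the general principle: compress large scales, keep fine detail.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def hdr_compress(image, n_layers=6, transform="median", strength=0.5):
    c = image.astype(np.float64)
    for j in range(n_layers):
        if transform == "median":
            c = median_filter(c, size=2 * (2 ** j) + 1)  # median-based
        else:
            c = gaussian_filter(c, sigma=2.0 ** j)       # wavelet-like
    detail = image - c                    # structures below the top scale
    compressed = c ** (1.0 - strength)    # gamma-like large-scale flattening
    out = compressed + detail
    return np.clip(out / out.max(), 0.0, 1.0)
```

Note how the layers parameter sets the scale at which compression kicks in: with more layers, larger structures count as 'detail' and survive untouched, which is why working at larger scales, as shown later in this post, tames the aggressiveness of the median-based version.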

As you know, the current dynamic range compression tools in PixInsight only work with stretched images. This is the HistogramTransformation instance that we have applied:


and this is the resulting MMT-denoised and stretched image:


Here is the result after applying HDRMultiscaleTransform with a wavelet-based transform and five wavelet layers:


This is after HDRMultiscaleTransform with a median-based transform and six layers:


As you can see, the results achieved and the meaning of the layers parameter are very different. The median-based transform tends to be more aggressive at smaller scales. Is a median-based HDRMT better than a wavelet-based HDRMT? Not at all in our opinion; they are just different in their behavior and characteristic features, and achieve different goals. They provide different interpretations of the data, equally valid, as a result of different solution paths to the dynamic range compression problem.

As you might expect, the result of the median-based HDRMT has no ringing artifacts, a big improvement over the wavelet-based result (compare the two screenshots above). This is a consequence of the absence of ringing in the multiscale median transform. However, keep in mind that the MMT has some drawbacks associated with the implemented structuring elements, mainly the generation of artifacts around corners, which may also appear in median-based HDRMT processed images. We have already illustrated these problems in a previous tutorial.

The aggressiveness of median-based HDRMT in terms of overall contrast reduction can be controlled by simply working at larger dimensional scales. Below is the result with 8 median-based layers. Of course, there is no ringing.

Before HDRMT:


After HDRMT, median-based, 8 layers:


For completeness, this is the result after three iterations of the above HDRMT instance (with inverted iterations enabled):


Thanks again to Harry Page for allowing us to work with this wonderful image.
 
Juan,

Could you go into a bit more depth on how you created the mask, please? What I did, and I think it is wrong, was to apply a histogram stretch to the image, create the mask, and then throw the histogram-stretched image away, keeping just the mask. I guess I'm not using the STF tool as effectively as I could, but I have only been using it to "see" a screen stretch of my linear images, with just the default settings. They have been "good enough" for using the DBE and DynamicCrop tools, so I haven't really bothered using it for anything else. For luminance images I usually star align, combine the images, use STF to "see" the image, use a crop tool to remove the uneven edges resulting from dithering, and perform a DBE on the image, saving after each step. At this point I will usually do an HDRW on the image, if desired, after having made a mask as stated above: apply the mask, invert it, hide it, and then apply HDRW as needed with deringing enabled. At that point I apply HST to taste and save.

After this new post of yours I did indeed try using MultiscaleMedianTransform to reduce the noise, first trying the settings in the posted example and then, realizing that there was a preview window so I could see the effect, deciding to try my own settings and number of layers. This certainly made a huge difference in my luminance image, which has 5 hours of total data (20 minutes x 15) and which I thought was pretty smooth already. The data is 2.5" or less, and from my area that's not bad. I threw the remaining data away, about 6 ST10 images. I can post a before and after image, but I would rather get your take on the proper process, or maybe better worded, the "preferred process" for creating these masks.
 
Another question comes to mind as well. Usually I process my luminance data separately from my RGB data. After processing the luminance data I'll process the RGB, all using the aforementioned processes, and then combine my luminance with the RGB. These days I've been shooting all data unbinned, so all data is star aligned from the beginning.

Is there a better way to do this? Should I apply some processes to the combined LRGB data rather than to each component (luminance and RGB) before combining? As it is now, I combine the RGB, process it to taste, do the same with the luminance data, and then use LRGBCombination to add the data together after aligning the two images. Are there any advantages or disadvantages to the separate processes before combining? I'm thinking along the lines of noise reduction, sharpening, wavelets, and so on. I'd be very interested to hear how people process their data when using a mono camera with filters, but I'd like to hear from the pros about their thoughts on this.
 
Hi Steve,

Could you go into a bit more depth on how you created the mask please?

It has been just as simple as it can be: I duplicated the linear image and applied HistogramTransformation with settings imported from an automatic STF. I then selected the stretched duplicate image as a mask for the original. To transfer STF to HT:

1. Open the ScreenTransferFunction tool.
2. Select the original image and apply AutoStretch (click the 'A' button, or press Ctrl+A; Cmd+A on the Mac).
3. Click the blue triangle in STF and drag it to HT's bottom control bar.
4. Apply HT to the mask image.

This is a useful trick that allows you to get good stretching parameters with a couple of clicks. If necessary, you can tweak HT to optimize the mask. For example, you can clip the shadows somewhat to increase noise reduction on the background. However, in this example I have just followed the four steps above to keep things as simple as possible, and I think the results are very good.
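For those who like to see the arithmetic, the widely cited AutoStretch recipe can be sketched as follows. The specific defaults (shadows clipped at median - 2.8 x MAD, target background 0.25) are the commonly quoted values and should be treated as assumptions rather than PixInsight internals; the midtones transfer function (MTF) itself is the one documented for HistogramTransformation.

```python
# Sketch of an STF-style AutoStretch, as used here to build the mask.
# Defaults (-2.8 MAD shadows clip, 0.25 target background) are the
# commonly quoted values -- assumptions, not PixInsight internals.
import numpy as np

def mtf(m, x):
    """Midtones transfer function with midtones balance m."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_stretch(image, shadows_clip=-2.8, target_bg=0.25):
    med = np.median(image)
    mad = 1.4826 * np.median(np.abs(image - med))     # normalized MAD
    c0 = max(0.0, med + shadows_clip * mad)           # shadows clipping point
    m = mtf(target_bg, med - c0)                      # balance: background -> 0.25
    x = np.clip((image - c0) / (1.0 - c0), 0.0, 1.0)  # apply the clipping
    return mtf(m, x)

# mask = auto_stretch(linear_image)  # stretched duplicate used as mask
```

The trick in the midtones computation is that the MTF is its own inverse in the sense that if mtf(m, x) = y then mtf(y, x) = m, so solving for the balance that sends the clipped median to the target background is a single function call.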
 
Should I do some processes to the LRGB combined data versus each to its own, luminance and RGB before combining?

In general, you should process the combined LRGB image as a whole. Most PixInsight tools allow you to process the lightness component of an RGB image in a transparent way (transparent here means that you don't have to extract the lightness, process it, and reinsert it): look for 'Target' combo boxes on tools such as ATrousWaveletTransform, MultiscaleMedianTransform or UnsharpMask, or 'To Lightness' check boxes on tools such as HDRMultiscaleTransform. The CurvesTransformation tool also allows you to define curves specific to lightness and other components of several CIE color spaces. The advantage of working this way is huge: you can evaluate the result on the actual RGB image without having to figure out how the processed lightness will work when you combine it with the chrominance; PixInsight does the necessary calculations and transformations for you on the fly.
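As a rough mental model of what a 'To Lightness' or 'Target = Lightness' option does, consider the following sketch using scikit-image's CIE Lab conversion. The conversion library is my stand-in, not PixInsight's API; PI performs the equivalent color space transformations internally with its own color management.

```python
# Rough equivalent of processing only the lightness of an RGB image:
# convert to CIE Lab, denoise L* alone, convert back. scikit-image is
# a stand-in here for PixInsight's internal color space handling.
import numpy as np
from scipy.ndimage import median_filter
from skimage.color import rgb2lab, lab2rgb

def process_lightness(rgb, nr_size=3):
    lab = rgb2lab(rgb)                                       # rgb floats in [0, 1]
    lab[..., 0] = median_filter(lab[..., 0], size=nr_size)   # touch L* only
    return np.clip(lab2rgb(lab), 0.0, 1.0)                   # chrominance intact
```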

A notable exception to this rule is deconvolution, since LRGB images are nonlinear by nature. The Deconvolution tool only makes sense for linear data, so you have to apply it to the luminance component of the image before the LRGB combination. There are ways to avoid this 'problem', which involve relinearization (nice word, isn't it?) of the LRGB data, but in my opinion they are not justified in practice.
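To see why linearity matters here, recall that deconvolution inverts a convolution model: observed = object convolved with the PSF. That model only holds before any nonlinear stretch. Below is a bare-bones Richardson-Lucy iteration as a minimal sketch; PixInsight's Deconvolution tool implements regularized variants, so treat this purely as an illustration of the principle.

```python
# Bare-bones Richardson-Lucy deconvolution. The convolution model
# 'observed = object (x) PSF' only holds for LINEAR data, which is why
# deconvolution precedes stretching and LRGB combination.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=20):
    image = np.asarray(image, dtype=np.float64)
    psf = psf / psf.sum()                 # PSF must integrate to 1
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(image, 0.5)        # flat initial estimate
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / (conv + 1e-12)    # avoid division by zero
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est
```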

There are other steps that require linear data, such as gradient correction (not strictly a requirement, but DBE works better with linear images) and color calibration (no color calibration is possible with nonlinear data), and hence must be applied before LRGB combination, to L and RGB separately as appropriate.

In case you haven't already done it, I strongly encourage you to watch our latest video tutorial on this subject (two videos). You can find it on our website:

http://pixinsight.com/videos/NGC1808LRGB-vperis/en.html

and also on our YouTube channel:

http://www.youtube.com/user/PixInsight

In the second part of this video tutorial the HDRWaveletTransform tool (now HDRMultiscaleTransform) is applied to the separate nonlinear L and RGB images. This can be useful to prevent saturation of bright areas, but HDRMT can also be applied after LRGBCombination with similar results.

Hope this helps.
 
This is all very good information and extremely helpful. One thing I did notice, and it's not covered in these videos, is when to crop the combined images. As a matter of process, I've been cropping the RGB right after combining the individual R, G, and B images. I dither my guiding, and this, along with slewing back and forth for periodic focusing (usually every 2 hours), means the frames will not line up exactly. So the borders of the combined RGB image and the combined luminance will have slightly different orientations. All subs are aligned to the same single image, usually the first of the image series from the first night's run. Long story short, there is always an uneven border, as seen in the two attached images: Image01-RGB.jpg and Image02-Lum.jpg.

Now, when I use DynamicCrop to get rid of the uneven borders, I'll save the image and continue my processing, then repeat with the luminance data, knowing that the resulting cropped image will be different from the cropped RGB image. After completing my processing of these two (DBE, background neutralization (RGB), color balance (RGB), HST, HDRW and so on) and saving each, I will then combine the two images using the LRGBCombination tool after using the DynamicAlignment tool. What I've seen is that while the centers of the images are fine, and these tools are aligning and finding the proper corresponding stars in each image, it appears that the registered RGB image is warped, resulting in some of the outer star regions not aligning properly and giving a ghosting effect. At least that's what it seems to be doing, sometimes but not always. I just tried again with this data set and didn't see the issue, so maybe it was a bad registration on the previous attempt, but this still begs the question: when is it best to crop the images, before combining the luminance with the RGB after all other processing is done, or after? Keep in mind that certain processes could behave differently, such as DBE using an auto-populated model with a given number of samples per row, or any process that uses the total area of the image.
 

Attachments

  • Image01-RGB.jpg (59.4 KB)
  • Image01-RGB-C.jpg (57.6 KB)
  • Image02-Lum.jpg (50 KB)
  • Image02-Lum-C.jpg (48.3 KB)
Many thanks for this new tool, Juan. Being able to reduce noise in the linear image is something really powerful.

I gave it a try on one of my images and I was able to stretch it much more than before.
 
Juan,

I cannot find either transform in my downloads. I'm currently using 1.7.04.759 for x86_64, and I've uninstalled and re-installed twice. In addition, ACDNR is still the first of the favorite processes, and HDRWaveletTransform is gone but is not in the Compatibility category either. What am I missing?

Regards,

Charlie
 
Hi

I am finally having a go at noise reduction with this tool, and I find I have a problem removing small-scale noise; i.e., I seem to get single or small groups of dark pixels.

Can you help with a hint on how to remove these? :D

Regards Harry
 
Hi folks,

this is really a huge feature that brings eye-popping results! I just have a problem with setting the biases and the noise reduction settings in the different layers of the MMT tool.

Is there any way or hint that I could use, or does it rely on trial and error?
I will try the same settings on an M51 of mine, but I think I will not be able to use the same settings for a nebula, for instance.

Does somebody have a clue?

Thanks
 
Some self-promotion cannot harm ;)

You may be able to use the VaryParams script to test a set of values relatively quickly; see:
http://pixinsight.com/forum/index.php?topic=4108.0

With MMT it is a little bit tricky, as the parameters are in the form of a matrix. But it is easy to create an MMT process icon with some specific values in order to recognize them in the proposed parameter. In VaryParams you can cut and paste the parameter a couple of times (separated by commas), changing the values you want to understand. Depending on the result, you may execute the process again, changing another parameter.

I tend to do this with a large range of values for some parameters first (on a reasonably sized preview), to understand the impact of the parameter on the result (it helps me to have visual feedback) and the range where it may make sense. With that understanding I can set better values and fine-tune them quickly; a generic sketch of this coarse-to-fine approach follows.
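Outside of VaryParams, the same coarse-to-fine sweep can be expressed generically; the denoiser and the value grid below are placeholders, since VaryParams itself operates on PixInsight process parameters rather than Python functions.

```python
# Generic coarse-to-fine parameter sweep in the spirit of VaryParams:
# apply the same operation over a grid of values on a small preview,
# then compare the results visually. The denoiser is a placeholder.
from scipy.ndimage import median_filter

def sweep(preview, sizes=(3, 5, 9, 17)):
    """Return {parameter_value: processed preview} for comparison."""
    return {s: median_filter(preview, size=s) for s in sizes}

# First pass: wide, coarse grid to learn the parameter's effect;
# second pass: narrow grid around the most promising value.
```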

I must stress that you cannot just try values at random; it is neither feasible nor the philosophy of PI. The first pass is to get a better visual understanding of the impact of each parameter and its appropriate range. With the help of the documentation, you may compare the result with some other characteristic of the image (wavelet extraction, MTF...) to understand how the parameters relate to your image.

-- bitli
 
sreilly said:
...
What I've seen is that while the centers of the images are fine, and these tools are aligning and finding the proper corresponding stars in each image, it appears that the registered RGB image is warped, resulting in some of the outer star regions not aligning properly and giving a ghosting effect. At least that's what it seems to be doing, sometimes but not always.

Now this is very interesting....

I am very new to PixInsight, but I have noticed this on a couple of runs I have done with my data. As the images had been taken through a semi-apo refractor that didn't produce a very flat field, I assumed the effect was mainly due to the tool having problems with the elongated stars at the edges (the elongation differing depending on the filter used).

I will go back and have another look to see if I can reproduce it.

Phil
 