Hi!
I'm glad you liked the article. The text is short, but the introduction is usually the hardest part to write.
When applying the multiscale processing and creating the three images of varying scales, is the original image used in linear or non-linear form? Or does it matter?
All of this processing methodology works with non-linear data. The only point where I work with linear data is when building the HDR image. But after that step, the data is completely delinearized with a histogram transformation.
How did you create the three images of increasing scale? I know you can create the large-scale image with ATrousWavelets and then subtract it from the original to get a small-scale image. You are showing two smaller-scale images. How were those created? What are their scales?
The philosophy is the same as in the Masked Unsharp Mask technique. Take two images:
Image1 --> We remove the first three layers (we keep only structures from 8 pixels wide and up).
Image2 --> We remove the first seven layers (we keep only structures from 128 pixels wide and up).
If we subtract Image2 from Image1, we get an image with structures ranging from 8 to 128 pixels wide.
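In PixInsight you would simply disable those layers in ATrousWavelets, but in case it helps to see the arithmetic outside the tool, here is a minimal numpy/scipy sketch of what "remove the first n layers, then subtract" amounts to. The atrous_smooth helper and its name are mine, and the B3-spline kernel is just the usual choice for this kind of decomposition; take it as an approximation, not as the exact PixInsight implementation:

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_smooth(image, n_levels):
    """Smooth a 2-D image as the a-trous wavelet transform does, keeping only
    structures larger than roughly 2**n_levels pixels. This is equivalent to
    removing the first n_levels wavelet layers and keeping everything else."""
    b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0    # B3-spline kernel
    smooth = image.astype(float)
    for level in range(n_levels):
        step = 2 ** level                               # kernel dilation at this level
        kernel = np.zeros((b3.size - 1) * step + 1)
        kernel[::step] = b3
        # Separable convolution along both image axes
        smooth = convolve1d(smooth, kernel, axis=0, mode='reflect')
        smooth = convolve1d(smooth, kernel, axis=1, mode='reflect')
    return smooth

# image:  2-D numpy array with the stretched (non-linear) data
# image1 = atrous_smooth(image, 3)    # structures from ~8 pixels and up
# image2 = atrous_smooth(image, 7)    # structures from ~128 pixels and up
# band   = image1 - image2            # structures between ~8 and ~128 pixels
```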
Of course, this is classical band-pass filtering. With this technique, you filter structures according to a size interval. BUT those structures will be affected by the local illumination levels: they will lose contrast in the more illuminated areas.
The solution to this problem is the Local Contrast Normalization Function, which I will describe in future articles. It's a simple equation, applied through PixelMath.
What process/processes do you like to use to sharpen the small-scale images?
Once you make the structures independent of the illumination levels, one of the best options for the sharpening is CurvesTransformation. Yes, it sounds bizarre, but it is. You will see.
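CurvesTransformation is an interactive tool, so there is no single formula behind it, but numerically the idea is just to push the normalized structures image through a contrast-boosting curve. A toy sketch, where the logistic s_curve and the band_norm name are stand-ins of mine, not the actual curve you would draw:

```python
import numpy as np

def s_curve(x, strength=6.0):
    """Toy S-shaped curve mapping [0, 1] -> [0, 1]; it boosts contrast around
    the midtones, roughly what an interactively drawn curve would do."""
    y  = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp( strength * 0.5))    # value of y at x = 0
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))    # value of y at x = 1
    return (y - lo) / (hi - lo)

# band_norm: the band-pass image, contrast-normalized and rescaled to [0, 1]
# sharpened = s_curve(band_norm)
```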
Finally, how do you recombine the processed scaled images with the original? Addition with PixelMath? Perhaps you could show the formula/formulas you would use for recombining these images.
I usually make a simple addition in PixelMath. You control the addition by putting a weight on each image. So, if we have a small-scale (SS), a mid-scale (MS) and a large-scale (LS) image, the formula could be this one:
original + SS*j + MS*k + LS*l
where j, k and l are three different weights, one for each of the processed images.
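For example, with purely illustrative weights (not a recommendation; every image needs its own values, found by trial and error):

original + SS*0.8 + MS*0.5 + LS*0.3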
In the case of NGC 6914, the difference between before and after this multiscale processing methodology is not great at first glance. But in some cases it can be very powerful. See this figure, extracted from my processing example on NGC 7331:
The top-left image is the original one. The top-right is with HDRWT, showing the galaxy cores, but it doesn't show the faint IFNs. To make them visible you must stretch the image further (bottom-left); with this stretching, you lose the galaxy cores again. But if we apply the multiscale wavelet processing starting from the top-right image, we can have it all (bottom-right): we can show the galaxy cores and the IFNs perfectly at the same time.
An easier way would be to directly stretch the image until you see the IFN and then apply HDRWT. But this would result in a much "flatter" galaxy body.
With my methodology you don't get results quickly. The goal is to have everything under control. Starting from this basis, you can reach the final result you really desire. Working this way, I feel the image in my hands, as if I were making a sculpture.
Anyway, we will start by generating large-scale images. I recommend you start acquiring data on Barnard nebulae; these techniques are great fun with this kind of object.
Best regards,
Vicent.