First I will explain why and how I took the images in the first place, then go on to the processing. This was a first for me in a few respects:
1. First CCD image of the Moon.
2. First time using StarAlignment to align the Moon, no stars visible.
3. First time using Narrowband filters on the Moon.
Some time ago I tried to image the Moon with my scope and CCD camera (Takahashi FSQ-106ED and SX-Trius 814 mono), but the result was badly over-exposed, as the CCD camera is too sensitive even with a 1/1000 s exposure. So I ordered a neutral density (ND) filter (density 0.8).
Unfortunately, I guessed the density wrong and the images were still over-exposed. So I tried imaging through both the ND filter and the narrowband filters: the Ha and NIR subs were still over-exposed, but the OIII and SII ones were perfect.
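As a side note for anyone choosing their own filter: an ND filter of optical density d transmits a fraction 10^-d of the light, so a 0.8 filter only passes about 16 per cent (roughly 2.7 stops), which clearly was not enough here. A quick sketch of the arithmetic in Python (the densities in the loop are just examples, not recommendations):

```python
import math

def nd_transmission(density: float) -> float:
    """Fraction of light an ND filter of the given optical density passes."""
    return 10.0 ** (-density)

def nd_stops(density: float) -> float:
    """Equivalent reduction in photographic stops (factors of two)."""
    return density / math.log10(2.0)

for d in (0.8, 1.8, 3.0):
    print(f"ND {d}: passes {nd_transmission(d):.1%}, "
          f"about {nd_stops(d):.1f} stops")
```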
So I made an image using those two filters with the colours mapped as follows:
Red = SII
Green = (SII * 0.5) + (OIII * 0.5)
Blue = OIII
Note: SII is in the red part of the spectrum and OIII is in the blue-green, so they do produce roughly the correct colours. I took 15 subs with each filter, aligned and processed them in PixInsight, and got the attached image.
The workflow: first of all, this is based on a forum post by Ignacio (http://pixinsight.com/forum/index.php?topic=5908.15), without which I would not have known where to start, so thank you Ignacio.
I am showing full-size crops of a small part of the image as we go through the tutorial, so you can see the changes at each stage.
As I was only experimenting I did not take any flats (next time I will), but I did have bias frames, so the first task was to calibrate the subs with the bias frames. This I did with ImageCalibration, but you can use the BatchPreprocessing script if that is what you're used to.
Next, all the subs need to be aligned. I am not saying the settings shown here are the right ones; I certainly have not experimented, as the first values I tried worked for me and are the same ones Ignacio used. All values are at their defaults except those marked in yellow. You also need, of course, to select a reference image, the images to be aligned and the output directory. In this example I am also using drizzle to get a final image larger than the camera produces: as my camera's field of view is 1.5 degrees across, the Moon (0.5 deg) is rather small in the frame. And I have to say that the increase in detail is amazing; drizzle really does work.
First, StarAlignment. Yes, I know there are no stars, but it does work; you just need to make some changes to the settings, as shown.
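To put some numbers on the field-of-view point above: the Moon at roughly 0.5 degrees covers only about a third of a 1.5 degree field. The little sketch below estimates its size in pixels and what drizzle gives; note that the 3388-pixel sensor width and the 2x drizzle scale are my assumptions for illustration, not figures from this post.

```python
# Rough image-scale arithmetic. The 3388 px sensor width (Trius-SX 814)
# and the 2x drizzle scale are assumptions, not values from this post.
fov_deg = 1.5            # field of view across the frame
moon_deg = 0.5           # apparent diameter of the Moon
sensor_width_px = 3388   # assumed sensor width in pixels
drizzle_scale = 2        # assumed drizzle scale factor

fraction = moon_deg / fov_deg
moon_px = fraction * sensor_width_px
print(f"Moon spans about {fraction:.0%} of the frame, roughly {moon_px:.0f} px")
print(f"After {drizzle_scale}x drizzle: about {moon_px * drizzle_scale:.0f} px")
```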
Each set of subs (one for OIII and one for SII) needs to be integrated (stacked). I used the same settings as I use for deep-sky images, as shown in the tutorial by Vicent Peris [http://www.pixinsight.com/tutorials/master-frames/index.html]. Again, for this example 'Generate drizzle data' is checked. The Sigma high and low values are set lower than usual for deep-sky images.
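For anyone unfamiliar with those rejection parameters: sigma clipping throws away any pixel that sits more than 'Sigma low'/'Sigma high' standard deviations below/above the stack's central value, so lowering the thresholds rejects more aggressively. Here is a minimal numpy illustration of the idea; it is not PixInsight's implementation, which also normalizes and weights the frames.

```python
import numpy as np

def sigma_clip_mean(stack, sigma_low=3.0, sigma_high=2.5, iters=3):
    """Average a stack of frames, rejecting outliers beyond the given
    low/high sigma thresholds. stack has shape (n_frames, h, w)."""
    data = np.asarray(stack, dtype=np.float64)
    mask = np.ones_like(data, dtype=bool)          # True = keep this pixel
    for _ in range(iters):
        kept = np.where(mask, data, np.nan)
        centre = np.nanmedian(kept, axis=0)
        sigma = np.nanstd(kept, axis=0)
        dev = data - centre
        mask = (dev >= -sigma_low * sigma) & (dev <= sigma_high * sigma)
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# Example: 15 noisy frames with one hot pixel; the thresholds reject it.
frames = np.random.normal(0.5, 0.01, size=(15, 4, 4))
frames[3, 2, 2] = 1.0                              # simulated outlier
print(sigma_clip_mean(frames, sigma_low=2.5, sigma_high=2.0)[2, 2])
```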
As drizzle is being used, DrizzleIntegration is performed next.
After integration we end up with two images: the small one is the standard integration and the larger one is the drizzled version. The image below shows both right after integration, and for some reason they have a different brightness. Note this is before a ScreenTransferFunction (STF) has been applied.
The images are now cropped with DynamicCrop to remove some of the black sky, using the same crop settings on both so they stay aligned.
So here is the starting point, the registered and integrated OIII image. Remember, these are crops of a small part of the image at full resolution.
I then performed Deconvolution on each integrated set. For the non-drizzled image I used a StdDev of 0.5 with the Parametric PSF, no deringing and just one pass. For the drizzled image I used a StdDev of 1.5 and also enabled deringing (Global dark at 0.05, Global bright at 0.03), which gave a better result. A second pass of deconvolution with the same settings, except with the StdDev changed to 0.9, enhanced the fine detail.
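For readers who want to experiment with the same idea outside PixInsight: the Parametric PSF with a given StdDev is essentially a Gaussian kernel of that width, which the image is then deconvolved against. The sketch below builds such a PSF and runs plain Richardson-Lucy from scikit-image; it has no deringing or regularization, and is only meant to show the role of the StdDev parameter, not to reproduce the PixInsight result.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(std_dev, size=15):
    """Normalized 2-D Gaussian kernel, analogous to the Parametric PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * std_dev**2))
    return psf / psf.sum()

# image: a linear 0..1 float image (e.g. the drizzled integration);
# random data is used here only as a placeholder.
image = np.random.random((64, 64)).astype(np.float64)

pass1 = richardson_lucy(image, gaussian_psf(1.5), 20)   # first pass, StdDev 1.5
pass2 = richardson_lucy(pass1, gaussian_psf(0.9), 20)   # second pass, StdDev 0.9
```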
Here is the deconvolved image: (First Pass)
And the deconvolved image: (Second Pass)
I used LinearFit on the two images (OIII & SII) so that when they are combined to produce an RGB image the colour balance should be correct.
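LinearFit works by fitting a straight line between the pixel values of the two images and rescaling one so it matches the other. A rough numpy equivalent of the idea (PixInsight also rejects pixels outside user-set limits before fitting, which is skipped here):

```python
import numpy as np

def linear_fit(target, reference):
    """Rescale `target` so its pixel values best match `reference`
    in a least-squares sense: reference ~= a * target + b."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

# Quick check with synthetic data: an exact linear relation is recovered.
ref = np.random.random((32, 32))
tgt = 0.5 * ref + 0.1
print(np.allclose(linear_fit(tgt, ref), ref))   # True

# e.g. bring the OIII integration onto the same scale as the SII one:
# oiii_fitted = linear_fit(oiii, sii)
```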
The conversion to RGB was done in PixelMath; a synthetic green channel is made from 50% OIII plus 50% SII.
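In PixelMath these are just the three expressions listed earlier (R = SII, G = 0.5*SII + 0.5*OIII, B = OIII), output as a new RGB image. For anyone following along outside PixInsight, the same arithmetic in numpy looks like this; sii and oiii stand for the two linear-fitted integrations:

```python
import numpy as np

def combine_to_rgb(sii, oiii):
    """Map SII -> red, OIII -> blue, and a 50/50 blend -> green."""
    green = 0.5 * sii + 0.5 * oiii
    return np.dstack([sii, green, oiii])   # shape (h, w, 3)

# rgb = combine_to_rgb(sii, oiii)
```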
We now have a colour image of the Moon, but at this stage you will hardly see any colour at all. To make sure the colour balance is right, a preview of the black sky is made and BackgroundNeutralization is used. No further colour calibration should be needed.
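As I understand it, BackgroundNeutralization measures each channel inside the sky preview and adjusts the channels so those background levels agree, removing any overall colour cast. A much-simplified numpy version of that idea (PixInsight offers several correction modes; this sketch only shifts each channel to a common target level):

```python
import numpy as np

def neutralize_background(rgb, bg_region, target=None):
    """Shift each channel so the background region has the same median.
    rgb: (h, w, 3) array; bg_region: (row_slice, col_slice) of blank sky."""
    medians = np.array([np.median(rgb[bg_region + (c,)]) for c in range(3)])
    if target is None:
        target = medians.mean()
    return np.clip(rgb - medians + target, 0.0, 1.0)

# e.g. using a preview covering rows 0..100, columns 0..100 of black sky:
# neutral = neutralize_background(rgb, (slice(0, 100), slice(0, 100)))
```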
The image is then made non-linear with HistogramTransformation to get the brightness as you want it. Further CurvesTransformations are applied to adjust contrast and colour saturation. I deliberately did not overdo the saturation; it is all too easy to go too far and produce an over-saturated Moon that looks false.
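The stretch in HistogramTransformation is driven by the midtones transfer function: for a midtones balance m, an input pixel x maps to ((m - 1) * x) / ((2m - 1) * x - m). A small Python version you can apply to a 0-1 image to mimic a basic stretch (the curves and saturation tweaks are separate steps):

```python
import numpy as np

def mtf(x, m):
    """PixInsight-style midtones transfer function.
    m < 0.5 brightens the midtones, m > 0.5 darkens them."""
    x = np.asarray(x, dtype=np.float64)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

print(mtf(0.25, 0.25))   # 0.5: the midtones balance value maps to mid-grey
```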
I extracted the luminance and did further work on the L image to enhance the sharpness: UnsharpMask, HDRMultiscaleTransform, more CurvesTransformations for contrast, and a final UnsharpMask.
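UnsharpMask is the classic technique of subtracting a Gaussian-blurred copy and adding the difference back, scaled by an amount. A bare-bones version for reference; the StdDev and amount below are illustrative, not the values used on this image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, std_dev=2.0, amount=0.8):
    """Classic unsharp mask: boost the detail that the blur removes."""
    blurred = gaussian_filter(image, sigma=std_dev)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# lum_sharp = unsharp_mask(lum, std_dev=2.0, amount=0.8)
```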
Luminance before further processing:
Luminance after further processing, now sharper and more contrasty:
This luminance was then added back to the RGB image using LRGBCombination.
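Conceptually, LRGBCombination converts the RGB image to a lightness/chrominance representation, swaps in the processed luminance and converts back. A rough stand-in using CIE L*a*b* from scikit-image, which is not exactly what PixInsight does internally but shows the idea:

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    """Replace the lightness of `rgb` (h, w, 3, values 0..1) with the
    processed luminance `lum` (h, w, values 0..1), keeping the colour."""
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0     # L* runs from 0 to 100 in CIE L*a*b*
    return np.clip(lab2rgb(lab), 0.0, 1.0)

# lrgb = lrgb_combine(rgb_stretched, lum_processed)
```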
Here is the LRGB image:
We are nearly there: a few more fine adjustments with LocalHistogramEqualization and Curves, and finally ACDNR to reduce noise, most of which was generated by all the previous operations. Most of the noise reduction was applied to the chrominance.
Here is the final image at a reduced size:
Visit Astrobin for the full size image.
http://astrob.in/102018/0/
I am learning all the time, and I suspect the LRGB combination could have been done better. All I did was extract the L and add it back after processing, but I think that at the time of extraction the R, G and B channels should also be extracted, and the separate L, R, G and B channels combined later on.
I also missed out CosmeticCorrection before integration!
Anyway, I hope this will help someone.
Mike