Ok, here I am. Sorry for the late post, but I've been very busy setting up a new Linux box that will allow us to create the new 64-bit and 32-bit versions of PixInsight.
Harry Page has kindly uploaded a raw image of M3 to our anonymous ftp server. I have used this image to create a basic processing example, which I'll describe below. I stress the word *basic* here: don't expect a spectacular result or a sophisticated procedure, but just something basic and simple that you can use as a starting point to process your images. I think this is what was intended with this thread.
Here is the final processed image at 1024 pixels wide:
http://pixinsight.com/tmp/m3-hpage/totalstack_pi.1024.jpg

======================================================================
1. Open raw image and apply a Screen Transfer Function (STF)

http://pixinsight.com/tmp/m3-hpage/01.jpg

As you probably know already, an STF allows us to apply a nonlinear stretch to the screen rendition of an image in PixInsight. An STF doesn't modify the target image in any way, just its representation on the screen. STFs are extremely useful because they allow us to inspect and evaluate raw linear images. As you know, a deep-sky linear image acquired with a CCD or CMOS sensor usually concentrates most of the data in a narrow peak near the left end of the histogram (shadows). For this reason, these images are very dark, almost black, when they are represented on nonlinear display media adapted to human vision, such as a computer monitor.
The ScreenTransferFunction interface is available in the Process Explorer window, under the Transfer Curves category. It is also included in the default Favorites category.
Indeed the image has strong light pollution gradients, and also shows some vignetting. This can be readily seen in the screenshot above.
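For the curious, here is a minimal Python/NumPy sketch of what an automatic screen stretch does conceptually. I'm assuming the usual auto-STF convention (shadows clipped at median - 2.8*MAD, background mapped near 0.25); the function names and exact constants are my own illustration, not PixInsight code:

```python
import numpy as np

def mtf(m, x):
    # Midtones transfer function: mtf(m, 0) = 0, mtf(m, m) = 0.5, mtf(m, 1) = 1.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_screen_stretch(img, target_bg=0.25, sigma_clip=-2.8):
    # img: linear image as a NumPy array normalized to [0, 1].
    med = np.median(img)
    mad = 1.4826 * np.median(np.abs(img - med))     # robust sigma estimate
    c0 = max(0.0, med + sigma_clip * mad)           # shadows clipping point
    x = np.clip((img - c0) / (1.0 - c0), 0.0, 1.0)  # rescale after clipping
    m = mtf(target_bg, (med - c0) / (1.0 - c0))     # midtones balance that
    return mtf(m, x)                                # sends the sky to target_bg
```

Again, in PixInsight this happens only on the screen rendition; the pixel data stay strictly linear.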
2. Initial Crop

http://pixinsight.com/tmp/m3-hpage/02.jpg

I decided to crop the image for two reasons. First, the main object is not centered in the frame and, as far as I know, there are no other interesting objects in it (please correct me if I'm wrong). Second, I prefer to crop out the corners, where the vignetting is severe.
The DynamicCrop process is under the Geometry category.
3. First try with DynamicBackgroundExtraction (DBE)

http://pixinsight.com/tmp/m3-hpage/03.jpg

The DBE process allows us to generate synthetic background models. Such models are accurate representations of the sky background as it is represented in the image, and they have many applications. In our case, we are going to use a background model to correct for sky gradients and vignetting.
As you can see in the screenshot, I used nearly default DBE parameters. I just had to decrease the Minimum sample weight value to obtain automatically generated samples over the darkest corners of the image. Then I generated 16 samples per row by clicking the Generate button.
4. Inspecting DBE samples

http://pixinsight.com/tmp/m3-hpage/04.jpg

The new DBE interface in PixInsight Standard is much better than the old version included in PixInsight LE. One of the most important improvements is its ability to reject small bright objects in DBE samples. You can verify how this works in the screenshot. As you see, I selected a sample that falls over some stars. The square graphic on the DBE interface represents the image pixels in the currently selected DBE sample. Black pixels in the graphic correspond to image pixels that are completely rejected, and hence not considered part of the background. White pixels are pure background pixels, and gray pixels represent image pixels weighted by their statistical likelihood of belonging to the background.
5. Evaluating the generated background model

http://pixinsight.com/tmp/m3-hpage/05.jpg

This screenshot shows the generated background model. It isn't a good model, for an obvious reason: it has a bright central region, which reveals that the model has included too many pixels from cluster stars. If we used this model to correct for uneven illumination, we would overcorrect the central regions.
So let's start again from scratch.
6. Controlling the Tolerance DBE parameter

http://pixinsight.com/tmp/m3-hpage/06.jpg

The Tolerance DBE parameter controls how restrictive the pixel rejection algorithm is. A lower tolerance means that more pixels will be rejected, that is, considered as not belonging to the background. As you can see in the screenshot, I lowered Tolerance to 0.25. Other than that, I just generated 16 samples per row automatically, as before.
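To make the role of Tolerance concrete, here is a toy Python sketch of per-sample pixel rejection. This is not DBE's actual algorithm, just the general idea: pixels far above the sample's background level, measured in robust sigma units, are rejected, and a lower tolerance rejects more of them:

```python
import numpy as np

def sample_weights(sample, tolerance=0.25):
    # sample: 2-D array of pixels inside one DBE sample.
    med = np.median(sample)
    sigma = 1.4826 * np.median(np.abs(sample - med))  # robust sigma (MAD)
    d = (sample - med) / max(sigma, 1e-10)            # deviation in sigma units
    # Weight 1 = pure background (white in the sample graphic),
    # weight 0 = fully rejected (black), fractional weights = gray.
    return np.clip(1.0 - (d - tolerance) / tolerance, 0.0, 1.0)
```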
7. DBE, Second Round

http://pixinsight.com/tmp/m3-hpage/07.jpg

Now we have a much better background model. Note that it doesn't show a central prominence, as in step 5. It could probably be improved further by decreasing the tolerance a bit more.
8. Subtracting the background model

http://pixinsight.com/tmp/m3-hpage/08.jpg

The PixelMath interface can be used to operate with images in many different and sophisticated ways. In our example, we'll use it just to subtract the background model from the original image. As you can see, this is very easy: just write the corresponding expression (formula) using the identifiers of the involved images.
In the expression, $target means "the image to which I'll apply PixelMath". It could be replaced with "totalstack" and the result would be the same in this case.
Why subtract? Because sky gradients due to light pollution are additive effects, i.e. they are caused by light that is summed to the image during the exposure. Vignetting, on the other hand, is a multiplicative effect: it filters the light in a constant way (for a given pixel) before it reaches the sensor during the whole exposure. So to correct for vignetting we should divide the image by the background model, just as in a standard flat-fielding operation. We actually have a mixture of additive and multiplicative uneven illumination effects in this example. In such cases it is customary to subtract the background, which usually gives sufficiently accurate results (mainly because sky gradients are usually much stronger than vignetting).
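As a small illustration of why the operation differs for additive and multiplicative effects, here is a hedged Python sketch (the pedestal/normalization details are my own choices; in PixInsight the subtraction is a one-line PixelMath expression):

```python
import numpy as np

def correct_illumination(img, model, mode="subtract"):
    # img, model: arrays normalized to [0, 1].
    if mode == "subtract":
        # Additive effects (light pollution gradients): remove the model,
        # adding its median back as a pedestal to preserve the sky level.
        out = img - model + np.median(model)
    else:
        # Multiplicative effects (vignetting): divide, as in flat fielding,
        # normalizing by the model's median so the overall level is kept.
        out = img * (np.median(model) / np.maximum(model, 1e-10))
    return np.clip(out, 0.0, 1.0)
```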
9. Background subtracted

http://pixinsight.com/tmp/m3-hpage/09.jpg

This is what we get after subtracting the background model from the original image. The result is not satisfactory. The reason is that there are some uneven illumination artifacts, in the form of dark blotches, distributed over the whole image. This is likely due to a less-than-perfect image calibration; specifically, to bad flat field frames. In addition, the upper right corner still shows some vignetting. So we haven't finished our work with DBE: another background modeling/correction step is necessary.
10. Second DBE iteration

http://pixinsight.com/tmp/m3-hpage/10.jpg

Here you have the second DBE parameter set that I used. This time the Tolerance value is much more restrictive: 0.1. This is because after the first DBE iteration we have a much more uniform background, so we now have to correct for relatively slight irregularities.
I generated many more samples this time: 24 samples per row. This is necessary to model the relatively small dark artifacts that we want to fix.
Note also that I manually edited a few samples (deleted some, moved others) near the cluster. This is to prevent contributions from stars to the new background model.
11. Second DBE background model

http://pixinsight.com/tmp/m3-hpage/11.jpg

This is the second background model. Quite good, I'd say 8)
12. Background subtracted, second iteration

http://pixinsight.com/tmp/m3-hpage/12.jpg

Now we have a very uniform illumination over the whole image. Mission accomplished!
13. Conversion to 32-bit floating point format

http://pixinsight.com/tmp/m3-hpage/13.jpg

This step is not strictly required in this example (since we are doing quite simple things, 16 bits would suffice), but in general, working with a 32-bit format is highly recommended.
The 32-bit floating point data format will allow us to apply some aggressive processing transformations without problems due to accumulated roundoff errors. For more complex processing work, PixInsight fully supports the 32-bit integer format, which provides a huge working space of 2^32 discrete sample values. For even more critical processing tasks (mostly high dynamic range transformations), the 64-bit floating point format (on the order of 10^15 discrete values) is also available.
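The figures above are easy to verify; counting significand steps for the 64-bit floating point case is where the order-of-10^15 estimate comes from:

```python
print(2 ** 16)  # 65536 values: 16-bit integer
print(2 ** 32)  # 4294967296 values: 32-bit integer
print(2 ** 53)  # about 9.0e15 values: 64-bit float significand steps in [0, 1]
```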
Note that the image, at this point, is still linear. Remember that so far we have been working with an active Screen Transfer Function (STF). In the screenshot above, the STF has been reset, and for this reason the image appears very dark on the screen.
14. Nonlinear stretch with HistogramTransform

http://pixinsight.com/tmp/m3-hpage/14.jpg

The screenshot speaks for itself. As always, this step is crucial and must be done very carefully: what the final image will be depends largely on how this initial stretch is performed. Remember these two rules (a sketch of the underlying math follows them):
- Do not clip a single pixel at the highlights. Never. No excuses.
- The sky is not black. Don't pretend that.
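If you want to see what the histogram tool is doing mathematically, here is a minimal sketch of a permanent midtones-based stretch. It reuses the MTF from the STF sketch above; the highlight-clipping check is there only to enforce rule one, and all names and defaults are my own:

```python
import numpy as np

def mtf(m, x):
    # Midtones transfer function, as in the STF sketch.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def histogram_stretch(img, shadows=0.0, highlights=1.0, midtones=0.25):
    clipped = int(np.count_nonzero(img > highlights))
    if clipped > 0:
        print(f"warning: {clipped} pixels clipped at the highlights")  # rule 1
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, x)  # rule 2: keep shadows conservative, sky not black
```

Unlike an STF, this transformation modifies the actual pixel data, which is why it must be done so carefully.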
15. HDR Wavelet Transform

http://pixinsight.com/tmp/m3-hpage/15.jpg

The HDRWaveletTransform process (Wavelets category) is an ideal tool for working with globular clusters. This is because the core of a globular cluster, as happens with the bright cores of most galaxies, poses a high dynamic range problem.
Definitely, forget other primitive approaches (yes, I refer to DDP) and take advantage of cutting-edge algorithms and implementations in PixInsight. The screenshot above shows what I'm talking about.
The HDR wavelet transform algorithm doesn't work well with linear data. This is why we have to apply it after the initial nonlinear stretch.
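As a rough illustration of the principle (not of the actual HDR wavelet algorithm), the following Python/SciPy sketch splits the stretched image into a large-scale component and residual detail, compresses only the former, and recombines. HDRWaveletTransform does this properly over a full wavelet decomposition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_compress_sketch(img, scale=64.0, amount=0.75):
    # img: single-channel, nonlinear (already stretched) image in [0, 1].
    large = gaussian_filter(img, sigma=scale)  # large-scale structures
    detail = img - large                       # small-scale residual detail
    compressed = large ** (1.0 - amount)       # flatten the bright core
    out = compressed + detail                  # recombine...
    out -= out.min()
    return out / max(out.max(), 1e-10)         # ...and renormalize to [0, 1]
```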
16. Building an ACDNR mask

http://pixinsight.com/tmp/m3-hpage/16.jpg

Time to go for some noise reduction. ACDNR has an integrated mask generation feature that works very well with the Real Time Preview interface.
Remember that masks protect where they are black. We want to protect the stars in the cluster's core and its halo, while applying full noise reduction over the background, where the signal-to-noise ratio is very low.
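Conceptually, such a mask can be as simple as an inverted, stretched luminance. A minimal sketch (ACDNR builds its mask internally; this just shows the idea, with the midtones value as an assumed parameter):

```python
import numpy as np

def protection_mask(img, midtones=0.25):
    # Quick luminance estimate: channel average for an RGB array.
    lum = img.mean(axis=-1) if img.ndim == 3 else img
    # Stretch the luminance with the midtones transfer function...
    m = ((midtones - 1.0) * lum) / ((2.0 * midtones - 1.0) * lum - midtones)
    # ...and invert it: black (protected) over stars and the cluster core,
    # white (full noise reduction) over the low-SNR background.
    return 1.0 - m
```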
17. Defining ACDNR parameters for the luminance

http://pixinsight.com/tmp/m3-hpage/17.jpg

ACDNR performs separate noise reductions for the luminance and chrominance components of the image. The most efficient method for fine-tuning ACDNR parameters is to disable the chrominance part (select the Chrominance tab page and uncheck the Apply option) and work with one or more previews in Luminance display mode.
Edge protection parameters must be accurately adjusted to avoid damaging significant image structures. These are the most critical ACDNR parameters. Experience and practice are required to achieve the best results; noise reduction is always a delicate task and each image is a completely different problem.
18. Use previews

http://pixinsight.com/tmp/m3-hpage/18.jpg

Always use previews defined on image regions of particular interest. They will save you a lot of time and will allow you to achieve the best possible results.
19. Defining ACDNR parameters for the chrominance

http://pixinsight.com/tmp/m3-hpage/19.jpg

Chrominance ACDNR parameters are usually less critical than their luminance counterparts. This is because most of the detail in the image is perceived through the luminance. However, applying too aggressive a noise reduction to the chrominance will destroy small-scale structures with significant chrominance content. Small stars are good examples.
To work with chrominance ACDNR parameters, do the opposite: select the Luminance tab page and uncheck its Apply option, then check the corresponding option on the Chrominance page. The "CIE a*=R b*=G" display mode is very useful for working with the chrominance. In this mode, the CIE a* component is represented as red on the screen, and the CIE b* component as green. The advantage of this display mode is that you can see the chrominance of the image completely isolated from its luminance. Another option, although less appropriate for noise reduction, is the "L*=0.5" display mode, in which the chrominance is shown in true color.
20. Chrominance noise reduction in "CIE a*=R b*=G" display mode

http://pixinsight.com/tmp/m3-hpage/20.jpg

Another screenshot, mainly for before/after comparison.
21. ACDNR results

http://pixinsight.com/tmp/m3-hpage/21.jpg

Finally, noise reduction must be tested on some previews for both luminance and chrominance (in the normal RGB display mode, of course) before applying the process to the image. I think the result obtained in this example is quite good, especially considering the large amount of noise we have had to deal with.
22. Additional stretch after noise reduction

http://pixinsight.com/tmp/m3-hpage/22.jpg

After noise reduction, the histogram of the image is no longer dominated by the noise. This has a very nice consequence: there is an unused part of the histogram at the shadows, which can be clipped to improve the overall contrast.
In the screenshot, note that I clipped just 52 pixels in the image (0.0015%). The number of clipped pixels must always be watched carefully on the HistogramTransform interface.
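The percentage is just clipped pixels over total pixels; the figures quoted imply an image of roughly 3.5 million pixels (52 / 0.000015). A trivial sketch of the bookkeeping:

```python
import numpy as np

def report_shadows_clipping(img, shadows):
    # Count pixels that a given shadows clipping point would send to zero,
    # mirroring what the HistogramTransform interface reports interactively.
    clipped = int(np.count_nonzero(img < shadows))
    print(f"{clipped} pixels clipped ({100.0 * clipped / img.size:.4f}%)")
    return clipped
```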
23. Large scale noise reduction

http://pixinsight.com/tmp/m3-hpage/23.jpg

If desired (not actually necessary in this example), a second instance of ACDNR can be applied for noise reduction at larger scales. Note the very protective mask and edge protection values, combined with a large standard deviation of the ACDNR filter.
======================================================================
Final considerations

One problem with this image is that it is very weak in the reds. For this reason we don't see red stars in the cluster, which is a pity because they are an important ingredient of globular cluster images. For this reason, too, I haven't applied a color saturation boost, which normally would be a standard step in a processing job like this one. Increasing color saturation for the reds in this case would generate a large amount of noise without any real benefit. I don't know the cause of this weakness in the reds, but I suspect that the problem originates in the preprocessing.
As you'll see in all the screenshots that I've included in this example, I saved all processes as process icons on PixInsight's workspace. These icons can be saved as a .psm file for later use. Process icons are very important in PixInsight, and one of its strongest points. Here is the .psm file that I saved for this example:
http://pixinsight.com/tmp/m3-hpage/m3_hpage.psm

To load this file, right-click on the workspace and select Load Process Icons from the context menu.
The image is very nice and, once correctly processed, it is much better than many globular cluster images that we regularly see posted on forums, including many acquired with superior instruments. As you can see, the fact that you live in a light-polluted area need not stop you from imaging the deep sky.
Thanks to Harry Page for letting us work with his raw data.
Time permitting, I'll be glad to try more raw data, if you think I can help.