Hi Andy,
"On opening in PixInsight, I use the rescale function to use the whole dynamic range."
I guess this is the problem, as Jack has also pointed out. If you rescale the individual RGB channels separately, then the three channels no longer refer to the same numerical range, and hence any resemblance to the original color balance will be purely coincidental.
Think of a RGB color image, not as three unrelated grayscale images put together, but as a single image where each pixel is a vector in a 3-D space (the RGB space in this case). The three components of each vector must be referred to the same system of coordinates for the image to make sense as a whole.
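To see this numerically, here is a tiny numpy sketch (illustrative only, not PixInsight code): two pixels of the same color at different brightness, plus a black pixel, rescaled both ways.

```python
import numpy as np

# A 3-pixel linear "image": two pixels of the same color (same R:G:B ratios,
# different brightness) and one black pixel.
rgb = np.array([[0.8, 0.4, 0.2],
                [0.4, 0.2, 0.1],
                [0.0, 0.0, 0.0]])

# Rescaling each channel separately to [0, 1] -- each channel gets its own
# min/max, so the three components end up in different coordinate systems.
cmin = rgb.min(axis=0)
cmax = rgb.max(axis=0)
per_channel = (rgb - cmin) / (cmax - cmin)

# Rescaling the image as a whole -- one global min/max for all channels,
# so the same transformation is applied to every component.
global_rs = (rgb - rgb.min()) / (rgb.max() - rgb.min())

# per_channel: both colored pixels become gray ([1,1,1] and [0.5,0.5,0.5]);
# the color information is destroyed.
# global_rs: the R:G:B proportions of both pixels are preserved.
print(per_channel)
print(global_rs)
```

The per-channel version turns both colored pixels gray, while the global rescale keeps their hue intact.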
So the correct procedure would be something like this:
1. Open the individual channel images produced by CCDStack. The 32-bit integer format is an excellent choice, since in this way you avoid all the problems related to undefined numerical ranges in floating point FITS images.
2. We assume that the channel images are linear at this point. Combine the R, G and B images into a single linear RGB color image with ChannelCombination. If you have to stretch the color data, stretch this combined image, and don't apply a different nonlinear function to each component. In general there is no reason to clip the highlights of the histogram, but if you do, do it equally for the three RGB components.
3. If you have a separate luminance image, which is also linear, you can process it before performing a LRGB combination. Keep in mind that most image restoration techniques should be applied to linear images. Deconvolution and RestorationFilter, in particular, must always be applied to linear images. ATrousWaveletTransform, DBE, and many noise reduction algorithms also work much better when the image is linear. The ScreenTransferFunction (STF) tool will help you see the image without actually stretching it.
4. If you have a separate luminance, you can use the LRGBCombination tool to combine L with the combined RGB image from step 2. However, LRGBCombination requires stretched (hence nonlinear) RGB and L images. This is because this tool works in the CIE L*a*b* space, which is a strongly nonlinear (human vision adapted) color space.
If you want to combine the linear color and luminance data into a single image while preserving their linearity, an alternative to LRGBCombination is as follows:
- Open the RGBWorkingSpace tool and set Gamma to one. You must disable the "Use sRGB Gamma Function" option to change Gamma. The luminance coefficients are not critical here, but for DS images a good choice is usually setting all of them to one, so each component has the same weight in the luminance calculation. Apply this instance of RGBWorkingSpace to both the linear RGB image and the linear luminance image.
- Extract the X and Z components of the CIE XYZ space from the linear RGB image with the ChannelExtraction tool. Do not extract the Y component.
- Combine X and Z with the linear luminance, using it as the Y component of CIE XYZ, with ChannelCombination.
In this way you have a linear color image where the linear luminance has replaced the original Y component of the RGB data. Since the CIE XYZ space is linear and the working RGB space uses a linear Gamma function, the linearity of the data has not been altered at all. Now you have a linear color image that is a YRGB combination. Cool, isn't it? 8)
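The three steps above can be sketched in plain numpy. This is just an illustration of the mechanics, not PixInsight code: PixInsight derives its RGB↔XYZ conversion from the RGBWorkingSpace settings, whereas here I borrow the standard sRGB/D65 matrix purely as a stand-in.

```python
import numpy as np

# Linear RGB -> CIE XYZ matrix for sRGB primaries, D65 white point.
# (A stand-in; PixInsight builds its own matrix from RGBWorkingSpace.)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
M_inv = np.linalg.inv(M)

def replace_luminance(rgb, lum):
    """Replace the Y component of a linear RGB image with a separate
    linear luminance image, keeping the data linear throughout."""
    xyz = rgb @ M.T        # convert to XYZ; X and Z are kept as-is
    xyz[..., 1] = lum      # the luminance becomes the new Y component
    return xyz @ M_inv.T   # back to linear RGB

# Tiny example: a 2x2 linear color image plus a separate luminance frame.
rgb = np.random.rand(2, 2, 3)
lum = np.random.rand(2, 2)
out = replace_luminance(rgb, lum)

# Converting the result back to XYZ recovers exactly the substituted Y.
assert np.allclose((out @ M.T)[..., 1], lum)
```

Because both transformations are linear matrices, scaling the input data scales the output by the same factor: linearity is never broken.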
The basic ideas are:
- Don't stretch individual RGB components separately because this will destroy any existing color balance.
- If you work with previously calibrated RGB components (as is the case with CCDStack), achieving a good color balance is usually a matter of background neutralization. As Jack has said, doing this by adjusting the histograms (setting different white points, or applying different midtones balance values) is in general a bad idea.
- Try to preserve the linearity of the data as much as possible. Linear images are much easier to handle, and some algorithms cannot work properly with nonlinear images.
- Use ScreenTransferFunction to work with linear images comfortably.
- Perform a nonlinear stretch (HistogramTransformation, CurvesTransformation) only when you know that you no longer need linearity. Usually, this happens at the final stages of the entire processing work.
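When that final stretch comes, the key point is to apply one and the same nonlinear function to all three channels. As a sketch in plain numpy (again, not PixInsight code), here is a midtones transfer function of the kind HistogramTransformation applies, fixing 0 and 1 and mapping the midtones balance m to 0.5:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: mtf(0) = 0, mtf(1) = 1, mtf(m) = 0.5.
    Values of m below 0.5 brighten the midtones."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

img = np.random.rand(4, 4, 3)   # a linear RGB image with values in [0, 1]

# The same stretch, with the same midtones balance, on all three channels:
stretched = mtf(img, 0.25)
```

Applying one m to R, G and B together preserves the color balance; three different m values would not.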
I hope this will help you. Let us know if you have more questions, or if you disagree with anything.