Hi all,
I have been alerted to a thread on the Cloudy Nights forums where some users focus on a recurrent topic: star colors are destroyed by PixInsight (and also by other applications). This statement is, of course, not true, and since we have read the same opinions (with variants) too many times, I have decided to make a singular exception in this case and comment on opinions about our software posted on other public forums.
One of the participants in the same CN thread has posted a link to a TIFF image where the 'color destruction' problem is supposed to be verifiable. I hope he won't mind if I download the image to perform a few tests in PixInsight.
Here is the original 16-bit integer TIFF image open in PixInsight:
The image has a strong color cast, which can be identified by the misaligned histogram peaks in the figure above. This cast (actually, a different additive pedestal applied to each channel) can be removed very easily by clicking the AutoZero Shadows button on the HistogramTransformation tool:
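For readers curious about what this operation amounts to conceptually, here is a minimal NumPy sketch, assuming an RGB image normalized to [0,1]. This is not PixInsight's implementation, just the equivalent per-channel shadows rescale; the function name is mine:

```python
import numpy as np

def auto_zero_shadows(img):
    """Remove a per-channel additive pedestal by rescaling each
    channel linearly so its minimum maps to zero, leaving the
    white point (1.0) unchanged."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        lo = img[..., c].min()
        # linear rescale: [lo, 1] -> [0, 1]
        out[..., c] = np.clip((img[..., c] - lo) / (1.0 - lo), 0.0, 1.0)
    return out
```

Because each channel gets its own pedestal subtracted, the misaligned histogram peaks become aligned at zero without clipping any data above the per-channel minimum.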
Assuming that the intent is to get neutral gray stars in the upper row (judging from the 1:1:1 color ratios written on the image), the next thing to note is that the highlights are also unbalanced, since the stars in the top row have a strong green cast:
This can be fixed very easily with PixelMath or ColorCalibration. However, to keep this example as simple as possible, and since its purpose is just to demonstrate how star colors can be preserved easily and efficiently, I'll continue without altering the original white point of the image.
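Although I won't apply it here, the kind of multiplicative highlight balance mentioned above can be sketched as follows. This is a NumPy stand-in for what a one-line PixelMath expression could do; the function name and region argument are mine, purely for illustration:

```python
import numpy as np

def balance_on_region(img, region):
    """Per-channel multiplicative white balance: scale R, G and B so
    that their means over a reference region (assumed to be neutral,
    e.g. the 1:1:1 stars) become equal.
    `region` is any NumPy index, e.g. np.s_[10:40, 10:40]."""
    ref = img[region].reshape(-1, 3).mean(axis=0)
    scale = ref.mean() / ref      # one gain per channel, unit mean
    return np.clip(img * scale, 0.0, 1.0)
```

ColorCalibration does something far more elaborate (it derives the white reference from the image contents), but the principle of a per-channel multiplicative correction is the same.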
I'll implement two simplified approaches to stretch this image without damaging color in the highlights. The first approach involves the
MaskedStretch tool and a relatively strong application of ColorSaturation:
As you can see, especially if you look at the full-size screenshot, not only has color not been damaged, but no pixels have been saturated on the brightest stars after the stretch. Obviously, the pixels that were already saturated in the cores of the brightest stars in the original image remain saturated.
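For those interested in the principle behind MaskedStretch, here is a simplified NumPy sketch, not the actual implementation: a sequence of small midtones transfer function (MTF) stretches, each applied through a mask equal to the current image, so the brightest pixels receive progressively less stretching and are never pushed into saturation. The parameter names are mine:

```python
import numpy as np

def mtf(m, x):
    """PixInsight's midtones transfer function: mtf(m, 0) = 0,
    mtf(m, m) = 0.5, mtf(m, 1) = 1."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def mtf_for(x0, y0):
    """Midtones balance m such that mtf(m, x0) == y0."""
    return x0 * (1.0 - y0) / (x0 + y0 - 2.0 * x0 * y0)

def masked_stretch(img, target_bg=0.125, n=50):
    """Iterate n small MTF stretches toward a target median
    background, each one damped by the current image used as a mask,
    so highlights are protected from clipping."""
    out = np.clip(img.astype(np.float64), 0.0, 1.0)
    for i in range(n):
        med = np.median(out)
        # intermediate background level on the way to target_bg
        step = med + (target_bg - med) / (n - i)
        m = mtf_for(med, step)
        mask = out  # brighter pixels are better protected
        out = mask * out + (1.0 - mask) * mtf(m, out)
    return out
```

Because each stretched result is blended with the previous one through the image itself, a pixel near 1.0 is almost untouched at every iteration, which is why no new saturated pixels appear.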
The second approach uses a little-known image analysis tool:
AdaptiveStretch. This tool computes an optimal brightness/contrast transformation based on existing pixel-to-pixel relations and an initial noise estimate:
After AdaptiveStretch I have applied a moderately strong curve to the CIE c* component to increase color saturation. The result is noisier (this can be controlled easily with the
noise threshold parameter of AdaptiveStretch), but it shows a different way to approach this test problem, from a more theoretical perspective that allows us to understand the relationships between signal, noise and color.
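As a rough illustration of the chroma adjustment (again, a sketch, not what CurvesTransformation actually does internally): converting to CIE L*a*b* and scaling a* and b* together scales c* = sqrt(a*² + b*²) at constant lightness and hue. Here a simple linear gain stands in for the actual curve, and linear RGB in [0,1] is assumed:

```python
import numpy as np

# Linear RGB (D65) -> XYZ matrix; its row sums give the white point.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = M @ np.ones(3)

def f(t):
    d = 6.0 / 29.0
    return np.where(t > d**3, np.cbrt(t), t / (3 * d * d) + 4.0 / 29.0)

def finv(t):
    d = 6.0 / 29.0
    return np.where(t > d, t**3, 3 * d * d * (t - 4.0 / 29.0))

def boost_chroma(rgb, gain=1.5):
    """Scale CIE c* by `gain` at constant L* and hue: scaling a* and
    b* by the same factor scales their quadrature sum c* without
    shifting the hue angle. Expects linear RGB in [0,1]."""
    xyz = rgb @ M.T / WHITE                   # normalized X/Xn, Y/Yn, Z/Zn
    fx, fy, fz = f(xyz[..., 0]), f(xyz[..., 1]), f(xyz[..., 2])
    L, a, b = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    a, b = gain * a, gain * b                 # the "curve": a linear gain
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    xyz = np.stack([finv(fx), finv(fy), finv(fz)], axis=-1) * WHITE
    return np.clip(xyz @ np.linalg.inv(M).T, 0.0, 1.0)
```

Note that neutral pixels (a* = b* = 0) are left exactly unchanged by any chroma gain, which is precisely the property we want when the goal is to saturate star colors without disturbing the gray background.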
In real-world images, combinations of the HistogramTransformation and MaskedStretch tools, along with suitable protection masks generated mainly with the HistogramTransformation, CurvesTransformation, StarMask and RangeSelection tools, provide us with powerful ways to control image stretching, noise and color saturation in PixInsight. This is real image processing: identify and understand the problems first, then design appropriate strategies to solve them efficiently.
As for the other "methods" referred to in the CN thread, I'll make no additional comments. As you can imagine, I have zero interest in promoting practices based on a lack of knowledge of fundamental image processing topics.