I always enjoy your style
Thanks! That's funny, as I wasn't aware that I have a style when I write in English. Nice to know!
So... is it fair to say that for "quality"-seeking types, the whole L/binned RGB approach is WRONG...?
In my humble opinion, yes. Two important facts must be taken into account with respect to LRGB:
- Each time you throw more luminance at your image, it loses chromatic content. As a result, your chrominance will contain more noise and less signal (more uncertainty), you'll have more difficulty achieving good color saturation, and so on. This forces you to acquire more chrominance to support the excess of luminance.
- OK, so we need more RGB data to compensate for the overabundance of L. By acquiring binned RGB, we can get the required chrominance in less integration time, thanks to the increase in sensitivity (each 2x2 binned superpixel provides up to a 4:1 increase in SNR). However, this comes at the expense of reduced spatial resolution. Note also that binning reduces readout noise, but not dark current noise. It is true that our visual system detects most image detail through the luminance (that's the only reason why the LRGB trick can be useful), but it is also true that small-scale luminance structures without good chrominance support tend to be desaturated (rendered as grayscale).
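Where does that 4:1 figure come from? A small Monte Carlo sketch makes it concrete (the signal and read-noise numbers below are invented for illustration; the point is that with 2x2 hardware binning the charge of four pixels is summed before a single readout, so the read-noise penalty is paid only once):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values, for illustration only (electrons).
signal = 10.0      # mean signal per unbinned pixel
read_noise = 12.0  # RMS read noise per readout

n = 1_000_000
# Unbinned: each of the 4 pixels is read out individually.
unbinned = rng.poisson(signal, n) + rng.normal(0, read_noise, n)
# 2x2 binned: charge from 4 pixels summed on-chip, then one readout.
binned = rng.poisson(4 * signal, n) + rng.normal(0, read_noise, n)

snr_unbinned = signal / unbinned.std()
snr_binned = 4 * signal / binned.std()
print(f"SNR gain from 2x2 binning: {snr_binned / snr_unbinned:.2f}")
```

With these numbers the gain comes out close to, but below, 4:1; the full 4:1 is reached only when read noise dominates, since the photon (shot) noise of the four pixels still adds up in the binned superpixel.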
In short, LRGB can be a good idea to save time, because it allows you to acquire the whole chrominance with increased SNR. However, it is not true that an LRGB image is better than the equivalent (in terms of acquired signal) RGB image.
What is clearly an error, in my opinion, is acquiring unbinned LRGB data. By doing so, you obviously cannot save time. And if you do save time, it is because you have acquired an excess of luminance, and hence your image lacks chrominance.
Of course, this is only my opinion.
THEN I went back and tried to follow the tutorial;
I apologize for the confusion. I wrote that tutorial in 2006, if I remember correctly. Many important things have changed since then. In the new website (which I am finishing now), that tutorial will be tagged as obsolete, if not removed completely. Today I would process the same data in a completely different way: one that is much more respectful of the data, and also much more efficient.
So please use that tutorial just to understand the practical usage of several tools, but don't follow its general "style".
What am I using for "L"..??
If you acquire LRGB data, then you already have a separate L image. You can process it as an independent grayscale image, as explained in the tutorial. Note also that in PixInsight you can process the luminance of an RGB image (in this case, after performing the LRGB combination) independently of the chrominance, without needing both components as individual images. For example, the ATrousWaveletTransform tool provides several options to process luminance only, chrominance only, or luminance+chrominance. Other tools have "To Luminance" check boxes that can be used to restrict processing to the luminance.
If you acquire RGB data, then your luminance is synthetic. You can either extract it as an independent image (with the ChannelExtraction process, for example), or process it using "To Luminance" options, as above.
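To make "synthetic luminance" concrete: it is just a per-pixel weighted sum of the three channels. The NumPy sketch below uses the Rec. 709 weights purely as an example; in PixInsight the actual coefficients come from the RGB working space, not from a fixed standard:

```python
import numpy as np

# Hypothetical linear RGB image, values in [0, 1], shape (rows, cols, 3).
rgb = np.random.default_rng(1).random((4, 4, 3))

# Luminance coefficients are defined by the RGB working space;
# these Rec. 709 values are used here only as an illustration.
weights = np.array([0.2126, 0.7152, 0.0722])

# Synthetic luminance: weighted sum of the channels at each pixel.
lum = rgb @ weights
print(lum.shape)  # one grayscale plane, same rows and columns
```

Extracting it as a separate image (as ChannelExtraction does) or synthesizing it on the fly (as the "To Luminance" options do) both boil down to this same operation.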
In both the RGB and LRGB cases, by not extracting the luminance you get an important bonus: you can apply a process to the luminance and immediately see its true effect on the whole RGB image. The price is a small performance cost: each time you apply a process, the luminance must be synthesized, processed, and then reinserted into the RGB image. But the benefits clearly outweigh the extra computational work.
A synthetic luminance (the RGB case) has further implications. One of them is that you no longer have to worry about a good adaptation between luminance and chrominance (which is a serious challenge with LRGB images): the adaptation is perfect by nature.
Another important implication is that extra care must be taken to ensure that a linear luminance is always synthesized while the RGB image is still linear. This is relevant, for example, if you process data acquired as RGB with tools such as Deconvolution, ATrousWaveletTransform or UnsharpMask:
- A linear RGB working space (RGBWS) must always be used to process linear RGB images. A linear RGBWS has a value of gamma equal to one. The RGBWorkingSpace tool can be used to set a linear RGBWS.
- The Y component of the CIE XYZ space must be used as the linear luminance, instead of the usual L* component of CIE L*a*b*. This is because Y is a linear function of RGB when gamma=1, while L* is always nonlinear. In Deconvolution, you must check both the "Luminance" and "Linear" check boxes. The same is true for ATrousWaveletTransform, where you must select the "Luminance, linear" target mode.
Getting a wee migraine here... it's starting to sound not straightforward at all...
Come on, it may not be very straightforward (and it isn't, I think), but isn't it fun?