Hi Luc,
First of all, you have discovered a bug in the ChannelCombination process: the HSI space is not working correctly. Add one more to your personal account :wink:
The problem is that very saturated colors become almost unsaturated when an image is recombined from its HSI components. I am still not sure whether the problem lies in the PCL code (as I suspect) or in ChannelCombination. This bug has gone unnoticed for more than a year, probably because HSI is seldom used. I'll fix it as quickly as possible.
The rest of the color spaces (RGB, CIE XYZ, CIE Lab, CIE Lch) and the HSV ordering system are all working flawlessly in both ChannelExtraction and ChannelCombination.
Now on to your technique for LRGB combination. You are dividing each pixel sample of your color image by the sum R+G+B. As long as there are no black (or nearly black) pixels in your image --which would lead to a division by zero--, this poses no problem for color balance, since you are dividing each channel by the same number.
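As a quick illustration of the divide-by-sum operation described above, here is a minimal sketch in Python/NumPy (the function name and the epsilon guard against black pixels are my own; they are not part of any PixInsight API):

```python
import numpy as np

def normalize_by_sum(rgb, eps=1e-8):
    """Divide each channel by R+G+B, per pixel.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    The eps guard avoids division by zero on black pixels.
    """
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.maximum(s, eps)

# A single orange-ish pixel: each channel is divided by the same
# number, so the R:G:B ratios (the color balance) are preserved.
pixel = np.array([[[0.6, 0.3, 0.1]]])
out = normalize_by_sum(pixel)
```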
Since you are using a DSLR, I assume that you are starting from RGB raw data instead of the separate R,G,B,L data set used in the typical CCD workflow.
As you are planning your processing (if I understand it correctly), you also have a luminance image and an RGB image, both of them linear, just as in the CCD workflow. However, your luminance is synthetic; that is, you have derived it from your raw RGB data. Then you want to process your luminance by applying a nonlinear stretch, wavelets, etc. Finally, you want to recombine your processed luminance with the color data. The problem here is that you are combining a nonlinear luminance with a nonlinear chrominance, but the applied nonlinear functions are different (very different, probably).
My advice is to apply a nonlinear transform to the RGB data as a whole, instead of the divide operation that you are using. Basically, since you are starting from an RGB image, the best way to keep both luminance and chrominance well balanced (which is essential) is to apply the same initial nonlinear stretch to the raw RGB image with HistogramTransform. See our latest tutorial on ACDNR; there you have an example with a DSLR M45 image that has been processed from scratch.
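To make the idea concrete, here is a sketch of a midtones-type stretch applied to all three channels at once. The rational midtones transfer function below is a common form of this kind of stretch (I am using it as a stand-in for what HistogramTransform does; the exact implementation details are not shown here):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: a rational curve with midtones
    parameter m and input x in [0, 1]. m = 0.5 is the identity;
    m < 0.5 brightens the midtones."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch_rgb(rgb, m=0.25):
    # Apply the SAME nonlinear curve to R, G and B simultaneously,
    # so luminance and chrominance stay synchronized.
    return mtf(m, rgb)
```

Because the identical curve is applied to every channel, the brightness/contrast relationship between the luminance and the color data is preserved, which is the whole point of the advice above.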
Most tools in PixInsight allow you to process luminance and chrominance separately. For example, if you want to apply wavelets to the luminance, for structural enhancement purposes, then you have no need to extract the luminance and process it apart, since ATrousWaveletTransform does this automatically for you.
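For readers who want to see what "processing the luminance apart and putting it back" amounts to, here is a simplified sketch. The luminance weights and the gain-based recombination are my own illustrative choices (PixInsight uses the coefficients of the image's RGB working space, and its tools handle this internally, as noted above):

```python
import numpy as np

# Rec. 709 luminance weights -- an assumption for this sketch only.
WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def replace_luminance(rgb, new_l, eps=1e-8):
    """Scale each pixel so its luminance becomes new_l while the
    R:G:B ratios (the chrominance) are left untouched."""
    old_l = rgb @ WEIGHTS
    gain = new_l / np.maximum(old_l, eps)
    return rgb * gain[..., None]
```

For example, brightening the extracted luminance by 50% and reinjecting it simply scales each pixel by 1.5 while preserving its color ratios.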
While processing a synthetic luminance separately can sometimes be a good strategy, if you start with raw RGB data, as in the DSLR and one-shot CCD worlds, you should always apply the same nonlinear transform to the linear raw data as a whole. This guarantees that your luminance and chrominance remain synchronized in terms of the overall brightness/contrast relationship or, in other words, that you'll have enough chrominance to sustain the luminance.
Hope this clarifies things. I'll post a note here when the HSI bug is fixed.