Hi Jack and Larry,
First, I must say that I'm glad to see experimental work like this one here. This is our goal with PixInsight: to experiment and explore new paths. This is where the real fun is.
"orphan" chrominances and luminances
I mean that if you change the luminance of an RGB image, you must ensure that the new luminance provides enough support for the chrominance, and vice versa.
If your luminance doesn't support the existing chrominance, you'll get a dark and noisy image. If your chrominance doesn't support the new luminance, you'll get an unsaturated result.
This is why we included the luminance and saturation transfer functions in the LRGBCombination tool. By fine-tuning them you can achieve an optimal mutual adaptation between luminance and chrominance.
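To make the idea concrete, here is a minimal sketch of luminance replacement with a luminance transfer function. This is not LRGBCombination's actual implementation: the `mtf` formula is PixInsight's midtones transfer function, but the crude mean-of-RGB implicit luminance and the per-pixel rescaling are simplifying assumptions for illustration only.

```python
import numpy as np

def mtf(x, m):
    """PixInsight-style midtones transfer function on [0, 1]:
    m < 0.5 brightens, m > 0.5 darkens, m = 0.5 is the identity."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def replace_luminance(rgb, new_l, midtones=0.5):
    """Toy luminance replacement: rescale each pixel's RGB by the ratio
    of the (MTF-adjusted) new luminance to the implicit one. The RGB
    ratios (chrominance) are preserved; only the luminance changes."""
    old_l = rgb.mean(axis=-1, keepdims=True)   # crude implicit luminance
    adj_l = mtf(new_l, midtones)[..., None]    # luminance transfer function
    return np.clip(rgb * adj_l / np.maximum(old_l, 1e-6), 0.0, 1.0)
```

If the new luminance is much brighter than the implicit one at pixels where the chrominance is weak or noisy, the rescaling amplifies that chrominance noise: this is the "support" problem in code form.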
In your scheme you adapt the Ha data to work as a new luminance for the RGB image *before* the LRGB combination process (step 2). This is a good idea, for the reasons above.
I think one answer to the question "why does this work" is that you have RGB data of exceptional quality. Your broadband red image seems to provide rather good chrominance support to the Ha image that is working as luminance. If your red channel were poor (noisier, weaker), the results could be disappointing.
Another obvious answer is that you are throwing away the implicit luminance of the RGB data, which is usually poor, and replacing it with a higher-SNR luminance, the Ha image in this case. This of course improves the SNR of the result. The same happens when you do a regular LRGB combination, but a narrowband image such as Ha has more signal and less noise than a broadband luminance.
A drawback: this works for emission nebulae, but not for OIII emission or reflection nebulae. In those cases the Ha image cannot work as a luminance, because the objects aren't red at all. The fact that you are mixing Ha and L to form the resulting luminance partially solves this problem, but at the cost of reinserting part of the noise from the RGB data into the final (Ha+L)RGB image.
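The noise trade-off of the Ha+L mix can be seen with a quick simulation. This is a toy sketch: the blend weight and the noise levels (0.01 for Ha, 0.05 for broadband L) are invented numbers for illustration, not measurements from any real data.

```python
import numpy as np

def mix_noise(w, sigma_ha=0.01, sigma_l=0.05, n=200_000, seed=1):
    """Blend a low-noise synthetic Ha frame with a noisier broadband L
    frame (flat signal) and return the measured noise of the mix."""
    rng = np.random.default_rng(seed)
    ha = 0.5 + rng.normal(0.0, sigma_ha, n)   # narrowband luminance
    lum = 0.5 + rng.normal(0.0, sigma_l, n)   # broadband luminance
    mixed = w * ha + (1.0 - w) * lum          # the (Ha+L) luminance
    return mixed.std()
```

With independent noise the blend's standard deviation follows sqrt(w²·σ_Ha² + (1−w)²·σ_L²), so pure Ha (w = 1) is the quietest, and every bit of L you mix back in (to support stars, OIII, and reflection regions) reinserts part of the broadband noise.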
Finally, the stars are turning out well for the same reason: you are reinserting part of the RGB luminance in the result, which gives enough support to the stars.
I think this is an interesting approach. It's most likely to work well with high-quality RGB data, as in your case, and with Ha emission objects. It's easy, and can be a good alternative to the usual (Ha+R)RGB method. You should experiment with more images, including bad RGB data and different objects, to see what happens.