Hi all,
I have been thinking about this whole subject of 'channel mixing' as well.
In my mind, I visualise a 'grand channel mixer'. This would allow a user to select from a bunch of open images (no real need to work from files, I don't think). There would be four 'channels' that would be combined by the LRGB Process as it stands - no need to re-invent THAT particular wheel. The LRGB Combine process is a 'superset' of the RGB Combine process anyway, so if a user does not want to include anything in the Lu channel, that is fine and simple too.
For each of the four 'input channels', the user would then pick appropriate images for inclusion in the blend. And, for each image allocated to a channel, the 'percentage' of inclusion could be adjusted by a slider.
So, by way of example, consider seven source images that a user wishes to 'blend'. Let us assume that they have somehow been obtained, and are now available as:-
Lu
Rd
Gn
Bu
Ha
S3
O2
The user wishes to do a four-channel blend, and decides to try the following:-
Lu = 50% Lu + 50% Ha, with a final contribution of 30% to the LRGB blend
Rd = 50% Rd + 50% Ha
Gn = 30% Gn + 70% S3
Bu = 30% Bu + 30% Ha + 40% O2
The appropriate images would be allocated to each 'master channel', and the percentage sliders adjusted accordingly.
The final combine would reduce the overall Lu combination to 30%, and the LRGB process would be invoked - producing an output image for the user to assess.
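Just to make the arithmetic of the example concrete, here is a rough sketch in Python/NumPy of how the four 'master channels' might be built from the seven sources. The simple weighted sum, and the final 30% scaling of Lu, are my own illustration of the recipe above - not a description of how any existing LRGB/RGB Combine process works internally - and the random arrays are just stand-ins for real, registered frames.

```python
import numpy as np

# Stand-ins for seven pre-registered, equal-sized source frames.
rng = np.random.default_rng(0)
Lu, Rd, Gn, Bu, Ha, S3, O2 = (rng.random((4, 4)) for _ in range(7))

def blend(pairs):
    """Weighted sum of (weight, image) pairs; the weights for one
    channel are expected to total 1.0 (the slider percentages)."""
    return sum(w * img for w, img in pairs)

# The four master channels from the example recipe:
Lu_master = blend([(0.50, Lu), (0.50, Ha)])
Rd_master = blend([(0.50, Rd), (0.50, Ha)])
Gn_master = blend([(0.30, Gn), (0.70, S3)])
Bu_master = blend([(0.30, Bu), (0.30, Ha), (0.40, O2)])

# Final step before the LRGB combine: reduce the overall Lu
# contribution to 30%, as in the example.
Lu_master = 0.30 * Lu_master
```

The four master arrays would then be handed to the LRGB combine stage as usual.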
However, further options at the 'channel blend' stage might allow the user to maintain the 'shape' of the channel histogram by referencing the channel to a selected image's histogram. That way, the individual Minimums, Medians, Maximums etc. for each 'blend image' in a channel could be adapted so that the overall histogram SHAPE remains 'similar' to that of the referenced image - a form of 'normalising', if you like, applied on a channel-by-channel basis. I can see that this would help to keep colour 'balance' under control, even though colour 'content' could be changing significantly (but I may be wrong!!)
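One simple reading of 'keeping the histogram shape similar' - and this is purely my assumed interpretation, not any tool's actual algorithm - is a linear rescale that matches the blended channel's Median and spread (here, the median absolute deviation) to those of the chosen reference image. A minimal sketch under that assumption:

```python
import numpy as np

def normalise_to_reference(channel, reference):
    """Linearly rescale `channel` so that its median and spread (MAD)
    match those of `reference`.  A linear map cannot change the
    histogram's shape, only its position and width, so this keeps the
    blended channel's histogram 'in the same place' as the reference's."""
    c_med = np.median(channel)
    r_med = np.median(reference)
    c_mad = np.median(np.abs(channel - c_med))    # spread of the channel
    r_mad = np.median(np.abs(reference - r_med))  # spread of the reference
    scale = r_mad / c_mad if c_mad > 0 else 1.0
    return (channel - c_med) * scale + r_med
```

Applied per master channel, with the reference being (say) the original broadband Rd, Gn or Bu frame, this would let the narrowband data move detail around spatially without dragging the channel's overall level and width away from the broadband starting point.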
I used this approach recently when I multiplied my pre-processed M1 Crab Ha channel by my pre-processed M1 Crab Rd channel (which had been extracted from an image that I had 'worked on' in RGB space, starting from a standard Rd+Gn+Bu dataset). This meant that I was adding back in a modified Rd component that wasn't, overall, any 'redder' than the original extracted channel. It had 'more detail' (due to the included Ha data), and the detail was available in a different 'spatial position' on the image, but the Median position and general 'width' of the Histo curve were, more or less, 'in the same position' as for the original Rd-only channel.
Now, I may be completely wrong in this approach - I haven't had enough raw images to work with - but certainly, to me, it 'seems' to be a valid process path to follow.
Does anybody out there have better experience? Carlos? You guys must have to consider this problem in exquisite detail when you are working with your 'professional' data.
Cheers,