Hi Jose,
OK, let me try and understand your problem - without using the red/green 'limiting' scenario, as that WOULD be foolish.
So, you set out to take Flats for your OSC, but your light source is NOT 'chromatically neutral'. Let us assume that you do actually have a PERFECTLY FLAT source of light, and that your optical train suffers from just exactly ONE 'dust donut', and that the effect of that 'donut' is to reduce the incident light 'underneath it' by a 'perfect' 25%.
Let us assume that the fact that your light source is not 'neutral gray' gives you a CCD response (for an RGBG CFA) where Red responds 60%, Green responds 40% and Blue responds 30% (these figures do NOT need to add up to 100%, by the way!).
Now, you adjust the intensity of your light source to get a 'maximum ADU' of around 32,000 (1/2 full well, and thus 'inside' the linear response zone for your CCD) - and you do this whilst also keeping exposure times long enough to be able to take effective FlatDarks at the same exposure (so, typically, a few seconds).
However, the 'max ADU' reading will have been for a Red pixel - because the 'colour' of your incident light was more Red (60:40:30, remember) than Green or Blue. This doesn't actually matter - as I hope I am going to be able to explain.
Think about your light source causing Red CFA pixels to read 32,000, Green CFA pixels to read 40/60ths of that (say 21,300) and your Blue CFA pixels to read 30/60ths of that (16,000) - all of these for where the dust donut was NOT having any effect. What you do NOT need to worry about is the fact that the CCD may actually be TWICE as sensitive in the Green part of the spectrum. It doesn't matter - this extra sensitivity will have occurred irrespective of whether the photon arrived from M82 or that cheap flashlight bulb you are using in your lightbox.
Now, where the 'magic donut' HAS had an effect, the ADU intensities for Rd, Gn and Bu will have been attenuated (by 25%, as I postulated above). So the 'obscured' ADU values for Rd, Gn and Bu respectively then become 24,000, 16,000 and 12,000 (-ish).
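(If it helps, here is that arithmetic as a small Python sketch - these are purely the hypothetical figures from above, nothing measured:)

```python
# Hypothetical figures from the example: Red pixels read 32,000 ADU,
# the spectral response is 60:40:30 (Rd:Gn:Bu), and the dust donut
# passes only 75% of the incident light.
response = {"Rd": 0.60, "Gn": 0.40, "Bu": 0.30}
max_adu = 32000          # the Red reading, pitched at ~1/2 full well
donut = 0.75             # 25% attenuation under the donut

unobscured = {c: max_adu * r / response["Rd"] for c, r in response.items()}
obscured = {c: v * donut for c, v in unobscured.items()}

print(unobscured)   # Rd 32000, Gn ~21333, Bu 16000
print(obscured)     # Rd 24000, Gn 16000, Bu 12000
```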
And, remember, nothing has been deBayered at this stage - nor will anything be deBayered until the very end.
What now needs to happen (we are assuming NO effects from 'Dark' noise - we don't need to confuse things here) is that the 'Flat' that we are talking about will be 'divided into' each of your lights. However, this Flat is not really in an effective state for being used as a denominator in a division process - especially not in the world of PI, where all internal operations are performed using data in the [0.0, 1.0] range. So, we need to convert the six ADU values that exist in our image into the 'internal' range.
The 'un-obscured' values change from [32000, 21300, 16000] to [0.500, 0.325, 0.250]
The 'obscured' values change from [24000, 16000, 12000] to [0.366, 0.250, 0.183]
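(Again, as a sketch - and note that exact 16-bit normalisation, dividing by 65535, gives slightly different values to my 'mental arithmetic' roundings above - the 0.500 and 0.250 are really 0.488 and 0.244:)

```python
# Convert the six ADU values to PixInsight's internal [0.0, 1.0]
# range by dividing by 65535 (assuming 16-bit data).
unobscured_adu = [32000, 21300, 16000]    # Rd, Gn, Bu
obscured_adu = [24000, 16000, 12000]

print([round(v / 65535, 3) for v in unobscured_adu])   # [0.488, 0.325, 0.244]
print([round(v / 65535, 3) for v in obscured_adu])     # [0.366, 0.244, 0.183]
```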
Now, let us pick a random pixel intensity for some pixel location on one of our Light frames - let's make it 0.45678 (i.e. about 29,935 in 'normal money')
If that pixel was under a Rd filter in the CFA, and was un-obscured, then the 'division' process would change the value to 0.91356 (i.e. 0.45678 / 0.500)
Had it been under an un-obscured Gn filter, the result of the Flat division would have been 1.40548 (0.45678 / 0.325). And un-obscured Bu would have resulted in 1.82712 (0.45678 / 0.250)
In summary, this would have produced these three results 0.91356, 1.40548 and 1.82712 for Rd, Gn and Bu respectively
For the 'obscured case' the three results would have been 1.24803, 1.82712 and 2.49607 for Rd, Gn and Bu respectively
(ignore the fact that we have now left the [0.0, 1.0] range - PI would have kept this under control on our behalf)
What can be seen is that the ratios between these six results have NOT been altered (other than by my 'rounding errors')
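(You can verify the 'ratios preserved' claim with the un-rounded values - here as a Python sketch, using only the numbers from the example:)

```python
# Flat division with un-rounded values: the Gn:Rd and Bu:Rd ratios
# come out identical in the 'clear' and 'donut' cases - which is the
# whole point of flat-fielding CFA data.
light_pixel = 0.45678
flat_clear = [32000 / 65535, 21333 / 65535, 16000 / 65535]   # Rd, Gn, Bu
flat_donut = [0.75 * v for v in flat_clear]                  # under the donut

cal_clear = [light_pixel / v for v in flat_clear]
cal_donut = [light_pixel / v for v in flat_donut]

print(cal_clear[1] / cal_clear[0], cal_donut[1] / cal_donut[0])   # both ~1.5
print(cal_clear[2] / cal_clear[0], cal_donut[2] / cal_donut[0])   # both 2.0
```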
MOST IMPORTANTLY, what can also be seen is that, because the MasterFlat is LACKING 'blue spectral light', the result of the Flat division will be heavily biased TOWARDS Blue (in the 'inverse' of the original 60:40:30 ratio, i.e. in a (100/60):(100/40):(100/30) ratio), which will also emphasise GREEN over RED as well.
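(Normalising that inverse ratio to Red = 1.0 makes the bias obvious:)

```python
# The residual colour cast after flat division, relative to Red:
response = [0.60, 0.40, 0.30]                 # Rd, Gn, Bu from the example
cast = [response[0] / r for r in response]
print(cast)   # ~[1.0, 1.5, 2.0] -- Green boosted 1.5x, Blue 2x, vs Red
```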
But, in reality, this just isn't a problem. Although your calibrated RAW lights now have a colour cast that is the INVERSE of the light spectrum that was used to illuminate the Flats - you are in any case about to deBayer these RAW images - and the deBayer process itself imparts its OWN 'colour cast' as it removes the CFA.
So, you end up with a final image, after aligning and integrating, that just looks 'horrible' as far as colour is concerned. Sure, if you knew that you would be able to keep the (horrible) spectral content of your flat light-source 'constant', then you could spend hours imaging one of my 'test-cards' and tweaking the deBayer 'colour matrix' to try and recover a close approximation to 'natural' colour. And then you would just always invoke that array whenever you deBayered your data after flat-fielding.
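(For illustration, 'tweaking the colour matrix' just means multiplying every RGB pixel by a 3x3 matrix - the values below are made up purely to undo the 1.0 : 1.5 : 2.0 cast from my example, NOT anything you should copy:)

```python
import numpy as np

# A toy 'colour matrix' correction: each RGB pixel is multiplied by
# a 3x3 matrix. These values are illustrative only - a real matrix
# would come from imaging a test-card under YOUR light source.
M = np.array([[1.00, 0.00, 0.00],
              [0.00, 0.67, 0.00],    # pull Green back down (~1/1.5)
              [0.00, 0.00, 0.50]])   # pull Blue back down (1/2.0)

rgb = np.random.rand(100, 100, 3) * np.array([1.0, 1.5, 2.0])  # cast image
corrected = rgb @ M.T                # apply the matrix per pixel
```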
Alternatively, you could just use PixInsight to sort things out - by specifying an area of the background of your image and using BackgroundNeutralisation to 'neutralise' the colour casts to make that area 'grey' - effectively then also removing the nasty colour casts from the remainder of the image as well. And, obviously, you can go a step further by also telling PI to use ColourCalibration to make the 'average' of a selected foreground area 'white' as well.
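(For the curious, the core idea behind 'neutralising' a background patch is just per-channel rescaling - here is a toy numpy sketch of the principle, emphatically NOT PixInsight's actual implementation:)

```python
import numpy as np

def neutralise_background(rgb, box):
    """Scale each channel so a background patch comes out grey
    (a toy version of the background-neutralisation idea)."""
    y0, y1, x0, x1 = box
    means = rgb[y0:y1, x0:x1].mean(axis=(0, 1))   # per-channel patch means
    return rgb * (means.mean() / means)           # equalise them

# e.g. an image carrying the 1.0 : 1.5 : 2.0 cast from above:
rgb = np.random.rand(100, 100, 3) * np.array([1.0, 1.5, 2.0])
fixed = neutralise_background(rgb, (0, 20, 0, 20))   # patch of 'empty sky'
```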
I know this probably has not been the 'clearest explanation' - but for a light source that has, more or less, 'some' output covering 'most' of the spectrum, my experience has been that it just is not worth worrying about too much. The 'inverse colour cast' that you see where spectral output was weakest is at least a LINEAR CONSTANT throughout all subsequent processing, and can be dealt with once you start working with your final image.
As I have said elsewhere - if you think deBayering an OSC image from a RGBG (or RGGB) CCD is difficult, try and imagine the exponential extra layer of difficulty you encounter with a CMYG colour-filter-array as found in cameras such as the popular DSI-IC and DSI-IIC imagers. There you have a 'mixture' of primary and secondary (additive and subtractive) filter colours - so all bets are off!
But, my sub-$5 lightbox (relying on a 'dim' single 12V torch-bulb filament, shining through several layers of el-cheapo printer paper), which must have the most horrible spectral response imaginable, still allows me to create a perfectly acceptable colour image from my CMYG one-shot colour imager.
What I firmly believe is that you CANNOT 'boxcar filter' your MasterFlat without destroying the CFA data - and then expect to use it to calibrate your still-RAW Lights.
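(A toy illustration of why, using the un-rounded Flat values from earlier - every 'boxcar' output pixel becomes a blend of all three colours:)

```python
import numpy as np

# A 4x4 RGGB Bayer mosaic holding the example Flat values:
# R = 0.488, G = 0.325, B = 0.244.
m = np.array([[0.488, 0.325, 0.488, 0.325],
              [0.325, 0.244, 0.325, 0.244],
              [0.488, 0.325, 0.488, 0.325],
              [0.325, 0.244, 0.325, 0.244]])

# A 2x2 'boxcar' mean: every output value mixes Rd, Gn and Bu samples,
# so it no longer matches ANY single CFA channel.
box = (m[:-1, :-1] + m[1:, :-1] + m[:-1, 1:] + m[1:, 1:]) / 4
print(box[0, 0])   # 0.3455 -- not the Rd, Gn or Bu Flat value; in fact
                   # EVERY filtered pixel here is 0.3455 - the per-colour
                   # information has been averaged away completely.
```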
HTH
(and apologies to everyone who got totally lost, or bored, or fell asleep, during this ramble)
Cheers,