The original question had to do with eliminating a few "hot pixels" from the master flat. Here is a 5x5 array extracted from the master flat and converted back to ADU values, followed by CFA colors:
[[ 337, 795, 381, 759, 340], R,G,R,G,R
[ 767, 593, 747, 555, 768], G,B,G,B,G
[ 337, 721, 12138, 779, 346], R,G,R,G,R
[ 754, 576, 737, 557, 713], G,B,G,B,G
[ 391, 747, 384, 741, 364]] R,G,R,G,R
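To make the detection concrete, here is a minimal numpy sketch (my own illustration, not a PixInsight procedure) that flags the center pixel by comparing it against the other red pixels in the crop. In an RGGB-style mosaic, same-color neighbors sit two pixels apart, so the red pixels form the 3x3 subgrid at every other row and column:

```python
import numpy as np

# The 5x5 crop above, in ADU (reconstructed from the posted values).
patch = np.array([
    [337, 795, 381, 759, 340],
    [767, 593, 747, 555, 768],
    [337, 721, 12138, 779, 346],
    [754, 576, 737, 557, 713],
    [391, 747, 384, 741, 364],
], dtype=float)

# Red pixels occupy every other row and column in this CFA pattern.
reds = patch[0::2, 0::2]                 # 3x3 grid of red pixels
center = reds[1, 1]                      # the suspect pixel (12138)
others = np.delete(reds.ravel(), 4)      # the eight surrounding red pixels

# Robust statistics: median and MAD of the neighbors.
median = np.median(others)
mad = np.median(np.abs(others - median))
deviation = (center - median) / (1.4826 * mad + 1e-9)
print(center, median, deviation)         # the center is hundreds of sigma out
```

A deviation threshold of a few sigma would flag this pixel while leaving the ordinary red values (337-391 ADU) untouched.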
Hopefully, the hot pixel is obvious. It is a red pixel, and, no, the same pixel isn't "hot" in the darks and is only "warm" (2084) in the bias frames.
There is no particular reason to flatten each channel separately from a color sensitivity perspective. As you said, each color gets normalized, so the absolute flat values are immaterial. What matters, of course, is the SNR, which is why you want your flats to be white: each color with comparable signal levels and therefore comparable SNR.
The only reason you would process each color separately while flattening is if your optical train has color-dependent elements in it. A dust mote could, for example, be more transparent in blue than in red. I am not convinced this is a practical concern; I think a single, identical master flat frame for all 3 channels is sufficient. If you have an example showing where channel-dependent flat frames are a benefit, I'd be interested to see it, of course. I've read HAIP.
Assuming a minimum value of 0 elsewhere in the array, if you divide the above values by 12138 you get "flat field normalization factors" for each pixel. Divide the original values by those normalization factors and the result is an array with every value set to 12138. Yes, it is flat, but the original relationship between the color channel luminosity values is lost. Even if we "fix" the hot pixel, flattening will distort the color channel luminosity relationship.
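The arithmetic in that paragraph is easy to verify: dividing the flat by normalization factors derived from itself forces every pixel to the peak value. A short numpy demonstration, using the crop posted above:

```python
import numpy as np

# The 5x5 crop above, in ADU (reconstructed from the posted values).
patch = np.array([
    [337, 795, 381, 759, 340],
    [767, 593, 747, 555, 768],
    [337, 721, 12138, 779, 346],
    [754, 576, 737, 557, 713],
    [391, 747, 384, 741, 364],
], dtype=float)

norm = patch / patch.max()    # per-pixel "flat field normalization factors"
flattened = patch / norm      # every pixel becomes 12138.0
print(flattened[0, 0], flattened[2, 2])   # flat, but the R:G:B relationship is gone
```

This is exactly why a hot pixel in the master flat is harmful: it drags the normalization peak far above the real illumination level.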
As for the 'full stop difference' statement, I think you may want to forget about terrestrial photography terms; they aren't used much in astrophotography. A given channel simply has x% higher ADUs, which is much more exact. I'm also not convinced this says anything about relative sensitivity unless you are certain the source was true white light. In any event, DSLR sensors are generally optimized for green sensitivity, and in addition you get twice the pixels, so green is very well represented. Not very useful for astrophotography, but nice for terrestrial work.
You are correct about the stop statement. However, if you will look at the data above you will see that the green channel values are in fact about 2x the red values, and near 1.5x the blue values. This corresponds pretty closely to the "daylight" color balance factors in the original Canon CR2 file exif data, and also corresponds with quantum efficiency measurements of the 5DMkII's sensor published by Christian Buil at
http://www.astrosurf.com/buil/50d/test.htm

In any event, back to the original question. Is there a way to find and fix hot pixels such as the one illustrated above in PixInsight? I am only a couple of weeks into the learning curve at this point, and although I have searched/read a lot of threads on the forum and checked at least some of the tutorials, I haven't found an answer yet.
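For what it's worth, the channel ratios quoted above can be checked directly from the 5x5 crop. This is just a quick sanity check of my own, masking out the one hot pixel:

```python
import numpy as np

# The 5x5 crop above, in ADU, and the CFA color of each pixel.
patch = np.array([
    [337, 795, 381, 759, 340],
    [767, 593, 747, 555, 768],
    [337, 721, 12138, 779, 346],
    [754, 576, 737, 557, 713],
    [391, 747, 384, 741, 364],
], dtype=float)
cfa = np.array([list("RGRGR"), list("GBGBG"),
                list("RGRGR"), list("GBGBG"), list("RGRGR")])

good = patch < 1000                      # mask out the obvious hot pixel
r = patch[(cfa == "R") & good].mean()
g = patch[(cfa == "G") & good].mean()
b = patch[(cfa == "B") & good].mean()
print(g / r, g / b)                      # green vs. red and green vs. blue
```

The green/red ratio comes out close to the 2x figure mentioned above.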
Thanks in advance,
Dan