Hi Nikolay,
No, I cannot see how a Defect Map - in the general case - could also be made to cater for an image that has not yet been deBayered.
Consider a single-pixel defect in an OSC array - take a Red pixel, for example, that is behaving like a 'dead' pixel. If you were to 'synthesise' the Rd ADU value for this pixel from its 'nearest neighbours', but tried to do this PRIOR to deBayering, the nearest Rd neighbours would not be found one pixel away (the immediate neighbours are ALL either Gn or Bu pixels), but AT LEAST two pixels away (nearly three - 2√2 ≈ 2.83 pixels - in the case of the diagonal 'corner' neighbours).
I would have thought that this would lead to a very 'poor' interpolation for the missing data. That said, synthesis of the Gn or Bu data at this example pixel location would not be 'too bad' - after all, even if the pixel were NOT dead, there would still be no Gn or Bu data at that site anyway, because it sits under a Rd filter in the first place - so it should (theoretically) behave as a 'dead' pixel for Gn and Bu regardless.
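To make the geometry concrete, here is a minimal sketch - plain Python/NumPy, nothing to do with PI's internals; the RGGB layout and the defect coordinates are just assumptions for illustration - that finds the nearest same-colour neighbours of a 'dead' Rd pixel in a Bayered frame:

```python
import numpy as np

# Assumed RGGB layout: the filter colour sitting over pixel (row, col).
def cfa_colour(row, col):
    return [['Rd', 'Gn'], ['Gn', 'Bu']][row % 2][col % 2]

dead = (4, 4)  # a hypothetical 'dead' pixel; in RGGB this is an 'Rd' site

# Every other pixel in a 9x9 patch that shares the dead pixel's filter.
same_colour = [(r, c) for r in range(9) for c in range(9)
               if (r, c) != dead and cfa_colour(r, c) == cfa_colour(*dead)]
distances = sorted(np.hypot(r - dead[0], c - dead[1]) for r, c in same_colour)
print(distances[:8])  # four at 2.0, then four at ~2.83 (the 'corner' Rd sites)
```

Note that distances 1 and √2 never appear in that list - every pixel that close is a Gn or Bu site.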
However, once the monochrome CFA image has been deBayered, EVERY pixel location DOES have one-pixel-distant 'nearest neighbours' in every colour channel. This means that a pixel defect can now be repaired from data that is as close as one pixel away - even if that data was itself synthesised from one-pixel-distant neighbours during the deBayer stage.
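As a toy illustration of this - just a NumPy sketch, not PI's actual DefectMap process - a single-pixel defect in a deBayered image can now be repaired channel by channel from its immediate neighbours:

```python
import numpy as np

def repair_defect(rgb, row, col):
    """Replace pixel (row, col) with the median of its 8 immediate
    neighbours, independently in each colour channel.
    'rgb' is assumed to be a float array of shape (height, width, 3)."""
    h, w, channels = rgb.shape
    neighbours = [(row + dr, col + dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)
                  and 0 <= row + dr < h and 0 <= col + dc < w]
    for ch in range(channels):
        rgb[row, col, ch] = np.median([rgb[r, c, ch] for r, c in neighbours])
    return rgb
```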
At the end of the day, working with OSC images will ALWAYS be a compromise. There are arguments both for deBayering at the 'start' of the pre-processing chain and for leaving it to the 'end'.
Personally, I feel that standard 'Calibration' (Darks, Flats, Biases, etc.) should be applied to the Bayered data first. The images should then be deBayered, then Aligned, then Integrated. Only then would I consider applying a DefectMap correction.
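For reference, the 'calibrate first, on the Bayered data' step is nothing more than per-pixel arithmetic, so it is entirely indifferent to the CFA. A minimal NumPy sketch (not the actual ImageCalibration implementation; all arrays are assumed to be float32 CFA frames of identical shape):

```python
import numpy as np

def calibrate_cfa(light, master_dark, master_flat, master_bias):
    """Standard dark subtraction and flat-fielding, applied while the
    data is still Bayered - each pixel keeps its own filter colour."""
    flat = master_flat - master_bias
    flat = flat / np.mean(flat)          # normalise the flat to unit gain
    return (light - master_dark) / flat  # the master dark includes the bias
```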
However, I have also been thinking - ever since the new power of StarAlignment, ImageCalibration and ImageIntegration became available in PI, and given that all of the data can be processed at the 32-bit level (minimum) - what is then 'wrong' with carrying out all of these pre-processing steps FIRST, arriving at a 'MasterLight' image that STILL retains its reference to the CFA locations?
If all of the images are eventually aligned to a reference that was itself an 'original' image, then the aligned images must, surely, have retained their 'CFA location' data - mustn't they? Or am I missing something?
If it can be said that they HAVE retained their 'knowledge' of the CFA, then the MasterLight must ALSO have retained its precise physical relationship to the CFA. And this means that the deBayer process now needs to be applied ONCE, and ONCE ONLY, to the entire data set - i.e. we would no longer need recourse to a 'BATCH DEBAYER' routine.
To me, this MUST be an advantage - after all, EVERY application of the deBayer algorithm introduces 'some noise', where 'noise' here means the residual error of the 'guessing game' played at every pixel that requires interpolation. Interpolation is guesswork - no matter how sophisticated that guesswork might be.
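To see what that 'guesswork' amounts to in the simplest (bilinear) case - the numbers below are purely illustrative, and real deBayer methods are cleverer, but every one of them is still estimating data that was never measured:

```python
import numpy as np

# One row of an RGGB CFA: columns 0 and 2 sit under Rd filters,
# column 1 sits under a Gn filter. The Rd value at column 1 was never
# measured - bilinear deBayering simply averages the two Rd neighbours.
cfa_row = np.array([10.0, 13.0, 14.0])
r_at_green_site = (cfa_row[0] + cfa_row[2]) / 2   # 12.0 - an estimate
# If the true Rd signal at that site was 13.0, this deBayer pass has
# just injected an error of 1.0 ADU - the 'noise' referred to above.
```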
Now, the counter-argument against the 'single-pass' deBayer application is that it is 'more difficult' to align the non-deBayered images in the first place - because the CFA 'grid' causes all sorts of 'nasties' during the alignment process. In which case I see two options (a rough sketch of the first follows the list):
Either,
The images are deBayered and StarAligned first, and the 'results' of the StarAlignment process are 'memorised' for subsequent re-application to the non-deBayered images;
or,
The StarAlignment process could call a deBayer algorithm internally, solving the alignment for each image in turn, before finally applying the transformation to the RAW data.
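As a very rough, translation-only illustration of the first option - using a crude 2x2 'superpixel' stand-in for a real deBayer, and generic SciPy/scikit-image calls rather than anything in PI - the idea is to solve the alignment on deBayered copies and then re-apply the memorised offsets to the untouched CFA frames:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def superpixel_luma(cfa):
    """Crude stand-in for a deBayer: average each 2x2 CFA cell into one
    grey 'superpixel' (assumes even image dimensions)."""
    h, w = cfa.shape
    return cfa.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def align_cfa_frames(cfa_frames, reference_cfa):
    ref = superpixel_luma(reference_cfa)
    aligned = []
    for cfa in cfa_frames:
        # Solve ('memorise') the offset on the deBayered copies...
        offset, _, _ = phase_cross_correlation(ref, superpixel_luma(cfa))
        # ...then apply it to the RAW frame, rounded to whole 2x2 CFA
        # cells so that the RGGB phase is preserved after the shift.
        aligned.append(nd_shift(cfa, 2.0 * np.round(offset), order=0))
    return aligned
```

Of course, a real StarAlignment solves rotation and distortion as well, and an arbitrary transformation cannot be rounded to whole CFA cells - which is precisely where the 'nasties' mentioned above come from.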
In both of these cases, I believe, it would be necessary for PixInsight to be capable of storing a known deBayer matrix (the CFA pattern and its offset relative to the image) for the user's camera.
And this then raises the question: "OK, so I can (relatively) easily define the physical relationship of the CFA to the incoming image, but what about the scaling parameters needed to define some sort of suitable colour balance?"
Well, based on experience with the far more troublesome CMYG ColourFilterArray present in the likes of the DSI-IIC imagers (and others), such colour-balance parameters just aren't really necessary - not when dealing with astronomical images in PixInsight, where we also have DynamicBackgroundExtraction (DBE), BackgroundNeutralisation (BN) and ColourCalibration (CC) to help us.
What are other people's thoughts?
Cheers,