Hi all,
Further to my questions about BiasOffset frames in the BugReports section here on the Forum, let me try to explain what I was trying to achieve.
My reasoning was that, because BiasOffset frames are based on extremely (infinitely) short exposures, they ONLY contain the 'bias' or 'offset' level that has been imposed by the readout electronics in the camera hardware.
There is little or no thermal noise, because the exposure is too short to accumulate any significant dark current (thermally generated electrons).
There is little or no 'photon' information, for the same reason, and because the lens should be covered anyway.
There is no 'flat frame' calibration to worry about, because there is no optical path involved.
There is no 'real' chance of cosmic ray strikes, again because of the extremely short exposures. In any case, because it is so easy to collect large quantities of these exposures, subsequent statistical analysis should be more than robust enough to eliminate such rare events.
So, all that is really acquired are ADU values corresponding to the offset voltage applied by the digitising electronics in the camera.
However, I would expect that the BiasOffset frames would also contain information regarding 'faulty' pixels within the CCD. Specifically, the following pixel faults should be present irrespective of the length of exposure, or the temperature of the CCD :-
DEAD PIXELS - pixels that always present as an ADU value of '0'
STUCK PIXELS - pixels that always present as a 'maximum' ADU value, say 65535 for a 16-bit A/D converter
FIXED PIXELS - pixels that always present as a 'fixed' ADU value, neither 'zero' nor 'max', and also not anything close to the Bias ADU level
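To illustrate that reasoning, here is a minimal Python/NumPy sketch. The bias level, read noise and fault positions are all hypothetical numbers of my own choosing, but it shows that a plain average combine of many bias subs preserves all three fault types at their extreme values:

```python
import numpy as np

rng = np.random.default_rng(42)

N, H, W = 60, 8, 8     # 60 simulated bias subs of 8x8 pixels
BIAS = 1000.0          # hypothetical offset level, ADU
READ_NOISE = 10.0      # hypothetical read noise, ADU

# Stack of simulated bias frames: offset plus Gaussian read noise only.
stack = BIAS + rng.normal(0.0, READ_NOISE, size=(N, H, W))

stack[:, 0, 0] = 0.0       # DEAD pixel: always reads zero
stack[:, 7, 7] = 65535.0   # STUCK pixel: always at full scale
stack[:, 3, 3] = 30000.0   # FIXED pixel: constant, far from the bias level

master = stack.mean(axis=0)  # plain average combine
```

The faulty pixels survive the integration unchanged, while the 'good' pixels converge on the bias level with their noise reduced by roughly sqrt(60).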
My idea was to take my BiasOffset frames and ImageIntegrate, using Average Combine, with NO NORMALISATION (anywhere), with NO NOISE WEIGHTING, and with fine-tuned WinsorizedSigmaClipping.
I have already tried this for a random set of subs: about 60 images, varying in temperature from 4C to 6C, all taken at 0.0001s using my DSI-IIPro. I tweaked the Winsorization Sigma values to eliminate the same number of pixels for High and Low. (A special note here, Juan: I would like to be able to set a 'target' number, or percentage, of High or Low clipped pixels, and have ImageIntegration adjust Sigma to achieve this level of clipping, but I'll present this in more detail as a new thread in the Wish List section.)
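For reference, Winsorized sigma clipping as I understand it works roughly as sketched below for each pixel's stack of samples. This is only my own simplified sketch, not PixInsight's exact implementation; the function name and iteration count are my own:

```python
import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iterations=3):
    """Winsorize a 1-D stack of ADU samples for one pixel, then reject
    outliers beyond the sigma limits and return the mean of the survivors."""
    data = stack.astype(float).copy()
    for _ in range(iterations):
        m, s = data.mean(), data.std()
        # Winsorize: pull extreme samples in to the clipping limits,
        # which stops outliers from inflating the sigma estimate.
        data = np.clip(data, m - sigma_low * s, m + sigma_high * s)
    m, s = data.mean(), data.std()
    keep = (stack >= m - sigma_low * s) & (stack <= m + sigma_high * s)
    return stack[keep].mean() if keep.any() else m
```

A stack of 20 samples at 10 ADU plus one wild sample at 1000 ADU, for example, comes out as exactly 10 because the outlier is first winsorized and then rejected.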
At this point I end up with an image that seems to have an almost perfect Gaussian Distribution curve in the Histogram window. So, I reckon that I have been successful so far.
However, my argument now is that this 'integrated' image has pixels that are both above and below the Mean (or Median) ADU value - which is to be expected, naturally. But, at the very 'edges' of the distribution curve are ADU values that are present NOT because of the normal Gaussian distribution, but are present because they are one of the 'faulty' types of pixel, described above.
Now I wanted to be able to 'clip' the image data again - to extract those pixels at the 'edges' of the distribution curve. The pseudo-code I chose (easily implementable as a single statement in PixelMath) is as follows:-
// $T is the 'target image'
upper_sigma = 3
lower_sigma = 4
image_mean = mean($T)
image_stdev = sdev($T)
lower_limit = image_mean - (lower_sigma * image_stdev)
upper_limit = image_mean + (upper_sigma * image_stdev)
if ( adu_value($T,x,y) < lower_limit ) then adu_value($T,x,y) = 0
else if ( adu_value($T,x,y) > upper_limit ) then adu_value($T,x,y) = 1
else adu_value($T,x,y) = image_mean
The new image will therefore contain only three possible ADU values:
0.000 for pixels below the lower clipping point, corresponding to DEAD PIXELS
1.000 for pixels above the upper clipping point, corresponding to STUCK PIXELS
'mean value' for all other pixels (FIXED pixels being ignored, or identified as STUCK pixels by tweaking the upper_sigma multiplier)
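As a sanity check, the same three-way clip is easy to prototype outside PixelMath. Here is a NumPy sketch (the function name and default thresholds are my own, not any PixInsight API):

```python
import numpy as np

def flag_faulty(img, sigma_low=4.0, sigma_high=3.0):
    """Three-level classification of an integrated bias frame.

    Pixels below mean - sigma_low*stdev  -> 0.0 (candidate DEAD pixels)
    Pixels above mean + sigma_high*stdev -> 1.0 (candidate STUCK pixels)
    Everything else                      -> the image mean
    """
    m, s = img.mean(), img.std()
    lo, hi = m - sigma_low * s, m + sigma_high * s
    out = np.full_like(img, m, dtype=float)  # default: the image mean
    out[img < lo] = 0.0                      # clipped low
    out[img > hi] = 1.0                      # clipped high
    return out
```

For example, a 10x10 frame of 0.5 with one pixel at 0.0 and one at 1.0 classifies exactly those two pixels as faulty and sets everything else to the mean.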
In reality, all we need to do is identify any of these faulty pixel types, because the next step would be identical anyway, i.e. to use some form of 'nearest neighbour' interpolation to recreate the faulty data. In fact, I envisage a simple 'Faulty Pixel' mask being created directly from a (modified) version of the above pseudo-code, where all 'good' pixels are set to zero (i.e. masked-out) and all faulty pixels are set to 1.000 (i.e. un-masked).
Then a typical 'blur' process could be applied (as a batch process using the ImageContainer) to the CalibratedLights, through the 'fault mask', just prior to the calibrated lights being StarAligned. Obviously, after StarAlignment the original pixel-to-sensor correspondence is lost, so the cosmetic correction can no longer be applied.
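The 'blur through the mask' step could equally well be a direct nearest-neighbour repair. Here is a sketch of that idea (my own helper, nothing PixInsight-specific), replacing each masked pixel with the median of its good 3x3 neighbours:

```python
import numpy as np

def repair_faulty(img, fault_mask):
    """Replace each pixel flagged in fault_mask with the median of its
    3x3 neighbourhood, ignoring other faulty pixels in that neighbourhood."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(fault_mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = img[y0:y1, x0:x1]
        good = ~fault_mask[y0:y1, x0:x1]  # neighbours that are not faulty
        if good.any():
            out[y, x] = np.median(patch[good])
    return out
```

The advantage over a masked blur is that the faulty value itself never contaminates the replacement, since it is excluded from its own neighbourhood.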
Am I missing something?
Is there an easier way?
Does anybody care?
Cheers,