(i) Uncertainties in both the light frame and the master dark frame, as well as the signal in the light frame, remain constant during the whole dark scaling process. Since they remain constant, we can simply ignore all of them during the whole process.
Here, I am assuming that any image, i, can be considered to be a combination of signal, s, and uncertainty, u (or 'noise').
This allows us to write
i = s + u
Because, at this early stage of the calibration process, we have NOT yet stacked our Lights, we are only talking about a SINGLE (individual, raw) Light frame, and therefore our aim is to eliminate the u component such that we have
i = s
which is great, assuming that we can establish some representation of the uncertainty (the 'noise'), u.
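(To fix the idea in my own head, here is a toy numpy sketch of that i = s + u decomposition. The arrays s, u and i are just the symbols above, and all of the numbers are arbitrary - this is purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'signal' image, s: a smooth gradient standing in for real object/sky flux.
s = np.linspace(100.0, 200.0, 256) * np.ones((256, 256))

# Zero-mean 'uncertainty', u: standing in for shot/read/dark noise.
u = rng.normal(0.0, 5.0, size=s.shape)

# The recorded image is simply the sum of the two.
i = s + u

print("std of the u component:", (i - s).std())   # ~5, by construction
```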
Now, at this point we can 'ignore' the whole issue of Flat frames - they can be considered a 'special instance' of Lights, whose sole purpose is to identify optical anomalies in a given imaging train. In other words, any process we choose to apply to our Lights (using, perhaps, Darks and Biases) can be applied generically to our Flats.
At this point, however, I do not understand the "Since they remain constant, we can simply ignore all of them" statement.
(ii) Dark current, as recorded in the master dark frame, consists exclusively of small-scale structures. By small-scale structures, we refer to pixel-to-pixel variations. We use a wavelet transform to isolate these variations in the first wavelet layer, as image structures at the 1-pixel scale
This statement reflects the fact that 'dark noise' (the dark current that creates 'uncertainty' in our desired image, i) acts on a PIXEL-by-PIXEL basis. So, a pixel that may be highly susceptible to dark current (e.g. a 'hot' pixel) has no specific ability to influence the behaviour of its nearest neighbours.
This is why a 1-pixel-scale wavelet approach allows these local, pixel-to-pixel variations to be assessed.
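To convince myself of this, here is a rough sketch of extracting a 'first wavelet layer' - I have assumed one separable B3-spline smoothing pass (the usual à trous starting point), but this is my own illustration, NOT the actual ImageCalibration wavelet code:

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline smoothing kernel

def first_wavelet_layer(img):
    # One separable smoothing pass; the residual holds the ~1-pixel structures.
    smooth = convolve1d(convolve1d(img, B3, axis=0, mode='reflect'),
                        B3, axis=1, mode='reflect')
    return img - smooth

dark = np.zeros((64, 64))
dark[32, 32] = 100.0                  # an isolated 'hot' pixel
layer1 = first_wavelet_layer(dark)
print(layer1[32, 32])                 # ~86: most of the hot pixel lands in layer 1
```

The point being that the hot pixel's contribution stays essentially where it is - it does not leak far into its neighbours, so it is captured almost entirely at the 1-pixel scale.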
(iii) A necessary precondition is that the uncertainty in the measurement of the dark current is negligible in the master dark frame
Presumably this situation is created by acquiring sufficient individual Dark frames, and then integrating these in a statistically robust manner to generate a MasterDark where the u component has been minimised.
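A quick numpy sketch of why stacking should achieve this - I have used a plain median as the 'statistically robust' combination, whereas real integration tools add sigma-clipping and the like:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed per-pixel dark signal, plus independent noise in each individual Dark.
dark_signal = rng.uniform(10.0, 20.0, size=(128, 128))
N = 50
darks = dark_signal + rng.normal(0.0, 4.0, size=(N, 128, 128))

master_dark = np.median(darks, axis=0)

print("u in a single Dark :", (darks[0] - dark_signal).std())     # ~4
print("u in the MasterDark:", (master_dark - dark_signal).std())  # ~4 * 1.25 / sqrt(50)
```

So the dark signal itself survives the integration untouched, while the u component shrinks roughly as the square root of the number of frames.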
It would seem, therefore, that although the overall concept proposed within ImageCalibration allows a MasterDark to be more 'widely applicable' (i.e. it is no longer quite so important to match Lights and Darks in both temperature and exposure time), there is a fundamental requirement to have taken sufficient individual Dark frames to ensure that the MasterDark is a 'good' one.
What is not clear here is whether it is better to create a MasterDark based on a set of LONG exposures, or whether short exposures would create the same end result. What is also not specifically stated, although I am assuming that it is implied (for statistical 'robustness'), is that the series of individual Darks themselves MUST be temperature and time correlated. Can someone clarify this?
(iv) Another necessary precondition is that the light frame and the master dark frame are correlated. This means that both frames share the same dark signal, although scaled by a factor k > 0. Our algorithm tries to find the value of k
I am confused by this statement almost straight away. Initially I assumed that this referred to the notion of 'temperature correlation', whereby all calibration frames that are to be used with each other ARE correlated in terms of temperature. If this is NOT the case - as is 'hinted at' towards the end of Juan's reply - then why bother with closed-loop TEC cooling of CCDs in the first place?
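For what it is worth, my reading of (iv) is that 'correlated' here means the Light and the MasterDark share the same per-pixel dark pattern, just scaled by k - and I am GUESSING that k is found numerically, by minimising whatever small-scale structure (in the sense of point (ii)) is left after subtraction. The sketch below is my own toy reconstruction of that guess, NOT the actual ImageCalibration algorithm:

```python
import numpy as np
from scipy.ndimage import convolve1d
from scipy.optimize import minimize_scalar

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def small_scale_noise(img):
    # Spread of what survives one B3-spline smoothing pass: a crude stand-in
    # for 'noise measured in the first wavelet layer'.
    smooth = convolve1d(convolve1d(img, B3, axis=0, mode='reflect'),
                        B3, axis=1, mode='reflect')
    return (img - smooth).std()

def find_k(light, master_dark):
    # Search for the k > 0 that leaves the least small-scale structure
    # in the dark-subtracted frame.
    res = minimize_scalar(lambda k: small_scale_noise(light - k * master_dark),
                          bounds=(0.0, 10.0), method='bounded')
    return res.x

# Synthetic test: a Light that contains half of the MasterDark's dark signal.
rng = np.random.default_rng(2)
dark_signal = rng.uniform(0.0, 50.0, size=(128, 128))   # pixel-to-pixel pattern
light = 500.0 + 0.5 * dark_signal + rng.normal(0.0, 2.0, size=(128, 128))

print(find_k(light, dark_signal))   # lands close to the true k = 0.5
```

If that guess is anywhere near right, it would at least explain why an exact temperature/exposure match becomes less critical: the algorithm measures the dark signal actually present in the Light and scales the MasterDark to fit, rather than relying on the acquisition conditions matching.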
OK - let me throw these thoughts out for discussion, before I try and break down the remainder of Juan's reply in my mind.
Cheers,