Hi Bernd,
Thank you for your detailed analysis and insights on this important topic, and sorry for being inattentive.
As you have pointed out, I described the purpose of the dark frame optimization threshold parameter in
a forum post, back in 2014. As noted in that post, the main problem here is contamination of master dark frames with readout noise. This noise is additive and normally distributed, and acts like poison for
our adaptive dark frame optimization algorithm. Basically, the problem is that the model this optimization algorithm is based on:
I = I0 + k*D + B
is no longer valid if the master dark frame contains a significant amount of additive noise. Recall that in the above equation, I is the uncalibrated science frame, I0 is the dark-and-bias-subtracted frame, k >= 0 is the dark frame optimization factor, D is the master dark frame, and B is the master bias frame. Strictly, this model requires D to contain dark current signal exclusively. Since our algorithm relies on noise evaluation techniques, when D contains additive noise the computed optimization factor k comes out too low, and hence tends to undercorrect the dark current.
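To make that bias concrete, here is a minimal sketch in Python/NumPy of the general idea: choose k as the value that minimizes a robust noise estimate of the residual I - B - k*D. This is only an illustration under that assumption, not the actual ImageCalibration implementation; the grid search and the MAD-based noise proxy are stand-ins for the real noise evaluation.

```python
# Illustrative sketch only, not the actual implementation. It shows why
# additive noise in D biases k downward: the larger k is, the more of that
# noise is injected into the residual, so a noise-minimizing search settles
# on a smaller k than the true dark current scaling would require.
import numpy as np

def mad_sigma(x):
    """Simple robust noise proxy: scaled median absolute deviation."""
    x = np.asarray(x, dtype=np.float64)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def optimize_dark_factor(I, D, B, k_grid=None):
    """Return the k >= 0 that minimizes the noise of I - B - k*D."""
    if k_grid is None:
        k_grid = np.linspace(0.0, 2.0, 201)
    noise = [mad_sigma(I - B - k * D) for k in k_grid]
    return float(k_grid[int(np.argmin(noise))])
```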
The optimization threshold feature is a brute-force, but efficient, solution to this limitation. It simply discards very low-valued pixels in the master dark frame before computing the optimization factor k, on the assumption that the additive noise component dominates at relatively low pixel values. By setting all master dark frame pixels below the optimization threshold to zero (zero pixels are ignored by the algorithm when computing statistical moments and noise estimates), we can remove most of the additive noise component. This also removes part of the true dark current signal, but if the threshold is set reasonably, the surviving master dark frame pixels are enough to calculate a good optimization factor. In practice, this only fails when the master dark frame is of very poor quality, and in that case the entire calibration process is typically meaningless anyway.
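For completeness, here is a hedged sketch of how that thresholding step could look, under the same assumptions as the previous snippet (again a stand-in, not the PixInsight code): master dark pixels below the threshold are set to zero, and zero pixels are excluded from the statistics used to find k.

```python
# Hedged sketch under the same assumptions as the previous snippet; the
# threshold value, the grid search, and the noise proxy are placeholders.
import numpy as np

def mad_sigma(x):
    """Same robust noise proxy as in the previous sketch."""
    x = np.asarray(x, dtype=np.float64)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def optimize_dark_factor_thresholded(I, D, B, threshold, k_grid=None):
    """Zero out low master dark pixels, then find k using only the survivors."""
    if k_grid is None:
        k_grid = np.linspace(0.0, 2.0, 201)
    Dt = np.where(D < threshold, 0.0, D)   # discard noise-dominated dark pixels
    mask = Dt != 0                         # zero pixels are ignored in the statistics
    noise = [mad_sigma((I - B - k * Dt)[mask]) for k in k_grid]
    return float(k_grid[int(np.argmin(noise))])
```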
Your analysis is very useful because it shows that the noise thresholding technique implemented by our optimization threshold feature is quite sensitive to the selected threshold value. Your final conclusion is very interesting:
"the important criterion for the right choice of the OT must not be to maximize the correlation coefficient k0, but instead to maximize the pixel count in the calibrated images."
It is clearly true that trying to maximize the optimization factor makes no sense. After all, we are designing an algorithm precisely to compute k0, so trying to force it to have a minimum or maximum value is conceptually wrong, at least without some suitable
a priori knowledge, which we obviously don't have here. Minimizing the number of zero pixels after calibration, as a way to "optimize the dark frame optimization" (sounds nice!), had not occurred to me. This is something we'll have to think about more thoroughly.
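To make sure I have understood the proposed criterion correctly, here is a rough sketch of how I read it, with hypothetical helper names: sweep candidate threshold values, compute k for each one, calibrate, and keep the threshold whose calibrated frame retains the largest number of non-zero (unclipped) pixels.

```python
# Rough, hypothetical sketch of the proposed selection criterion; the
# optimizer passed in could be the thresholded one sketched earlier.
import numpy as np

def select_threshold_by_pixel_count(I, D, B, thresholds, optimize_k):
    """Return the threshold whose calibrated frame keeps the most non-zero pixels."""
    best_t, best_count = None, -1
    for t in thresholds:
        k = optimize_k(I, D, B, t)
        calibrated = np.clip(I - B - k * D, 0.0, None)   # negative pixels clip to zero
        count = int(np.count_nonzero(calibrated))
        if count > best_count:
            best_t, best_count = t, count
    return best_t, best_count
```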
Again, thank you for your detailed review of this topic.