ImageCalibration and negative values (KAF 8300)

bitli

I have made extensive tests with bias and dark frames on a KAF-8300. I may make the results available as time permits.
The bottom line is that because the chip has a very low dark current compared to its read noise, subtracting the bias from the dark often results in negative values for many pixels (at least for darks of less than 10 min at -30C). These values are truncated to zero and make the calibrated dark look pretty bizarre.
There are multiple technical solutions when calibrating a dark, like adding a pedestal, not using the dark or the bias, and so on. I need to investigate more to see the impact of each solution (more out of curiosity than need; dithering seems to solve most of the problems anyhow, and the dark is not critical for short exposures).

Now the question:
Even when doing the ImageCalibration of lights with 'calibrate' (so dark/bias/flat done in one operation), it seems that ImageCalibration truncates the intermediate result 'dark - bias'. Is this right? Or did I misinterpret the results?
If ImageCalibration does the truncation, should it not truncate only the final calculated image? With the light added, the result is always positive, so there would be no truncation in the process. Or should it log a message like '1234567 pixels of the dark truncated to zero' to draw attention to that specific problem?
thanks for any info
-- bitli

 
Salut Jean-Marc

Yes, I know this issue; it happens to me too, but it depends on the calibration method. It can be worse with a hazardous dark optimisation coefficient (>1.0). With 50 darks I have far fewer problems than with only 10 or 20 darks (in those cases, I got "no correlation between...").
Anyway, it is now better with my KAF16803 than my "ex" KAI4022 or ICX694

A message reporting the pixels truncated to 0 would be nice, BUT why couldn't PI work in floating point with negative values? Would that be a problem? Maybe yes, with some algorithms. :-[

Maybe Juan will answer

Philippe

 
There are multiple technical solutions when calibrating a dark, like adding a pedestal ...

An additive pedestal common to all calibrated frames is the only valid solution to this problem.

it seems that ImageCalibration truncates the intermediate result 'dark - bias'. Is this right?

Actually, no. Truncation to the [0,1] range is carried out as the very last step of the calibration task for each frame, i.e. after overscan, bias, dark, flat and pedestal correction.
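The order of operations Juan describes can be sketched in NumPy (a toy model, not PixInsight's actual code; the function name and arguments are illustrative, and overscan correction is omitted for brevity):

```python
import numpy as np

def calibrate(light, master_bias, master_dark, master_flat, pedestal=0.0):
    """Toy single-frame calibration: truncate only as the very last step."""
    out = light - master_bias        # bias correction
    out = out - master_dark          # dark correction (master dark is bias-subtracted)
    out = out / master_flat          # flat-field correction
    out = out + pedestal             # optional output pedestal
    # Intermediate results above are allowed to go negative; only here,
    # at the very end, is the frame truncated to the [0,1] range.
    return np.clip(out, 0.0, 1.0)
```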

Or should it log a message like '1234567 pixels of the dark truncated to zero' to attract attention to that specific problem?

This seems reasonable. I'll try to add this warning in the next version of the ImageCalibration module.

why PI could not work in floating point using negative values ? Should it become a problem ?

Without truncation, the calibration process would be inaccurate and hence useless. For image calibration to work, the whole data set, including all bias, dark, flat, science and calibrated frames, must be referred to a common numeric range. This is what we call a coherent data set, where all of the data share a common physical significance. When a negative value arises after subtraction of a bias frame, it should be considered as an outlier, or an invalid observation. It falls outside the common reference range and hence must be truncated to the range's lower bound (i.e. zero). The alternative to truncation is rescaling. Now imagine that we rescale each calibrated frame to make room for its negative values. Then the resulting calibrated data would be incoherent, without a common reference range, and hence meaningless.
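The coherence argument can be illustrated numerically (a sketch; the signal level and noise figures are made-up values in the normalized [0,1] range, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two bias-subtracted frames of the same faint source: zero-mean read
# noise pushes some pixels negative.
signal = 0.002
frame_a = signal + rng.normal(0, 0.001, 100_000)
frame_b = signal + rng.normal(0, 0.001, 100_000)

# Truncation: the outliers are clipped, but both frames keep the SAME
# zero reference, so the data set stays coherent.
trunc_a = np.clip(frame_a, 0.0, 1.0)
trunc_b = np.clip(frame_b, 0.0, 1.0)

# Rescaling instead: each frame would get its own zero point (its own
# minimum), so equal pixel values would no longer mean equal signal
# across frames.
zero_a = frame_a.min()    # the value that would map to 0 in frame a
zero_b = frame_b.min()    # a different value in frame b
print(f"zero point of a: {zero_a:.6f}, of b: {zero_b:.6f}")
```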

When truncated zero pixels become a problem because of too-high bias values, the solution is to add a constant pedestal to all calibrated frames. The constant pedestal is added after calibration, just before truncation, and will be removed automatically when the corresponding master frame is applied in a subsequent calibration (pedestal values are stored as standard PEDESTAL FITS keywords). Typically, a small pedestal between 50 and 100 DN is sufficient.
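A quick simulation shows why the pedestal matters for a master dark. The read noise and dark current figures below are taken from this thread's KAF-8300 discussion, but the exact values (and the bias model) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

bias_level = 500.0
read_noise = 23.0          # ADU, roughly the KAF-8300 figure cited above
dark_current = 3.0         # ADU, a short exposure at -30C

dark = bias_level + dark_current + rng.normal(0, read_noise, 100_000)
bias = bias_level + rng.normal(0, read_noise / 5, 100_000)  # smoothed master bias

# Without a pedestal: nearly half the pixels of (dark - bias) fall below
# zero and are clipped, biasing the apparent dark level upward.
no_pedestal = np.clip(dark - bias, 0, None)

# With a pedestal: add a constant before clipping, so the full noise
# distribution survives; the pedestal value is recorded (PI uses the
# PEDESTAL FITS keyword) and removed in the next calibration step.
pedestal = 100.0
with_pedestal = np.clip(dark - bias + pedestal, 0, None)
recovered = with_pedestal - pedestal

print(f"clipped pixels without pedestal: {(no_pedestal == 0).sum()}")
print(f"clipped pixels with pedestal:    {(with_pedestal == 0).sum()}")
print(f"mean dark signal, no pedestal: {no_pedestal.mean():.2f} ADU")
print(f"mean dark signal, recovered:   {recovered.mean():.2f} ADU")
```

Note how clipping alone inflates the mean dark signal well above the true few-ADU level, while the pedestal preserves it.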
 
it is now better with my KAF16803
I guess this is an alternative solution; can you send me one? ;)

Philippe,
Having more darks helps a lot indeed. So does taking darks warmer and longer... But I am trying to find a solution for cases where you have little control over the number of darks (you receive the darks, you reprocess an old image, you do not have the time or access to take 50 darks...), or at least to make people aware of the problem. Considering the number of users getting the 'no correlation between...' error, this seems to be a common situation.

Juan, thank you very much for your clear explanation. I thought of the negative numbers as working more or less like an 'automatic pedestal', but using a real one is fine. Any kind of warning for this specific problem would be useful. I need to think a bit more about your explanations, make some tests, and may come back with more questions.

Have a clear sky! (here, bias and darks are the only images I can take with some chance of success....)
-- bitli
 
Jean-Marc

Calibrate without dark optimisation? The coefficient will be 1.0 and you will not get the "no correlation..." error, but maybe more noise in each calibrated frame.

In fact, there are two methods:

1) The full one: make a master bias; calibrate all darks with the master bias; make the master dark; calibrate all flats with the bias or dark; make the master flat; then calibrate all images. This is the option most likely to produce a "no correlation..." error.

In that case, try:
2) Make a master bias; make the master dark from the raw darks directly; calibrate the flats with the master bias; make the master flat; calibrate all images. This works better in such cases.

If that still does not work, disable dark optimisation.
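The two workflows can be sketched with plain NumPy standing in for PixInsight's ImageCalibration/ImageIntegration (a toy model; the function names, the pedestal value, and the simple average-combine are all illustrative assumptions, not PI's implementation):

```python
import numpy as np

def integrate(frames):
    """Average-combine a list of frames into a master (toy version)."""
    return np.mean(frames, axis=0)

def method_1(bias_frames, dark_frames, flat_frames, lights, pedestal=100.0):
    """Full workflow: bias-subtract every dark before integrating."""
    master_bias = integrate(bias_frames)
    # A pedestal keeps (dark - bias) from clipping at zero (see Juan's answer).
    master_dark = integrate([np.clip(d - master_bias + pedestal, 0, None)
                             for d in dark_frames])
    master_flat = integrate([f - master_bias for f in flat_frames])
    master_flat /= np.median(master_flat)        # normalize the flat
    return [(l - master_bias - (master_dark - pedestal)) / master_flat
            for l in lights]

def method_2(bias_frames, dark_frames, flat_frames, lights):
    """Integrate the raw darks directly; the master dark then includes the bias."""
    master_bias = integrate(bias_frames)
    master_dark = integrate(dark_frames)         # bias still inside the dark
    master_flat = integrate([f - master_bias for f in flat_frames])
    master_flat /= np.median(master_flat)
    return [(l - master_dark) / master_flat for l in lights]
```

On synthetic data both methods recover the same scene; the difference in practice is that method 2 never forms the noisy per-frame (dark - bias) differences that trigger the clipping problem.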
 
Juan Conejero said:
Without truncation, the calibration process would be inaccurate and hence useless. [...] Typically, a small pedestal between 50 and 100 DN is sufficient.

Yes, this seems logical 8)
 
Hi,
Thanks Philippe, I know the methods. But method 1 in particular just does not work without a PEDESTAL in the case I am testing. Unfortunately I did not bother looking for the PEDESTAL functionality in PI this weekend, as I thought it was related to DSLRs (I confused it with another program). There is a doc here http://pixinsight.com/forum/index.php?topic=2182.0, and the tooltip is quite clear.

My understanding of what happens is:
  • The read noise of the 8300 (about 23 ADU) is much larger than the dark current for a few minutes at -30C (a few ADU).
  • Therefore many pixels of the darks are below the level of their corresponding bias pixels, due to the statistical dispersion caused by the read noise (even assuming a good bias with less than 1 ADU dispersion).
  • A straight subtraction of the bias from the dark without a PEDESTAL results in many values clipped to zero.
It works better if the darks are first integrated without calibration, as the dispersion of the pixels is lower, so in this case method 2 is better. But there are still too many pixels below zero, at least for the number of darks I have access to (you really need a lot to get the dispersion below a couple of ADU).
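The effect of the number of darks can be simulated. Assuming the ~23 ADU read noise and ~3 ADU dark signal quoted above (illustrative figures, with the master bias taken as noiseless for simplicity), the fraction of master-dark pixels landing below the bias level shrinks only slowly with N, since the dispersion goes as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(3)

read_noise = 23.0    # ADU, roughly the KAF-8300
dark_signal = 3.0    # ADU, a few-minute exposure at -30C
n_pix = 100_000

def fraction_below_bias(n_darks):
    """Fraction of pixels in an n-dark average that fall below the bias level.

    The frames are modeled as (dark signal + read noise), with the bias
    already subtracted, so 'below bias' simply means 'negative'.
    """
    darks = rng.normal(dark_signal, read_noise, (n_darks, n_pix))
    master = darks.mean(axis=0)      # dispersion shrinks as sqrt(n_darks)
    return (master < 0.0).mean()

for n in (1, 10, 50):
    print(f"{n:3d} darks -> {fraction_below_bias(n):.1%} pixels below bias")
```

Even with 50 darks a substantial fraction of pixels remains below the bias level, which matches the observation that simply taking more darks does not fully solve the problem.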

Maybe there is something wrong in my measurements or calculations, but I would need to understand what. Blindly applying another recipe does not satisfy me (besides, I can just take more darks when possible, but that is not what I am looking for).

(EDITED March 2)
Looking at multiple independent sets of darks/bias, statistical dispersion alone seems insufficient as an explanation. Almost all the bias frames have a median value over a small central area slightly higher (10 to 15 ADU) than the median of the same area in most of the darks. So there may also be something happening in the driver (like an overshoot of an automatic overscan compensation, or some setting I did not see). There is an interesting study on the subject at
http://www.iceinspace.com.au/forum/showthread.php?t=112592
although it applies to a different manufacturer. I have contacted the manufacturer of the camera (Moravian Instruments) to see if they have an explanation, and I'll keep you informed.
-- bitli

 
Working with the manufacturer (Moravian, very helpful) and doing additional tests, I found other causes of calibration problems. I share them here in case they may be useful to others:

Bias level: The camera (Moravian G2 8300) does not do any correction in firmware or driver (some other brands do, and all DSLRs do). The analog path (reading and converting the signal level) is calibrated so that zero signal results in a level of about 500. The analog part is not cooled to the same temperature as the CCD and, considering analog component tolerances, the zero point may differ slightly with the temperature of these components. Usually you would expect the level to be higher with temperature and exposure, but it may be about the same or even slightly lower. Because the dark signal is so low compared to the noise, it is easy to have pixels below the bias level in short darks.
The manufacturer's recommended practice is to use only darks, taken at the same temperature and with the same duration. I leave it to the reader to work out whether bias and pedestal can still be used with some benefit in this case (see Juan's answer earlier in this thread).

A cause of 'No correlation...' error:
An independent cause of error was much more mundane. I took some darks in my garage and had to use a different program from the one used for imaging on the telescope. The darks were vertically mirrored compared with the images.... This is easy to see if you blink an image with a dark and look at the hot pixels. I do not dare add a check for reversed darks to the wish list ;)

The bottom line is that blindly following recommendations like 'take 400 bias and 200 darks and it will be fine' does not (always) work. In case of problems, you have to understand the root cause(s).
-- bitli
 
Hi Jean-Marc


There is always a good reason why the process cannot work...  >:D
The most important (and also the most difficult) thing is to find that reason.


Glad to see the problem is known and that you managed to work around it and find a solution in PI.
Maybe I would declare the first line of my images as extra dummy pixels (overscan) to equalize bias and darks.
 