Dark Current Suppression and the PI image calibration process

georg.viehoever

Well-known member
Hi,

I recently came across this article http://www.clarkvision.com/articles//dark-current-suppression-technology/ by Roger N. Clark, who says that recent DSLR sensors use a technology called "dark current suppression" - in effect subtracting dark current in the sensor electronics, not computationally. It left me wondering what this means for the PI image calibration process for DSLRs that have dark current suppression:

1. Does dark frame adaptation make any sense with this?
2. Maybe it is better to work without any darks at all (since darks also introduce noise), or to use an artificial master dark frame with a constant value.

What do you think?

Georg
 
Hi Georg,

The only way to know is to experiment first. I can't tell you anything without trying it on images from this kind of camera.

V.
 
Vicent,

If I read the article correctly, this should apply to any Canon DSLR made after 2008... so maybe someone else on this forum has some experience with this.

Georg
 
Hi Georg,

Sorry, I didn't read the article; from your message I thought this was some kind of new technology. I've been working with a lot of DSLR data sets from people coming to my workshops and the usual settings have always worked fine, so I guess you don't need anything special. Anyway, I will experiment with some of those data sets as soon as possible.

Thanks,
Vicent.
 
You can probably get away with no dark frames at all if your images are properly dithered and stacked with outlier rejection.  But as Vicent said, the best way is to run a test and see whether it really works.  I did this a while ago:
http://www.asiaa.sinica.edu.tw/~whwang/gallery/random_notes/dark_tests/index.html
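For anyone who wants to see the mechanics, here is a rough sketch in plain NumPy (not PixInsight's actual rejection code, and the frame data is synthetic) of why dithering plus outlier rejection can stand in for dark subtraction: after registration, a hot pixel lands on a different sky position in every frame, so per-pixel sigma clipping discards it.

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0, iterations=3):
    """Average a stack of registered frames, iteratively rejecting
    per-pixel outliers more than kappa sigmas from the running mean."""
    stack = np.stack(frames).astype(np.float64)
    mask = np.zeros(stack.shape, dtype=bool)
    for _ in range(iterations):
        data = np.ma.masked_array(stack, mask)
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        bad = np.abs(stack - mean) > kappa * std
        mask |= np.ma.filled(bad, False)
    return np.ma.masked_array(stack, mask).mean(axis=0).filled(0)

# Simulated stack: flat sky of 100 ADU plus one hot pixel that, thanks
# to dithering, lands on a different position in each registered frame.
rng = np.random.default_rng(0)
frames = []
for i in range(16):
    f = rng.normal(100.0, 5.0, (32, 32))
    f[i, i] = 5000.0          # hot pixel, shifted per frame by the dither
    frames.append(f)
result = sigma_clip_stack(frames)
print(result.max())           # hot pixels rejected; max stays near the sky level
```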
 
You will find that Roger is a very controversial figure online.  Canon and Sony (of star-eater fame) both implement noise-suppression technology, and at least on newer Canons you have the option of enabling what is effectively single-shot dark frame subtraction.  In both cases you will either remove real features or increase shot noise.  In Roger's comparison image, what you are really looking at is a mix of amp glow and fixed pattern noise that has definitely improved.  I can also tell you, having used a T3 and an SL1 in hot conditions for astrophotography, that they noticeably benefit from dark frame subtraction, especially during summer.

There are alternative methods, for example
https://www.photonics.com/Article.aspx?AID=44298 

I actually use the hot pixel map method myself, BUT I'm also using a cooled Sony CCD at -25C, so YMMV.
 
I read the article and the most scholarly paper to which it links, here: http://ericfossum.com/Publications/Papers/2014%20JEDS%20Review%20of%20the%20PPD.pdf

I have to say that I consider Clark's conclusions to be largely erroneous.  They appear to be based on a fundamental misconception of the dark current suppression issue that affects Canon DSLRs.

This issue was identified by Craig Stark, documented here:  http://www.stark-labs.com/craig/resources/Articles-&-Reviews/CanonLinearity.pdf

I have reproduced his results through my own tests documented here: http://www.blackwaterskies.co.uk/2015/02/pixinsight-dslr-workflow-part-2a-dark.html
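The measurement behind these tests boils down to tracking two numbers as exposure time or temperature changes: the median level of a dark frame (accumulated dark current) and its temporal noise. A hypothetical NumPy sketch with made-up numbers (not measurements from any real camera) shows the signature we actually observe: the level stays flat while the noise keeps growing, exactly what you would expect if the dark current is subtracted after accumulation.

```python
import numpy as np

def dark_frame_stats(frame_a, frame_b):
    """Median level tracks accumulated dark current; the spread of the
    difference of two frames isolates the random (temporal) noise from
    any fixed-pattern structure. Dividing by sqrt(2) gives the
    per-frame noise."""
    level = float(np.median(np.stack([frame_a, frame_b])))
    noise = float(np.std(frame_a - frame_b) / np.sqrt(2))
    return level, noise

# Toy model of a "suppressing" camera: dark current accumulates with
# exposure time t and is then subtracted as a global offset, but the
# Poisson noise it generated stays behind.
rng = np.random.default_rng(1)
levels, noises = [], []
for t in (60, 300, 600):                       # exposure time in seconds
    dark_e = 0.1 * t                           # accumulated dark signal
    a, b = (rng.poisson(dark_e, (256, 256)).astype(float) - dark_e
            for _ in range(2))
    level, noise = dark_frame_stats(a, b)
    levels.append(level)
    noises.append(noise)
    print(f"{t:4d} s   level {level:6.2f}   noise {noise:5.2f}")
```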

To summarise, Clark's proposal is that dark current suppression technology is a relatively new introduction to DSLR sensors, and that it is responsible for the (apparent) reduction in dark current and the accompanying dark current noise that occurred between approximately 2005 and 2014.  Clark dismisses the idea that dark current suppression in these cameras is implemented in firmware/software.

Clark reaches these conclusions largely through visual comparisons of dark frames from Canon 1DMkII and 7DMkII cameras, plus some fairly dubious histogram-stretching techniques in Photoshop, rather than through sound statistical methods.

My view:

- The pinned photodiode (PPD) that Clark claims to be the magic technology here has been in production use in DSLR sensors since at least 1995.  No doubt incremental improvements have been made over time, but looking at the literature the implementation doesn't appear to have changed dramatically between 2005 and 2014.  This technology does indeed reduce dark current and dark current noise in the sensor elements, but nowhere in the literature is it claimed to eliminate these issues entirely; only Clark, by implication from his overall article, suggests that.

- Clark contradicts his own assertions that dark current suppression by software/firmware is not used and is not necessary in modern DSLRs.  He refers to the ability of Canon cameras to take a second exposure of equal length with the shutter closed and subtract it from the light frame on-camera to reduce dark current, clearly acknowledging that dark current and accompanying dark current noise still exist, and that software suppression techniques are available as an option.

- The tests performed by Stark (which anyone can reproduce) show that (for the Canon cameras tested) apparent dark current reduces with increasing temperature and/or exposure time (contrary to what would be expected), but that dark current noise increases.  PPD technology reduces generation of dark current and thus the dark current noise that accompanies it; it does not "subtract" the dark current in any way.  The only reasonable conclusion is that remaining dark current (after accounting for the PPD improvements) was accumulated but has later been suppressed since the accompanying dark current noise remains in the image.

- Clark claims that it would be impossible to suppress the dark current in firmware/software without having a large library of calibration frames stored on camera for different exposure lengths and temperatures.  This is arrant nonsense, as an optical black area (a set of masked off pixels at the edge of the sensor) can be used to measure the accumulated dark current for each light frame.  It might be that a single factory-created "master dark" is stored in the camera firmware and scaled using the optical dark measurements, or it might more simply be an offset calculated from the optical dark that is subtracted globally from each pixel.
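To illustrate, here is a minimal sketch of the kind of row-wise optical-black subtraction described in the last point (purely hypothetical; nobody outside Canon knows the real firmware): the mean over the masked columns is subtracted from each row, and a bright hit inside the masked strip biases that row's estimate, over-subtracting the whole row. The margin width of 142 pixels matches the 600D figures discussed later in this thread; everything else is synthetic.

```python
import numpy as np

def subtract_optical_black(raw, overscan_cols=142):
    """Per-row suppression sketch: estimate each row's dark level from
    the masked (optical black) columns on the left and subtract it
    from the whole row. A mean-based estimate is vulnerable to a
    bright hit inside the masked strip, which darkens that row."""
    raw = raw.astype(np.float64)
    row_offset = raw[:, :overscan_cols].mean(axis=1, keepdims=True)
    return raw - row_offset

rng = np.random.default_rng(2)
frame = rng.normal(50.0, 2.0, (100, 500))   # uniform dark level of 50 ADU
frame[40, 10:20] += 3000.0                  # bright hit inside the overscan
out = subtract_optical_black(frame)
# Row 40's offset is biased high by the hit, so its image-area pixels
# come out darker than those of neighbouring rows:
print(out[40, 142:].mean(), out[41, 142:].mean())
```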

I don't think anyone is disputing that the quality of sensors and the accompanying camera technology has improved over the years, leading to cleaner images with less noise, but to pin it solely on a 20+ year-old piece of technology and dismiss any suggestion that manufacturers use firmware to clean up images is not credible.

What does this mean for DSLR users trying to calibrate their light frames?  I reached the same conclusion as Vicent: do not assume anything, and perform your own tests for your particular model of camera.  The firmware will vary between manufacturers and between models over time, so there is no hard and fast "rule" that can be applied to all DSLRs.

My rules of thumb, after you have performed your own tests:

- If your CMOS camera behaves normally (i.e. dark current and dark current noise increase proportionately with time/temperature) then creating and subtracting a master dark frame will work.

- If your CMOS camera suffers from repeatable banding, amp glow or fixed pattern noise, then you should turn off PI's dark frame optimisation and use a time and temperature matched master dark.  Otherwise it will likely over or under-correct and those artifacts will only be partially corrected (or indeed made more visible).  This is certainly the case for my CMOS-based ZWO ASI1600MM-cool.  A temperature and time-matched master dark without optimisation works very well and deals with the amp-glow that becomes visible and increases after about 60 seconds of exposure.  With optimisation turned on, PI under-corrects by applying a scaling factor of about 0.45 despite a matched dark frame.  This is unsurprising since the optimisation process is global and I expect it is being thrown by the significant variation across the frame.

- If your CMOS camera suffers from non-repeatable banding or similar issues (as my Canon 500D does), then all bets are off.  Dark frame subtraction is likely to be problematic and just as likely to make your images worse rather than better.  Personally I found that lots of subs, large dithering scales (15 pixels plus between frames), subtraction of a master bias made from 300+ subs to deal with repeatable fixed pattern noise and judicious use of CosmeticCorrection and the CanonBanding script worked for me.
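To illustrate the optimisation point with a toy model (this is not PixInsight's actual optimisation algorithm, just a demonstration of the arithmetic): when the dark signal varies strongly across the frame, any single global scale factor other than 1.0 leaves residual structure behind. The synthetic frame below has a corner "amp glow"; a matched dark (k = 1) leaves only the noise floor, while the 0.45 factor I mentioned leaves most of the glow in place.

```python
import numpy as np

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:128, 0:128]
glow = 200.0 * np.exp(-((xx - 127.0) ** 2 + (yy - 127.0) ** 2) / 800.0)
master_dark = 20.0 + glow                      # uniform dark current + corner glow
light = 500.0 + master_dark + rng.normal(0.0, 3.0, (128, 128))  # flat target

def residual_structure(light, dark, k):
    """Spatial structure left after subtracting k * dark; for a flat
    target this should sit near the noise floor when k is right."""
    r = light - k * dark
    return float(r.max() - np.median(r))

print(residual_structure(light, master_dark, 1.00))  # matched dark: noise only
print(residual_structure(light, master_dark, 0.45))  # under-correction leaves glow
```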
 
IanL said:
- If your CMOS camera suffers from repeatable banding, amp glow or fixed pattern noise, then you should turn off PI's dark frame optimisation and use a time and temperature matched master dark.  Otherwise it will likely over or under-correct and those artifacts will only be partially corrected (or indeed made more visible).  This is certainly the case for my CMOS-based ZWO ASI1600MM-cool.  A temperature and time-matched master dark without optimisation works very well and deals with the amp-glow that becomes visible and increases after about 60 seconds of exposure.  With optimisation turned on, PI under-corrects by applying a scaling factor of about 0.45 despite a matched dark frame.  This is unsurprising since the optimisation process is global and I expect it is being thrown by the significant variation across the frame.

Hi,

We're designing an amp glow correction right now, so in the future you'll be able to optimize the amp glow subtraction.

Best regards,
Vicent.
 
vicent_peris said:
We're designing an amp glow correction right now, so in the future you'll be able to optimize the amp glow subtraction.

Thanks Vicent, looking forward to hearing more about it.
 
Hi IanL,

I already knew Craig Stark's review and agree completely with your judgement of Roger N. Clark's article. You summarize it well when you say:

IanL said:
- The tests performed by Stark (which anyone can reproduce) show that (for the Canon cameras tested) apparent dark current reduces with increasing temperature and/or exposure time (contrary to what would be expected), but that dark current noise increases.  PPD technology reduces generation of dark current and thus the dark current noise that accompanies it; it does not "subtract" the dark current in any way.  The only reasonable conclusion is that remaining dark current (after accounting for the PPD improvements) was accumulated but has later been suppressed since the accompanying dark current noise remains in the image.

- Clark claims that it would be impossible to suppress the dark current in firmware/software without having a large library of calibration frames stored on camera for different exposure lengths and temperatures.  This is arrant nonsense, as an optical black area (a set of masked off pixels at the edge of the sensor) can be used to measure the accumulated dark current for each light frame.  It might be that a single factory-created "master dark" is stored in the camera firmware and scaled using the optical dark measurements, or it might more simply be an offset calculated from the optical dark that is subtracted globally from each pixel.

I think I've found evidence that strongly supports this last possibility: I looked at the "Overscan" region (optical black area) of long-exposure (ISO 800, 6 min) dark frames from my Canon EOS 600D. For this camera, the normal image region is 5202 pixels wide and 3465 pixels high; the Overscan area is 51 pixels above and 142 pixels to the left of the normal image region. I only inspected the left optical black area, and discovered that some dark frames show a bright artifact in that area, which I take to be a cosmic-ray hit. When this artifact is strong enough (a rare event), dark banding occurs over the whole width of the image (Overscan region + image region). In the y-direction this band was a few pixels wider than the artifact. Here is a screenshot of an example crop (b/w CFA raw image), the left half showing the Overscan region and the right half a small part of the image region:

http://www.arcor.de/palb/album_popup_big.jsp?albumID=37074886&pos=12&interval=0&width=1920&height=1200
(Hope this link works, image directly didn't.)

I think this banding is the result of in-camera preprocessing: a line-by-line subtraction of values calculated from the optical black area, as you wrote above. Presumably some weighting of neighboring lines is involved, which would explain the observed broadening of the banding compared with the size of the artifact.
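For anyone who wants to repeat the inspection, here is a rough sketch of how such hits can be flagged automatically (plain NumPy on the uncropped mosaic; I believe dcraw's -E switch keeps the masked border pixels, and the 142/51-pixel margins are the 600D values from above; the frame below is synthetic):

```python
import numpy as np

def flag_overscan_hits(full_frame, left=142, top=51, kappa=8.0):
    """Flag rows whose left optical-black strip contains a bright
    artifact (e.g. a cosmic-ray hit), using a robust MAD-based sigma
    so the threshold is not biased by the hits themselves."""
    strip = full_frame[top:, :left].astype(np.float64)
    med = np.median(strip)
    mad = np.median(np.abs(strip - med))
    hits = np.abs(strip - med) > kappa * 1.4826 * mad
    return np.flatnonzero(hits.any(axis=1)) + top   # affected row indices

# Synthetic frame with 600D-style geometry (left/top margins + active
# area), bias level 2048 ADU, read noise 8 ADU, one hit in the overscan.
rng = np.random.default_rng(4)
frame = rng.normal(2048.0, 8.0, (500 + 51, 5202 + 142))
frame[100:104, 60:66] += 4000.0
print(flag_overscan_hits(frame))   # → rows 100..103
```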

I don't know whether this kind of preprocessing is also performed in other camera models, and I agree with Vicent and with your "rules of thumb": you have to test.

Bernd
 
bulrichl said:
I think this banding is the result of in-camera preprocessing: a line-by-line subtraction of values calculated from the optical black area, as you wrote above. Presumably some weighting of neighboring lines is involved, which would explain the observed broadening of the banding compared with the size of the artifact.

That's an interesting discovery and does seem to support the idea that the optical black is used to adjust the whole row.  I'd be slightly cautious, however, as I have noticed in light frames that banding can sometimes occur in the background of (whole or partial) rows adjacent to bright objects in the field of view.  I don't know the reason for this, but it might be that the two effects are related.
 
bulrichl said:
I think I've found evidence that strongly supports this last possibility: I looked at the "Overscan" region (optical black area) of long-exposure (ISO 800, 6 min) dark frames from my Canon EOS 600D. For this camera, the normal image region is 5202 pixels wide and 3465 pixels high; the Overscan area is 51 pixels above and 142 pixels to the left of the normal image region. I only inspected the left optical black area, and discovered that some dark frames show a bright artifact in that area, which I take to be a cosmic-ray hit. When this artifact is strong enough (a rare event), dark banding occurs over the whole width of the image (Overscan region + image region). In the y-direction this band was a few pixels wider than the artifact. Here is a screenshot of an example crop (b/w CFA raw image), the left half showing the Overscan region and the right half a small part of the image region:

This is just wonderful. Please could you share your data set? I should investigate this.

Please let me point out that this finding has been possible because you can read the overscan areas of DSLR images in PixInsight. Let's hope you discovered something important. :)

Best regards,
Vicent.
 
Perhaps a stupid question: is there any reason to suppose that the in-camera dark current handling of a light image may in any way differ from that of a dark image of the same exposure time?
 
astropixel said:
Perhaps a stupid question: is there any reason to suppose that the in-camera dark current handling of a light image may in any way differ from that of a dark image of the same exposure time?

It may work as well as calibrating with a single dark.  It certainly won't work as well as calibrating with a master dark integrated from multiple darks, which will inject less noise.
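A quick numerical illustration of the noise argument (toy read-noise value, nothing camera-specific): the random noise a master dark injects into each calibrated light scales as 1/sqrt(N), so a 16-frame master injects a quarter of the noise a single dark does.

```python
import numpy as np

def injected_noise(n_darks, read_noise=10.0, n_pix=200_000, seed=5):
    """Random noise that a master built from n_darks dark frames adds
    to a calibrated light: the std of the mean scales as 1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    darks = rng.normal(0.0, read_noise, (n_darks, n_pix))
    return float(darks.mean(axis=0).std())

print(injected_noise(1))    # a single dark injects its full noise (~10)
print(injected_noise(16))   # a 16-frame master injects only ~10/4 = 2.5
```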
 
IanL said:
bulrichl said:
I think this banding is the result of in-camera preprocessing: a line-by-line subtraction of values calculated from the optical black area, as you wrote above. Presumably some weighting of neighboring lines is involved, which would explain the observed broadening of the banding compared with the size of the artifact.
That's an interesting discovery and does seem to support the idea that the optical black is used to adjust the whole row.  I'd be slightly cautious, however, as I have noticed in light frames that banding can sometimes occur in the background of (whole or partial) rows adjacent to bright objects in the field of view.  I don't know the reason for this, but it might be that the two effects are related.

Thank you for calling my attention to this observation of yours, and for pointing out that I should be cautious with the conclusions.

Thereupon I checked 40 dark frames for such bright artifacts in the picture area. In about 20 of them I found artifacts that (in my experience) would have been strong enough to cause banding had they been situated in the Overscan area. However, there was no case where an artifact in the picture area led to this sort of dark banding.

Bernd
 
vicent_peris said:
bulrichl said:
I think I've found evidence that strongly supports this last possibility: I looked at the "Overscan" region (optical black area) of long-exposure (ISO 800, 6 min) dark frames from my Canon EOS 600D. For this camera, the normal image region is 5202 pixels wide and 3465 pixels high; the Overscan area is 51 pixels above and 142 pixels to the left of the normal image region. I only inspected the left optical black area, and discovered that some dark frames show a bright artifact in that area, which I take to be a cosmic-ray hit. When this artifact is strong enough (a rare event), dark banding occurs over the whole width of the image (Overscan region + image region). In the y-direction this band was a few pixels wider than the artifact. Here is a screenshot of an example crop (b/w CFA raw image), the left half showing the Overscan region and the right half a small part of the image region:

This is just wonderful. Please could you share your data set? I should investigate this.

Please let me point out that this finding has been possible because you can read the overscan areas of DSLR images in PixInsight. Let's hope you discovered something important. :)

Best regards,
Vicent.

I will gladly share my data but don't have a website of my own. On the album site of my email service provider only pictures in JPG or TIF format are allowed. So where should I upload the data? Do you want CR2 files (20 - 25 MBytes each) or XISF files (36 MBytes each)? Please send me a personal message to clarify.

Reading the Overscan area is made possible by dcraw, on which PI depends.  ;)

However, please don't expect too much.
The left Overscan area of the 600D is 142 pixels wide, while the picture area is 5202 pixels wide, so the ratio of the two areas is 142 / 5202 = 2.7 %. This implies that a cosmic-ray hit is about 37 times less likely to land in the Overscan area than in the picture area. Only strong artifacts (high intensity and an extent of at least a few pixels) give rise to this sort of dark banding. Furthermore, the band is quite narrow: only about 6 - 8 pixels more than the extent of the artifact in the y-direction.
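The arithmetic, for the record:

```python
# Hit-rate arithmetic for the EOS 600D geometry quoted above.
overscan_w, image_w = 142, 5202
print(f"{overscan_w / image_w:.1%}")   # area ratio: 2.7%
print(f"{image_w / overscan_w:.1f}")   # hits are about 37 times less likely
```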

For these reasons I believe that only a small portion of Canon banding is caused by bright artifacts and in-camera preprocessing. There must be other causes as well.

Bernd
 
astropixel said:
Perhaps a stupid question: is there any reason to suppose that the in-camera dark current handling of a light image may in any way differ from that of a dark image of the same exposure time?

I don't think so. Canon consumer cameras are made for daylight photography. Who takes dark frames? Only the small community of astrophotographers, which is hardly something Canon would care about.

Bernd
 
RickS said:
astropixel said:
Perhaps a stupid question: is there any reason to suppose that the in-camera dark current handling of a light image may in any way differ from that of a dark image of the same exposure time?

It may work as well as calibrating with a single dark.  It certainly won't work as well as calibrating with a master dark integrated from multiple darks, which will inject less noise.

What is the connection? I didn't intend to use Overscan data for calibration, as can be done with scientific CCD cameras. I was just interested in understanding what happens during in-camera preprocessing.

Bernd
 
I don't have the technical expertise to comment on Clark's description of dark current suppression technology but it is certainly true that the better CMOS sensors have much lower dark current than previous generations.  The Canon 7D mkII certainly seems to be much better than previous Canon cameras.  However Sony Exmor sensors have had this low level of dark current for a long time and so Canon is very late to the game.  Exmor sensors are found in Nikon, Sony and Pentax cameras, though this list may not be comprehensive.

I use the Sony A7S (at ambient temperatures from -5C to 20C) but I still find a combination of dithering, dark frame subtraction and sigma rejection is important to eliminate all trace of thermal pattern and so-called walking noise.

One annoying feature of many Exmor sensors is the "edge glow" which typically appears along the bottom edge of an image and in a couple of patches on the left hand edge.  This edge glow appears in bias frames (even cooled to -20C), dark frames and light frames.  There was a discussion here: https://www.dpreview.com/forums/thread/4091328 I have yet to see a satisfactory explanation of the effect but it is not dark current. In any case, the edge glow in the master bias will remove the edge glow in the master dark and so PixInsight dark frame optimisation works well - I use it all the time.
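A toy model of why the glow cancels for any optimisation factor k (synthetic numbers, not A7S data): since the same fixed glow appears identically in bias, dark and light frames, it drops out of light - bias - k * (dark - bias) regardless of k, leaving the optimisation free to fit the genuine thermal signal.

```python
import numpy as np

# The same fixed edge glow appears in bias, dark and light frames,
# so it cancels in light - bias - k * (dark - bias) for ANY k.
x = np.linspace(0.0, 1.0, 512)
glow = 50.0 * np.exp(-x / 0.05)        # bright left edge, as in bias frames
bias = 2048.0 + glow
thermal = 12.0                          # genuine dark current
dark = bias + thermal
light = bias + thermal + 300.0          # plus a flat sky signal

for k in (0.5, 1.0, 1.5):
    cal = light - bias - k * (dark - bias)
    print(k, float(np.ptp(cal)))        # ~0: perfectly flat, glow gone
```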

Mark
 
Some more fuel for the edge glow issue.  See this:
[Attached image: long-exposure Bulb-mode dark (top) vs. short-exposure M-mode bias (bottom), both from a Pentax 645z]


In the image, the top one is a long-exposure dark taken in Bulb mode, and the bottom one is a short-exposure bias taken in M mode.  Both were taken with the same Pentax 645z camera.  The bias has edge glow and the dark doesn't.  Until I found this, my 645z images could not be perfectly calibrated in PixInsight.  Once I found it, I took all exposures in Bulb mode, including flats and biases, and the calibration problem was gone.

So, if this is the same edge glow you saw, then somehow Pentax figured out a way to turn it off in Bulb mode.  It could simply be amp glow, as I saw some indication that its strength changes with exposure time (when frames are taken with different exposure times in M mode).

Cheers,
Wei-Hao
 