Author Topic: ImageIntegration and master dark  (Read 17595 times)

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
Re: ImageIntegration and master dark
« Reply #15 on: 2009 September 06 09:12:28 »
Also, do not overlook readout noise, which is added with each subframe; with 50 subframes it could be a considerable contributor to the noise of the final master calibration frame.
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
Re: ImageIntegration and master dark
« Reply #16 on: 2009 September 06 12:31:43 »
Surely the readout noise contribution to the final master calibration frame becomes smaller the more subframes you have? (1/sqrt(n))
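Both points can be reconciled numerically: each subframe does add read noise, but averaging n of them shrinks that contribution by roughly 1/sqrt(n). A toy NumPy sketch (the read-noise figure is invented, not from any camera mentioned here):

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 10.0  # hypothetical read noise in ADU per frame

for n in (1, 10, 50):
    # n bias-like frames containing only Gaussian read noise
    frames = rng.normal(0.0, read_noise, size=(n, 100_000))
    master = frames.mean(axis=0)  # average-combine into a master
    print(n, master.std())        # shrinks roughly as read_noise / sqrt(n)
```

So with 50 subframes the read-noise contribution to the master is about 1/7th of a single frame's.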

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Re: ImageIntegration and master dark
« Reply #17 on: 2009 September 21 10:57:24 »
To all,

I'm just beginning to process my first images taken with a Canon 5D Mk 2.  I would typically do the calibration in Maxim DL, but it's converting my *.dng files to gray-scale for some odd reason.  I created a master Bias frame based upon the recommendations from the table on page 1.

Are these steps correct?

1. Subtract the master Bias from each Dark frame,
2. Create a master Dark,
3. Subtract the master Bias from each Flat frame,
4. Subtract the master Dark from each Flat frame,
5. Create a master Flat,
6. Subtract the master Bias from each Light frame,
7. Subtract the master Dark from each Light frame,
8. Divide each Light frame by the master Flat,
9. Integrate the Light frames.
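The steps above can be sketched in NumPy (all frame values here are invented toy numbers; real tools truncate negatives to zero rather than rescaling, as noted later in this thread):

```python
import numpy as np

def combine(frames):
    """Median-combine a stack of frames into a master frame."""
    return np.median(np.stack(frames), axis=0)

rng = np.random.default_rng(1)
shape = (64, 64)
toy = lambda level: rng.normal(level, 2.0, shape)  # hypothetical frames

biases = [toy(100) for _ in range(9)]      # bias offset ~100 ADU
darks  = [toy(130) for _ in range(9)]      # bias + ~30 ADU thermal signal
flats  = [toy(30000) for _ in range(9)]    # uniformly lit toy flats
lights = [toy(20000) for _ in range(5)]    # toy light frames

master_bias = combine(biases)
# steps 1-2: bias-subtract each dark (clip negatives to 0), then combine
master_dark = combine([np.clip(d - master_bias, 0, None) for d in darks])
# steps 3-5: bias- and dark-subtract each flat, then combine
master_flat = combine([np.clip(f - master_bias - master_dark, 0, None)
                       for f in flats])
master_flat /= np.median(master_flat)  # normalise so step 8 preserves flux
# steps 6-8: calibrate each light
calibrated = [np.clip(l - master_bias - master_dark, 0, None) / master_flat
              for l in lights]
# step 9: integrate (registration skipped in this toy)
master_light = np.mean(np.stack(calibrated), axis=0)
```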

Does this sound correct?

Thanks,

Wade

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
Re: ImageIntegration and master dark
« Reply #18 on: 2009 September 21 11:26:26 »
Hi Wade,

Regarding Step 4: You don't usually want to subtract the MasterDark from the Flat. The Flats and Darks are usually taken at completely different temperatures and with different exposure times.

If your Flats are taken in, say, 1/10th of a second, then there's no need to subtract any Dark frame from them....they haven't had time for any dark signal to build up. If, however, you are doing say some narrow-band work and your Flats are much longer....maybe 1 minute....then the build-up of dark signal might be significant. In this case you would want to create some completely separate Darks.....ones that are at the same temperature and duration as the Flats....I think people call these DarkFlats. You then make a MasterDarkFlat and subtract that from the Flats.

But as I said, you usually don't need to do this.

Cheers
          Simon

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Re: ImageIntegration and master dark
« Reply #19 on: 2009 September 21 11:37:53 »
Simon,

Quote
If your Flats are taken in say 1/10th of a second, then there's no need to take any Dark frame from it

Excellent point. 

My Flats were 1/15 of a second so there's definitely no need to subtract a master Dark frame.  I would like to get the Flat times to around 2-3 seconds so I don't capture the shutter, but it is hard at f/4.0 and ISO 800.  :(

Thanks,

Wade

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: ImageIntegration and master dark
« Reply #20 on: 2009 September 21 13:32:42 »
Hi Wade

First of all, if you are going to perform these operations with PixelMath, take into account that you should "never" rescale the result.

So, if a bias or dark subtraction yields any negative value, it should be truncated to 0. This is the reason for applying them to individual frames instead of to the master dark or "master light": this way these abnormally out-of-range values have a lesser impact on the combined image.

Another point to note is that the darks should have the same exposure time as the lights. If not, you may rescale them by the ratio of exposure times, provided the temperature was the same.

Finally, a master flat is normalized. This means it has an average value of 1, so dividing by it won't change the overall flux. In practice, just do: $T*Median(flat)/flat
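Carlos's PixelMath expression can be sanity-checked numerically. In this toy NumPy sketch (values invented), a light frame whose vignetting exactly matches the flat comes back perfectly uniform, and the overall flux is essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
flat = rng.normal(30000.0, 1500.0, (64, 64))  # toy flat, pixel-to-pixel variation
light = 20000.0 * flat / flat.mean()          # toy light, vignetted like the flat

# Equivalent of the PixelMath expression $T*Median(flat)/flat:
corrected = light * np.median(flat) / flat
```

Multiplying by Median(flat) is what keeps the result near 20000 ADU instead of collapsing it toward 1; dividing by a raw, un-normalized flat would destroy the signal level.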

Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Re: ImageIntegration and master dark
« Reply #21 on: 2009 September 21 14:08:04 »
Carlos,

Quote
First of all, if you are going to perform this operations with PixelMath, take into account that you should "never" rescale the result.

Thanks for pointing this out.  I had planned to ask about it in my original post but forgot.  :-[

Thanks for the other points too.  I look forward to calibrating the images and seeing the results. If the image is acceptable, I'll post it in the gallery in a few days.  With this being my first time using a DSLR, it may turn out to be very bad.  :)

Wade

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: ImageIntegration and master dark
« Reply #22 on: 2009 September 21 16:04:59 »
Why don't you use DeepSkyStacker for this task? At least until it has been automated in PI :)
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: ImageIntegration and master dark
« Reply #23 on: 2009 September 21 21:15:59 »
Quote
Why don't you use DeepSkyStacker for this task? At least until it has been automated in PI :)

It will be nice when PI does this too.

Max

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: ImageIntegration and master dark
« Reply #24 on: 2009 September 23 06:44:14 »
This is the way I like to think about all of my astro data :-

Every image from my CCD contains ADU values on a pixel-by-pixel basis

Each ADU value is the result of 'summing' THREE factors:-
A value generated by the actual number of photons detected at that pixel site
A value generated by thermal activity at that pixel site
A value generated by reading out and converting the electric charge at that pixel site

The first value is really the only information of any use. It should bear a linearly scaled relationship to the actual number of photons received.

The second value should be generated whether or not any photons are detected. In other words the value would be the same whether or not the lens cover was on.

The third value is more or less constant irrespective of exposure time. It is purely a function of the electronics involved in turning the charge stored at each pixel site into a digital value that can be sent to a PC.

A typical 'Light Frame' contains all three components.

A typical 'Dark Frame' contains only the last two components.

A typical 'Bias Offset Frame' contains only the third component.

So, if a BiasOffset is subtracted from a Light, the result would be a frame that contained only the first and second components.
Similarly, if a BiasOffset was subtracted from a Dark, the result would contain only the second component.
And so, if the two steps described above were implemented, and the second result was subtracted from the first, then the result would contain 'only' the first component - which is, after all, the 'key component' that we were after in the first place !!

However, there is an easier way, due to the fact that the third 'BiasOffset' result is already present in BOTH of the first two results. So, surely all that is required is to subtract the second component from the first? In a single step the BiasOffset is eliminated, and the result is the 'key component' that we were after in the first place?
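Niall's single-step shortcut is just algebra: (Light - Bias) - (Dark - Bias) = Light - Dark, so the bias cancels without ever being measured. A toy NumPy check (component levels invented):

```python
import numpy as np

rng = np.random.default_rng(3)
photons, thermal, bias = 500.0, 40.0, 100.0  # hypothetical component levels (ADU)
noise = lambda: rng.normal(0.0, 1.0, (32, 32))

light      = photons + thermal + bias + noise()  # all three components
dark       =           thermal + bias + noise()  # no photons
bias_frame =                     bias + noise()  # readout only

two_step = (light - bias_frame) - (dark - bias_frame)  # explicit bias handling
one_step = light - dark                                # bias cancels implicitly
```

Both routes recover the photon component; the one-step route simply never touches the bias frame.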

However, I accept that the BiasOffset data can be used to 'scale' a Dark frame - assuming that a user could not be bothered to acquire a Dark frame at the same exposure setting as was used for the Light frame. However, this is NEVER an acceptable compromise for me. After all, the BiasOffset frame is (must be) 'statistically noisy' - that is just the simple nature of ALL of our data, as is the process of 'scaling' a longer-exposure Dark to suit a shorter-exposure Light. If you do this, then you ARE introducing another stage of 'statistical assumptions' - or 'noise', as it is called !! So, for me, I ALWAYS use Darks that are as close to identical to the Lights as possible in terms of exposure time and temperature. That way I can be confident that I did my best NOT to introduce any 'extra' noise.

And, if I use that approach, I need NEVER be concerned with BiasOffsets - this component is eliminated without explicit handling.

I therefore now need a method of eliminating 'statistical noise' from my set of Lights and Darks. And the only assumption that I can make is that each of my Lights should contain a Dark component that will remain, statistically, more or less 'constant' across every Light in the data set. And, if I could establish what that 'statistical' Dark component was, I could 'subtract' it from every Light - thus giving me the best chance of accessing the actual 'photonic data' that I am after.

The simple 'take a Light, then take a Dark, and then subtract the Dark from the Light' noise reduction process found in some DSLR cameras seems to be an ideal approach, and it does produce usable results. However, it is somewhat intuitive to expect better results if 'several' images were 'averaged together' - and this is indeed correct.

And the 'multiple images' approach applies equally to Lights and Darks. In other words, take lots of Darks, combine them statistically to produce a far better MasterDark, and then subtract the MasterDark from each of the Lights - to create a data set of CalibratedLights. Then align and combine the CalibratedLights to give a final MasterLight - which you then thrash into a muddy smudge with PI (well, that is how my image data always seems to end up  :'( )

I tend to always use 'Median Combine' for the creation of my MasterDark - I learned that (correctly, I hope) from HAIP/AIP4WIN. I also learned that there is a statistical improvement if I use at least 11 Darks compared to 1, 3, or 5 - and that 33 darks would be even better. So, I usually aim for 33 darks (leaving my imager running overnight, if necessary).
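A quick illustration of why median combine suits master darks: a single outlier (say, a cosmic-ray hit in one sub) drags the average but barely moves the median. Toy NumPy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
stack = rng.normal(100.0, 3.0, size=(11, 1000))  # 11 toy dark frames, 1000 px each
stack[3, 500] = 60000.0                          # one cosmic-ray hit in one frame

mean_master = stack.mean(axis=0)          # the hit leaks into the average...
median_master = np.median(stack, axis=0)  # ...but is rejected by the median
```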

And, somewhere, I read that an ODD number of darks in the final mix for Median Combine was better than an even number - so I conform to that as well (and I never post-process on a Tuesday, or if a raven flies over my observatory in the daytime). And yes, I have no idea why, or if, these requirements are compulsory - but I need ALL the luck I can get  ;D

If anybody wants a long-winded personal understanding of Flats and FlatDarks, let me know  ::)

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: ImageIntegration and master dark
« Reply #25 on: 2009 September 23 06:54:35 »
Hi Niall

You still need the bias for the flats ;)

And a quick note... sometimes it is said that darks represent not thermal noise, but thermal flux or current. It is a kind of noise, true, but statistically it is very predictable and follows the same pattern each time. That's why we create a master dark: to average out the pure Gaussian noise that comes with every readout of the sensor. And this is the very reason why scaling darks works.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: ImageIntegration and master dark
« Reply #26 on: 2009 September 23 09:28:31 »
Quote
You still need the bias for the flats

OK - on purpose, I ignored the thorny issue of Flats in my last post.

However, I consider the 'calibration' of Flats to be an IDENTICAL process to the calibration of Lights. After all, Flats ARE 'Lights' - it is just that they are exposed to a uniform light source, not the astronomical object itself.

In other words I calibrate Flats, and FlatDarks in EXACTLY the same way as I calibrate Lights and (Light)Darks - and I absolutely do NOT therefore need BiasOffsets.

Yes, you 'could' use BiasOffsets in place of FlatDarks - but only if your Flat(Light) exposure times are MUCH shorter than normal - leaving the FlatDark with no effective thermal noise in the first place.

The 'subtraction' of a MasterFlatDark from a Flat precisely eliminates the thermal Dark and readout BiasOffset components from the Flats - leaving ONLY the photonic component for further processing. And, because no image registration is required, you ONLY need ImageIntegration - and for that I use a very simple non-weighted AVERAGE COMBINE with no noise reduction and no pixel rejection.

I do apply a normalisation process once the (median-combined) MasterFlatDark has been subtracted from every Flat, and once these DarkSubtractedFlats have been (average) combined into a MasterFlat: the image is linearly rescaled such that the Median ADU value equals 1.0000 - finally giving a NormalisedMasterFlat.

This NormalisedMasterFlat is then DIVIDED into every DarkSubtractedLight, and then every FlatDividedDarkSubtractedLight is passed to StarAlignment and then to ImageIntegration (in my case I also have to CMYK deBayer prior to StarAlignment).

Fundamentally I am currently using AIP4WIN to execute all the steps up to the creation of my FlatDividedDarkSubtractedLights. AIP4WIN cannot deBayer my RAW DSI-IIC CMYK images - I had to write that code myself, in PJSR, because I didn't like anybody else's code (other than Nebulosity, which does now work at least as well as my script, providing you can find out HOW to make it work !!!)

I then take my FD_DS_Lights into PI, throw them at my CMYK deBayer PJSR, then take the output files from that process and throw them at StarAlignment, and then take the files created from THAT step into ImageIntegration.

As I said earlier, I usually then just delete the final image after spending no less than three days swearing at it and cursing all of you guys for being 'better' than me when it comes to using PI   :yell: :yell: :yell:

So - given my further explanations above - can someone explain to me how, and why, (or 'if') I should still be using BiasOffsets in my processing? (By the way, I DO - nearly always - take 60 BiasOffset frames anyway - just in case I ever learn that I SHOULD be using them  :angel: )

My current feeling is that they are ONLY needed if you have to scale a set of temperature-matched darks to fit with different exposure times.

HTH
Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: ImageIntegration and master dark
« Reply #27 on: 2009 September 23 11:54:34 »
<vbg> Of course, if you follow that procedure, you don't need bias frames. But it is quite time consuming, isn't it? ;) Also, you have to control the temperature very carefully, to avoid substantial differences between darks and lights.
(Having said that, I'm almost as paranoid as you with image calibration.)

BTW, I spend nearly three days with my images too ;)
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: ImageIntegration and master dark
« Reply #28 on: 2009 September 23 11:56:18 »
You don't have to take dark frames for flats.
Unless your flats are minutes or more, there should not be much dark current to affect the final image.
Perhaps this is not true with a warm DSLR, but in cooled cameras at < 30 sec the bias is about the same.

It is much more convenient, and really just as accurate, to use a master bias than to make master darks for each filter exposure.

Remember, the magnitude of the bias/flat-dark difference in ADUs is very small compared to the Flat at 20-40K ADUs.

You can even do very well by just subtracting the average value of a single bias frame from the flat before dividing the flat into the light frames.

I use a master bias for many weeks. No flat darks here. I am just as neurotic as everyone else here. The math, and the experience of most imagers, support this approach.

Max


Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: ImageIntegration and master dark
« Reply #29 on: 2009 September 23 12:54:41 »
Hi Max, Carlos,

And now I am beginning to see some of my beliefs held by others as well.

When I am taking Flats, I am hoping to get ADU values to peak around the 50% full-scale area (i.e. around 32,000 ADU). When I am using my light-box on my Moonfish ED80, this is easy - I can easily adjust the 12V source that feeds the single torch bulb inside the baffle assembly. But I do this whilst still trying to ensure exposure times of around 2 seconds.

Why 2 seconds?

Well, it seemed to me that 2s exposures actually DO have a noticeable thermal component in them. They are 'long' enough to acquire 'some' thermal noise. And, consequently, this noise can be effectively eliminated by careful acquisition of FlatDarks - at the same temperature, and using the same 2s exposure time. And, the acquisition method stands me in good stead, by way of repetitive practice, when I go back to HaLRGB image acquisition using my DSI-IIPro and filter wheel. After all, the acquisition of Flats through the Ha filter will be significantly longer than 2s (!!), so will DEFINITELY need FlatDarks to be used.

Recently though, whilst imaging at 2000mm FL through the main LX90 OTA, without an adjustable lightbox (I just use layers of white Tee-shirt and crumpled white bin-liner - not yet having been convinced as to the effectiveness of this approach, but still experimenting), and using the twilight sky as an illumination source, I found that my exposure times were FAR, FAR shorter than 2s.

So, I probably could have just used BiasOffsets instead of FlatDarks - they probably contained the same data (statistically, anyway). In any case, both BiasOffsets and (short-exposure) FlatDarks are equally easy to obtain. Neither really takes any 'longer' to acquire - the main time factor simply being the transfer download time - NOT the exposure time.

What I should have done is to have taken the BiasOffsets (I know that I said earlier that I 'always do' - but I confess to momentary lapses into 'utter laziness') and compared them to the slightly longer FlatDarks. In fact it might make for a mildly interesting experiment - for those evenings where watching paint dry just becomes too exciting. I would expect that, given fixed temperatures, there is a point in the exposure-time axis beyond which the Median value of a lens-capped exposure (call it BiasOffset OR FlatDark at this stage) does start to increase away from a base-line. Which would mean that, from that exposure time onwards, any Flats acquired would require true FlatDarks to be subtracted, not just BiasOffsets.

But, I avoid all of that by just sticking to 2 second exposures for the moment - where dark current definitely IS a factor, and is a factor that can (theoretically) be removed by 'standard' statistical methods.

Which does then leave one question, a point that I raised some weeks ago, and a point that someone (was it you, Simon?) suggested would not work, but a point that you, Max, seem to agree with me on. And that is whether it is acceptable to simply use a 'synthetic' MasterBiasOffset frame - i.e. a frame that relies on a SINGLE ADU value, for ALL pixels, where the ADU value chosen is the Median value of a statistically valid set of 'actual' BiasOffset frames?

Certainly, when I have implemented either a Median combine or an Average combine of, say, thirty-odd BiasOffsets, and then analysed the resultant MasterBiasOffset, the Standard Deviation of the ADU values in the master frame has been so negligible as to suggest that using a single ADU value would be perfectly acceptable.
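Niall's 'synthetic' MasterBiasOffset idea can be sketched like this (toy NumPy numbers; note this only holds when the bias has no fixed-pattern structure, which is exactly what his Standard Deviation check is probing):

```python
import numpy as np

rng = np.random.default_rng(5)
offset, read_noise = 1000.0, 8.0  # hypothetical bias level and read noise (ADU)
biases = rng.normal(offset, read_noise, size=(33, 100_000))  # 33 toy bias frames

master_bias = np.median(biases, axis=0)  # conventional per-pixel master
synthetic = np.median(master_bias)       # single ADU value for ALL pixels

residual = master_bias.std()  # if negligible, the synthetic value suffices
```

A real sensor with column banding or a bias gradient would show up as a large `residual`, and there the per-pixel master is still needed.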

However, I have never pursued this, as I don't actually 'use' MasterBiasOffsets in my calibration routines at all. From memory, I seem to recall that HAIP suggests that this 'synthetic' approach to MasterBiasOffsets is perfectly acceptable - and may even be 'better' than an 'acquired' master frame.

Can anybody put forward arguments for, or against, the idea? Does anybody have experience with this approach?

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC