This is the way I like to think about all of my astro data:
Every image from my CCD contains ADU values on a pixel-by-pixel basis
Each ADU value is the result of 'summing' THREE factors:

- A value generated by the actual number of photons detected at that pixel site
- A value generated by thermal activity at that pixel site
- A value generated by reading out and converting the electric charge at that pixel site
The first value is really the only information of any use, and it should scale linearly with the actual number of photons received.
The second value is generated whether or not any photons are detected - in other words, it would be the same whether or not the lens cover was on - and it grows with exposure time and sensor temperature.
The third value is more or less constant irrespective of exposure time. It is purely a function of the electronics involved in turning the charge stored at each pixel site into a digital value that can be sent to a PC.
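To make that model concrete, here is a toy numpy sketch (all names and numbers are invented purely for illustration - they are not real camera values):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                                    # a pretend 4x4 sensor

photon_signal = rng.poisson(100.0, shape)         # component 1: detected photons
dark_current  = rng.poisson(20.0, shape)          # component 2: thermal activity
read_out      = 500 + rng.normal(0, 3, shape)     # component 3: read-out offset/noise

light_frame = photon_signal + dark_current + read_out                   # all three
dark_frame  = rng.poisson(20.0, shape) + 500 + rng.normal(0, 3, shape)  # 2 + 3 only
bias_frame  = 500 + rng.normal(0, 3, shape)                             # 3 only
```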
A typical 'Light Frame' contains all three components.
A typical 'Dark Frame' contains only the last two components.
A typical 'Bias Offset Frame' contains only the third component.
So, if a BiasOffset is subtracted from a Light, the result is a frame that contains only the first and second components.
Similarly, if a BiasOffset is subtracted from a Dark, the result contains only the second component.
And so, if the two steps described above were implemented, and the second result was subtracted from the first, then the result would contain 'only' the first component - which is, after all, the 'key component' that we were after in the first place !!
However, there is an easier way, because that third 'BiasOffset' component is already present in BOTH of the first two frames. So, surely, all that is required is to subtract the Dark from the Light? In a single step the BiasOffset is eliminated, and the result is the 'key component' that we were after in the first place.
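Here is a toy numpy sketch of that algebra (random arrays standing in for real frames, with invented numbers) showing that the two-step and single-step routes give identical results:

```python
import numpy as np

# Toy frames, purely to demonstrate the algebra - not real camera data.
rng = np.random.default_rng(1)
light = rng.normal(620.0, 5.0, (4, 4))   # Light: components 1 + 2 + 3
dark  = rng.normal(520.0, 5.0, (4, 4))   # Dark: components 2 + 3
bias  = rng.normal(500.0, 3.0, (4, 4))   # BiasOffset: component 3

two_step    = (light - bias) - (dark - bias)   # the long way round
single_step = light - dark                     # the shortcut

assert np.allclose(two_step, single_step)      # algebraically identical
```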
However, I accept the fact that BiasOffset data can be used to 'scale' a Dark frame - assuming that a user could not be bothered to acquire a Dark frame at the same exposure setting as was used for the Light frame. This is NEVER an acceptable compromise for me. After all, the BiasOffset frame is (must be) 'statistically noisy' - that is just the simple nature of ALL of our data - and so is the process of 'scaling' a longer-exposure Dark to suit a shorter-exposure Light. If you do this, then you ARE introducing another stage of 'statistical assumptions' - or 'noise', as it is called !! So, for me, I ALWAYS use Darks that are as close to identical to the Lights as possible in terms of exposure time and temperature. That way I can be confident that I did my best NOT to introduce any 'extra' noise.
And, if I use that approach, I need NEVER be concerned with BiasOffsets - this component is eliminated without explicit handling.
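For completeness, here is a sketch of the scaling recipe I just argued against, as I understand it (the function name and all numbers are my own, purely illustrative): remove the bias, scale the thermal remainder by the exposure-time ratio, then add the bias back - and note that every step carries the bias frame's noise along with it.

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    # Hypothetical helper (my naming, not from any package): isolate the
    # thermal component, scale it by the exposure-time ratio, then re-add
    # the (noisy) bias.
    thermal = master_dark - master_bias
    return master_bias + thermal * (t_light / t_dark)

# Toy usage: a 600 s MasterDark scaled down to suit a 300 s Light.
md = np.full((4, 4), 560.0)    # pretend MasterDark (bias 500 + thermal 60)
mb = np.full((4, 4), 500.0)    # pretend MasterBias
print(scale_dark(md, mb, t_dark=600.0, t_light=300.0))   # 530 = bias 500 + thermal 30
```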
I therefore need a method of eliminating 'statistical noise' from my set of Lights and Darks. The only assumption I can make is that each of my Lights contains a Dark component that remains, statistically, more or less 'constant' across every Light in the data set. And, if I could establish what that 'statistical' Dark component was, I could subtract it from every Light - giving me the best chance of accessing the actual 'photonic data' that I am after.
The simple 'take a Light, then take a Dark, and then subtract the Dark from the Light' noise-reduction process found in some DSLR cameras seems an ideal approach, and it does produce usable results. However, it is intuitive to expect better results if several images are averaged together - and this is indeed correct: averaging N frames reduces the random noise by roughly a factor of the square root of N.
And the 'multiple images' approach applies equally to Lights and Darks. In other words: take lots of Darks, combine them statistically to produce a far better MasterDark, and then subtract the MasterDark from each of the Lights to create a data set of CalibratedLights. Then align and combine the CalibratedLights to give a final MasterLight - which you then thrash into a muddy smudge with PI (well, that is how my image data always seems to end up ![Cry :'(](http://pixinsight.com/forum/Smileys/default/cry.gif) )
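In code form, the core of that workflow might look like this (a minimal numpy sketch with my own function names and random toy frames; the align-and-combine step is left to real registration tools):

```python
import numpy as np

def calibrate(lights, darks):
    # Median-combine the Darks into a MasterDark, then subtract it from
    # every Light. Names are my own; lights/darks are lists of 2-D arrays.
    master_dark = np.median(np.stack(darks), axis=0)
    return [light - master_dark for light in lights]

# Toy usage with random frames standing in for real data.
rng = np.random.default_rng(2)
darks  = [rng.normal(520.0, 5.0, (4, 4)) for _ in range(33)]
lights = [rng.normal(620.0, 5.0, (4, 4)) for _ in range(10)]
calibrated_lights = calibrate(lights, darks)
```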
I tend to always use 'Median Combine' for the creation of my MasterDark - I learned that (correctly, I hope) from HAIP/AIP4WIN. And I also learned that there is a statistical improvement in using at least 11 Darks compared to 1, or 3, or 5 - and that 33 Darks would be even better. So, I usually aim for 33 Darks (leaving my imager running overnight, if necessary).
And, somewhere, I read that an ODD number of Darks in the final mix for Median Combine is better than an even number - so I conform to that as well (and I never post-process on a Tuesday, or if a raven flies over my observatory in the daytime). And yes, I have no idea why, or even if, these requirements are compulsory - but I need ALL the luck I can get
![Grin ;D](http://pixinsight.com/forum/Smileys/default/grin.gif)
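(Though if I had to guess at a reason for the odd-number rule: with an odd stack the median is an actual recorded sample, while with an even stack it is the mean of the two middle values - a value no frame actually produced. A tiny demonstration, with invented numbers:)

```python
import numpy as np

odd  = np.array([10, 12, 11, 50, 9])   # 5 samples; 50 is an outlier (cosmic ray?)
even = np.array([10, 12, 11, 50])      # 4 samples

print(np.median(odd))    # 11.0 -> an actual recorded value
print(np.median(even))   # 11.5 -> mean of 11 and 12, a value no frame produced
```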
If anybody wants a long-winded personal understanding of Flats and FlatDarks, let me know
![Roll Eyes ::)](http://pixinsight.com/forum/Smileys/default/rolleyes.gif)
Cheers,