Hi All,
I know that for objects like M42 and M31, HDR techniques are common and easy to do with Pixi. I have been thinking recently that it makes sense to extend this process to the stars themselves. My logic is this:
To obtain colour in stars, they cannot be overexposed, or else they will have a value of 65535 in each channel and will therefore be white. So we must not overexpose stars, which is a pretty tough job with very bright stars and long integration times.

The sensor itself plays a critical role here: a star must be bright enough to reach a reasonable SNR without saturating the pixel. The usual metric for this is full well / read noise, but I feel that is overly optimistic. The conventional definition of 'detected' is an SNR of 3, and at an SNR of 3, assuming the only noise sources are read noise and photon shot noise, the minimum detectable signal is a lot more than the read noise. My camera has 10 e- read noise and roughly a 100 ke- full well. Solving SNR = S / sqrt(S + 10^2) = 3 for the signal S gives around 35 e-, which results in a more realistic dynamic range of 100k/35 ≈ 2900. If you express that in magnitudes (as compared to dB) using m = 2.5·log10(DR), it corresponds to about 8.7 magnitudes.

That's not really very much at all, considering long exposures may reveal stars fainter than 15th magnitude, while there are stars around mag 2 in this picture perhaps. So surely it would be nice to use the HDR tools to keep the stars from reaching saturation and thus preserve accurate colour for many more stars.
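To make the arithmetic concrete, here is a quick Python sketch (illustrative only, plugging in my camera's numbers) that solves SNR = S / sqrt(S + RN^2) = 3 for the minimum signal and converts the resulting dynamic range to magnitudes:

import math

read_noise = 10.0       # e-, camera read noise
full_well  = 100000.0   # e-, approximate full well depth
target_snr = 3.0        # conventional detection threshold

# SNR = S / sqrt(S + RN^2) = k  =>  S^2 - k^2*S - k^2*RN^2 = 0
k2 = target_snr ** 2
min_signal = (k2 + math.sqrt(k2 ** 2 + 4 * k2 * read_noise ** 2)) / 2

dynamic_range = full_well / min_signal
magnitudes = 2.5 * math.log10(dynamic_range)

print(f"minimum detectable signal: {min_signal:.1f} e-")  # ~34.8 e-
print(f"dynamic range: {dynamic_range:.0f}")              # ~2870
print(f"in magnitudes: {magnitudes:.1f}")                 # ~8.6 (8.7 with rounding)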
As a test I took shots of the Hyades at 2 s, 60 s and 5 min. I calibrated them (no flats were available) and then ran them through the HDRComposition tool, but unfortunately the output was not pretty.
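For reference, the effect I was after is conceptually something like this naive numpy sketch (just the basic idea, not PixInsight's actual HDRComposition algorithm, and the names and parameters here are made up): scale the short exposure by the exposure-time ratio and substitute it wherever the long exposure has clipped.

import numpy as np

def hdr_combine(long_img, short_img, t_long, t_short, sat_level=0.98):
    """Naive linear HDR merge of two calibrated, registered, linear frames.

    long_img, short_img : float arrays normalised to [0, 1]
    t_long, t_short     : exposure times in seconds
    sat_level           : fraction of full scale treated as saturated
    """
    # Scale the short exposure up to the long exposure's flux level
    scaled_short = short_img * (t_long / t_short)

    # Wherever the long exposure has clipped (bright star cores),
    # take the value from the scaled short exposure instead
    saturated = long_img >= sat_level
    return np.where(saturated, scaled_short, long_img)

Presumably the real tool does something much smarter around the transition regions (feathering rather than a hard switch), which may be where my attempt is going wrong.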
Do any of you PixInsight gurus know how to deal with this issue? Has anyone tried this?
Thanks
Paul