Author Topic: Evaluating images for noise, signal and more?  (Read 5200 times)

Offline chrisvdberge

  • PixInsight Addict
  • ***
  • Posts: 104
Evaluating images for noise, signal and more?
« on: 2013 September 11 03:00:00 »
I'm trying to decide for myself what the best ISO is for me to use my Nikon D7000 DSLR.
Now I'm wondering what the best way is to do so.

I looked at 2 frames with the same exposure length on the same object but one with ISO800 and the other with ISO1600.
First I looked at star sizes, and as expected the stars are smaller with ISO800.
Secondly I used the NoiseEvaluation Script to check the noise, and ISO800 has almost half the noise of ISO1600.

However, I'm wondering if this is the correct way of judging the images, since I'm only looking at noise right now; surely I should be looking at the signal as well? If ISO1600 gave me more than twice the signal, it would still be worth using over ISO800, wouldn't it?

Furthermore I'm wondering if there is a way to check the number of clipped pixels in the original frames? I'm particularly interested in keeping colors in stars, so being able to see this would be very helpful.

Any other characteristics I should be looking at to judge what the best ISO and exposure time would be?

Offline Physicist13

  • Newcomer
  • Posts: 36
Re: Evaluating images for noise, signal and more?
« Reply #1 on: 2013 September 11 04:24:11 »
have a look at Craig Stark's website (Stark Labs). He has a detailed method for evaluating the noise and gain, from which you get the SNR.

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: Evaluating images for noise, signal and more?
« Reply #2 on: 2013 September 11 09:48:11 »
well, the only way to get more signal is to go to a lower f/ ratio or make longer exposures. both put more photons per unit area onto your sensor.

the issue of ISO in DSLRs can be complicated. in the end, though, changing the ISO does not change the number of photons that hit the sensor, which is the only thing that can increase SNR "for real". the main reason to choose a particular ISO is to be where the gain (electrons/DN) is nearest to 1. in other words, if you are at a low ISO it may take tens of electrons (or more) to register as a single increment of the output of the analog-to-digital converter. in this situation, you are basically wasting electrons. also, as the ISO goes down the sensor read noise destroys more of your data, since your DSO data is crammed into the least significant bits of the output.
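here's a quick python sketch of that quantization effect. all the numbers are made up for illustration (3 e- read noise, a faint 12 e-/pixel signal, gains of 10 vs 1 e-/DN) - they're not from any particular camera:

```python
import numpy as np

# Simulate digitizing a faint signal at two different gains (e-/DN).
# At a coarse gain, the A/D step swallows whole groups of electrons,
# adding quantization noise on top of shot noise and read noise.
rng = np.random.default_rng(0)
signal_e = rng.poisson(12, 100_000).astype(float)   # shot noise included
read_e = rng.normal(0.0, 3.0, 100_000)              # read noise, electrons

stats = {}
for gain in (10.0, 1.0):                            # gain in e-/DN
    dn = np.floor((signal_e + read_e) / gain)       # A/D quantization
    recovered = dn * gain                           # convert back to e-
    stats[gain] = (recovered.mean(), recovered.std())
    print(f"gain {gain:>4.0f} e-/DN: mean {stats[gain][0]:5.1f} e-, "
          f"std {stats[gain][1]:4.1f} e-")
```

the coarse-gain case comes out noisier even though the photons hitting the sensor are identical in both runs.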

the rule of thumb about getting the back-of-camera histogram "well-detached" from the left edge of the graph is intended to help you get your DSO data past the point where the read noise is clobbering the data.
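a rough sketch of the idea behind that rule of thumb (the read noise and sky rate here are assumptions, not a real camera): once the shot noise of the sky background well exceeds the read noise, the read-noise penalty on the total noise shrinks toward 1x.

```python
import math

read_noise_e = 8.0      # assumed read noise at a low ISO, electrons RMS
sky_flux = 2.0          # assumed sky background rate, e-/pixel/second

penalties = []
for t in (10, 30, 60, 120, 240, 600):
    sky_noise = math.sqrt(sky_flux * t)                 # sky shot noise
    total = math.sqrt(sky_noise**2 + read_noise_e**2)   # add read noise
    penalties.append(total / sky_noise)
    print(f"{t:>4} s: sky noise {sky_noise:5.1f} e-, "
          f"penalty {penalties[-1]:5.2f}x")
```

by the time the sub is long enough, read noise is a few percent of the total and the histogram has moved well off the left edge.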

there are arguments for using ISOs that represent less than unity gain (lower ISOs): at higher ISO, the dynamic range is smaller. so you may saturate stars more quickly. also if you have a lot of light pollution, your exposures might saturate too early to capture any DSO signal at higher ISOs. in other words, you're forced into short exposures by the skyglow, so you have to reduce ISO and expose longer in order to register anything meaningful.
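a back-of-envelope sketch of that skyglow/saturation tradeoff (the full well and sky rate are assumptions): doubling the ISO halves the electron count that maps to ADC full scale, so the sky alone clips the exposure twice as fast.

```python
full_well_e = 40_000    # assumed full-scale electron count at base ISO
sky_flux = 50.0         # assumed skyglow rate, e-/pixel/second

t_sat = {}
for iso, frac in ((200, 1.0), (400, 0.5), (800, 0.25), (1600, 0.125)):
    clip_e = full_well_e * frac             # electrons at ADC full scale
    t_sat[iso] = clip_e / sky_flux          # seconds until sky alone clips
    print(f"ISO {iso:>4}: full scale {clip_e:>7.0f} e-  ->  {t_sat[iso]:5.0f} s")
```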

the arguments for using ISOs above unity gain would be if your exposures need to be so ridiculously short (like using a camera lens on a fixed tripod) that you would capture no DSO signal at all at an ISO near unity gain.

the thing to watch out for is that past a certain point, the ISO control stops being an analog gain control and becomes a digital gain control. in that situation you are just losing data since the camera is simply multiplying the output of the A/D converter by some fixed number - probably just shifting the bits to the left. you can do better than that in postprocessing software.
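a tiny sketch of why that digital "gain" adds no information: multiplying the A/D output by 2 (a left shift) produces only even values, so the number of distinct levels does not increase.

```python
adc = list(range(8))               # pretend 3-bit ADC output: 0..7
boosted = [v << 1 for v in adc]    # the camera's digital x2 "gain"
print(adc)      # original codes
print(boosted)  # same count of distinct levels, just spread out
```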

besides craig stark's site, roger clark's website has a lot of information on this topic.

rob

Offline NGC7789

  • PixInsight Old Hand
  • ****
  • Posts: 391
Re: Evaluating images for noise, signal and more?
« Reply #3 on: 2013 September 11 10:20:44 »
I have found this writeup very helpful in understanding the tradeoffs between exposure length, ISO, etc., and the potential for imaging in light-polluted skies.

http://www.samirkharusi.net/sub-exposures.html

Samir has many other interesting articles and reviews on his site too.

Offline chrisvdberge

  • PixInsight Addict
  • ***
  • Posts: 104
Re: Evaluating images for noise, signal and more?
« Reply #4 on: 2013 September 11 11:54:08 »
Thx all for the comments on signal/noise. I understand (also based on this article/test http://www.clarkvision.com/articles/digital.sensor.performance.summary/ ) that the best ISO to choose is where the 'unity gain' of the camera is. However, the part about 12-bit/14-bit cameras and the read-out noise being lower at higher ISO gets me a bit confused.

Furthermore, I'm still interested to see if there is a way to see the amount of 'clipped' pixels (per channel)?
I also tried ImageJ to analyze a RAW sub, but it only shows me values up to 255. Why is that? Shouldn't this be 14-bit?

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: Evaluating images for noise, signal and more?
« Reply #5 on: 2013 September 11 15:07:31 »
well, i don't know about ImageJ or what it's doing. i suppose it all comes down to this: different software packages probably do different things with the raw data from the file. for instance, knowing that the particular camera model is 12- or 14-bit, the software may rescale the data such that 12'b1111_1111_1111 or 14'b11_1111_1111_1111 is mapped to 1.0 since you'll never see a value higher than either of those in the raw data.

pixinsight does not do this - if you open a 14-bit CR2 or NEF as raw or raw cfa and then debayer it with the Debayer process, a completely overexposed pixel will have the value 0.25 (0x3fff/0xffff).

which leads me to how to find what pixels are saturated in your image; if you open pixelmath and put in the expression
Code:
iif($T>=0.25,1,0)

and set it up to create a new greyscale image, then drag the triangle to your target image, you'll get a map of all the pixels that were blown out (assuming a 14-bit camera). you can of course lower the value you're comparing against - it's likely that pixels that are near saturation are in the non-linear area of their response, so the data there is corrupted as well.
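for counting (rather than just mapping) the clipped pixels per channel, here's a rough numpy analogue of that PixelMath expression. the RGGB Bayer layout and the 14-bit full scale are assumptions - check your own camera - and a synthetic frame stands in for real raw data so the sketch is self-contained:

```python
import numpy as np

# Fake 14-bit CFA frame with a known patch of clipped pixels planted in
# the corner; random values run 0..16382 so only the patch is clipped.
rng = np.random.default_rng(1)
cfa = rng.integers(0, 16383, size=(100, 100))
cfa[:5, :5] = 16383

sat = 16383                                # 14-bit full scale in raw DN
mask = np.where(cfa >= sat, 1, 0)          # like iif($T>=0.25,1,0)
channels = {"R": cfa[0::2, 0::2], "G1": cfa[0::2, 1::2],
            "G2": cfa[1::2, 0::2], "B": cfa[1::2, 1::2]}
counts = {name: int((ch >= sat).sum()) for name, ch in channels.items()}
print("total clipped:", int(mask.sum()), "per channel:", counts)
```

note the threshold is 16383 here because this operates on raw DN; in pixinsight's normalized [0,1] data the same cutoff is the 0.25 described above.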

rob