Author Topic: How Does One Interpret the Noise Evaluation Statistics in ImageIntegration?  (Read 13882 times)

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
To all,

Is there a tutorial or forum response that describes how to interpret the noise evaluation data in ImageIntegration?  For example, when tweaking the sigma values, what final statistics should I be focusing on in determining whether or not I have chosen good sigma values?  In other words, what's the iterative procedure to get the best results from my data?

Thanks,

Wade

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Hi Wade,

Noise evaluation tells you how well the integration process improves the signal-to-noise ratio in the result. A typical noise evaluation report looks like this (in this case after integration of 19 images):

Gaussian noise estimates:
ss = 1.206e-04

Reference SNR increments:
Dss0 = 3.8170

Average SNR increments:
Dss = 3.6298

The key values here are Dss0 and Dss, respectively the SNR increments with respect to the reference image (the first one in the list of integrated images) and to the whole set, taking as reference the average of all noise estimates. In this case, we have obtained an average SNR increment of about 3.63, the theoretical value being Sqrt(19) = 4.36. Due to pixel rejection, and to the fact that noise is not constant across the whole data set, achieving the theoretical SNR increment of Sqrt(N) is impossible with real data. Noise evaluation helps you to optimize your pixel rejection parameters in order to achieve the best possible result.
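
If you want to see where the Sqrt(N) figure comes from, here is a minimal sketch in plain JavaScript (the numbers are simulated, not taken from any real integration; console.writeln is the PJSR console call, so you could paste this into the Script Editor, or replace it with any other print function):

// Simulate N frames of identical, pure Gaussian noise, average them, and
// compare the noise before and after. The measured increment approaches
// Sqrt(N) only because every frame has the same noise and nothing is
// rejected -- conditions that real data never meet.
function gaussian() // Box-Muller transform
{
   var u = 1 - Math.random(); // in (0,1], avoids log(0)
   var v = Math.random();
   return Math.sqrt( -2*Math.log( u ) )*Math.cos( 2*Math.PI*v );
}
function stdDev( a )
{
   var m = 0, s = 0;
   for ( var i = 0; i < a.length; ++i ) m += a[i];
   m /= a.length;
   for ( var i = 0; i < a.length; ++i ) s += (a[i] - m)*(a[i] - m);
   return Math.sqrt( s/(a.length - 1) );
}
var N = 19, pixels = 100000;
var sum = new Array( pixels ), single = new Array( pixels );
for ( var p = 0; p < pixels; ++p ) sum[p] = 0;
for ( var n = 0; n < N; ++n )
   for ( var p = 0; p < pixels; ++p )
   {
      var x = gaussian();
      if ( n == 0 ) single[p] = x;
      sum[p] += x;
   }
var average = sum.map( function( x ) { return x/N; } );
console.writeln( "Measured SNR increment : " + stdDev( single )/stdDev( average ) );
console.writeln( "Theoretical Sqrt(N)    : " + Math.sqrt( N ) );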
« Last Edit: 2010 June 10 01:00:32 by Juan Conejero »
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
OK, I need a bit more information of a practical nature (at least for me). Suppose I run ImageIntegration with a certain set of settings and get numbers for Dss0 and Dss, and then change the settings or rejection and get a new set. What is the most important of these numbers to use to evaluate if I have improved my S/N of the final integrated image?
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Quote
What is the most important of these numbers to use to evaluate if I have improved my S/N of the final integrated image?

Both of them provide essentially the same information: a measurement of the achieved SNR improvement. What you want is to achieve the largest possible SNR improvement while still performing the necessary rejection of cosmic rays, plane and satellite trails, etc.

Dss0 provides a measurement of SNR improvement with respect to a "fixed" point: the reference image, which is always the first one in the list. For example, you can select the best image (by some definition of 'best') of a set as the reference. Dss provides an SNR measurement with respect to the whole set, on average, which is probably the most realistic approach. I personally prefer Dss.

As far as we have tested, these are some practical guidelines for achieving the best SNR improvement with optimal pixel rejection:

- Combination = Average provides the highest SNR improvement. Median combination is in general not recommended; median, minimum and maximum combinations have been implemented to solve very specific problems but not for production use.

- Normalization = Additive is the best option for normal light frames. Multiplicative is only required to integrate flat frames.

- Weights = Noise evaluation is the most accurate option. This weighting method utilizes a high-precision wavelet-based noise evaluation algorithm to compute optimal image weights. In general this weighting method will lead to the highest SNR improvements in the final result.

- Rejection = Winsorized Sigma Clipping is the best option for large sets (say > 8 or > 10 images).

- Rejection = Percentile Clipping is the best option for small sets (<= 5 images).

- Rejection = Averaged Sigma Clipping (Poisson model rejection) is good for sets of any size. For moderate to large sets, sigma clipping tends to be superior.

- Rejection normalization = Scale + Zero Offset is the best option for calibrated (flat-fielded) images without strong sky gradients. To integrate flat frames the Equalize Fluxes method should be used. The same applies to normal images with strong and dissimilar gradients. However, in such cases the correct procedure is fixing the gradients before the integration.

- Note that ImageIntegration implements asymmetric pixel rejection. This means that you can optimize clipping thresholds for low and high pixels independently.

- Now the goal is to find the largest clipping factors that perform the required rejection of artifacts. In other words, we want to reject just the spurious data without damaging significant data, as far as possible. This should be implemented as an iterative procedure (see the sketch after this list).

- To speed up the process, you can select a region of interest while you are trying out clipping parameters.

- Watch the final SNR estimates and try to achieve an optimal combination of good SNR + good rejection.
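
To make the iterative procedure concrete, here is a minimal bookkeeping sketch in JavaScript. All the numbers are invented, and the script does not drive ImageIntegration itself: you would run the process by hand for each trial and copy the reported values into the table, together with a yes/no judgement on whether the artifacts were still fully rejected.

// Hypothetical trials: sigma thresholds, the Dss reported by ImageIntegration,
// the total clipped percentage, and whether trails/cosmic rays were rejected.
var trials = [
   { sigmaLow: 4.0, sigmaHigh: 3.0, Dss: 3.41, clippedPct: 2.10, artifactsGone: true  },
   { sigmaLow: 4.0, sigmaHigh: 3.5, Dss: 3.55, clippedPct: 1.20, artifactsGone: true  },
   { sigmaLow: 4.0, sigmaHigh: 4.0, Dss: 3.63, clippedPct: 0.70, artifactsGone: true  },
   { sigmaLow: 4.0, sigmaHigh: 4.5, Dss: 3.66, clippedPct: 0.40, artifactsGone: false }
];
// Keep only the trials that still reject the artifacts, then take the one with
// the largest SNR increment (ties broken by fewer clipped pixels).
var best = trials.filter( function( t ) { return t.artifactsGone; } )
                 .sort( function( a, b ) { return (b.Dss - a.Dss) || (a.clippedPct - b.clippedPct); } )[0];
console.writeln( "Best trial: sigma high = " + best.sigmaHigh + ", Dss = " + best.Dss );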
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Quote
What is the most important of these numbers to use to evaluate if I have improved my S/N of the final integrated image?

- Rejection normalization = Scale + Zero Offset is the best option for calibrated (flat-fielded) images without strong sky gradients. To integrate flat frames the Equalize Fluxes method should be used. The same applies to normal images with strong and dissimilar gradients. However, in such cases the correct procedure is fixing the gradients before the integration.

1. I imaged the comet last night. Dawn was coming on fast. How would you integrate these subs?

2. I assume the lower the Gaussian noise estimate, the better. Too bad you don't have Dss versus the theoretical value.

Max

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Hi Max,

Quote
1. I imaged the comet last night. Dawn was coming on fast. How would you integrate these subs?

Then you probably have dissimilar sky gradients. The best way to integrate them is to fix the gradients on each individual frame prior to the integration. This may sound like a huge amount of work, but it isn't. Just define a good DBE instance for one of your images, save it as an icon on the workspace, and reuse it for the rest of the frames. It may be a matter of half an hour.

The alternative is to try the default Scale + Zero Offset normalization and, if it doesn't provide the required rejection, change to Equalize Fluxes. However, you won't achieve the same results as you would by removing all the gradients before the integration.

Quote
2. I assume the lower the Gaussian noise estimate, the better. Too bad you don't have Dss versus the theoretical value.

Indeed, the lower the noise estimate, the better. Good point. I can add this feature to the next version of ImageIntegration.

You can use the "j" command to compute the ratio between the achieved and theoretical SNR improvements. For example, if you have integrated 10 images and the estimated SNR improvement is 2.8, you'd enter this command from PI's Processing Console:

j 2.8/Math.sqrt(10)
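
That expression evaluates to about 0.885, i.e. the integration would have achieved roughly 89% of the theoretical Sqrt(10) ≈ 3.16 improvement.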

Let us see how that comet image turns out!
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Juan,

Thanks!  This is exactly what I was after.  Now I can use ImageIntegration to its fullest.

Wade

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Hi Juan,

Thanks for this extra information. Now some of the controls on ImageIntegration can be applied more 'scientifically'.

However, I immediately thought of the current Console output stream being 'piped' into a graphing interface - so the 'useful' image-by-image data could be quickly visualised after each iteration (perhaps even re-using the graphed data from previous iterations until, like the file-cached data, the user chooses to 'clean the slate').

This is what Nikolay and I (well, really it is all Nikolay's hard work :D) have been bashing out in the Blink/Animation script. I think PI has reached the stage where users need to be able to visualise 'data quality' information, and not just for one image, but for a whole series of images. And, in this case, for a series of images under different 'processing conditions'.

It is going to be well-implemented tools like these that allow PI to move to the next stage (IMHO).

Cheers,
« Last Edit: 2010 June 10 23:47:34 by Niall Saunders »
Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Quote
It is going to be well-implemented tools like these that allow PI to move to the next stage (IMHO)

I agree. My intention is to extend the current development tools to cover generation of more and more sophisticated graphical content, including both 2D and 3D representations, and more interaction with images. This will facilitate production of complex analysis tools.

As a matter of fact, Nikolay's blinking script is showing some important deficiencies in the current scripting scheme that must be addressed. For example, this script should allow interaction with the image windows instead of just freezing the whole interface. Some of these limitations are also applicable to the PCL, to some extent. As you can see, there's a lot of work to do in the development tools. We must have good foundations before building the walls.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Christoph Puetz

  • PixInsight Addict
  • ***
  • Posts: 150
  • Peterberg Observatory (Germany, Saarland)
    • Fotos
I'm following this discussion with great interest. But as a newbie  :-[ in advanced processing, do I understand this
correctly: Basically it is a "trial and error" effort to find the best sigma clipping adjustments during integration?
@Juan: Your discussion of the basic possibilities of the integration tool was indeed very helpful for me!

Christoph
Kind regards,
      Christoph
---
ATIK 383L+, Canon EOS 450d, modified,
Canon EOS 500d, 
20" Planewave CDK, 6" APO Starfire Refractor,
Celestron 8", Skywatcher ED80,
Peterberg Observatory (www.sternwarte-peterberg.de)
PixInsight, PHD-Guiding
private URL: www.ccdsky.eu

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Quote
Basically it is a "trial and error" effort to find the best sigma clipping adjustments during integration?

Hi Christoph,

Yes - it is more or less a 'trial and error' iterative process to decide where to set the Sigma Clipping sliders.

My current method is to adjust these whilst aiming (initially) for the total percentage of clipped pixels to be less than 1% of the overall number of pixels in the image frame. You can see these percentages at the end of the Console window output after each iteration (a percentage of low-clipped pixels, a percentage of high-clipped pixels, and the sum of these as an overall percentage).

However, this '1% target' is a purely arbitrary figure - chosen by me 'out of thin air' !!

I am also willing to take into consideration the ability of the Integration process to eliminate satellite and aircraft trails from my Lights, as well as cosmic ray hits from my Darks. So, if necessary, I will adjust the Clipping slider to a point where it NO LONGER eliminates a given anomaly (like a satellite trail, for example). I note the value at which this happens, and the percentage of pixels that are clipped when the trail is NOT eliminated. Then I adjust again and see how many 'extra' pixels are eliminated as a result of deleting the artefact. Then I try to make 'some sort of judgement call' as to where I want to leave the slider, for best effect.

The great thing about this is that, once you get a 'feel' for using the sliders, they are quite quick and easy to set up - especially because ImageIntegration can repeat integrations VERY QUICKLY if you enable the file 'data-caching' feature.

I actually saw the whole process click into place when I was integrating a series of 300s Darks. I had never really been 'aware' of a Cosmic Ray Strike, until I saw ImageIntegration eliminate these 'little worms' - worms that obviously had NOTHING to do with the thermal behaviour of my CCD. I then split my group of Darks into smaller and smaller 'half-samples' until I was actually able to 'see' the frame that had a cosmic ray hit on it. A very 'enlightening' experience.

My suggestion would be to take a group of images (Lights or Darks), start with the clipping sliders 'maxed out', and then 'iterate' as you drop the sliders one at a time. Just click the 'Clip High' box to begin with, and don't bother generating the 'integrated image' at this stage - you don't need it. All you are interested in is the single 'clipped data' image, which you should hit with an AutoSTF every time it is generated. As you lower the slider, you clip more and more pixels, until - eventually - you will start clipping REAL DATA.

Once you start seeing 'structure' in your clipped image you have gone WAY TOO FAR. Have a look at the 'percentages' again, and then try and make a decision on how far 'back' you want to go with the sliders.
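
If it helps, here is a runnable 'toy' version of that loop in JavaScript. Every number in it is invented; in practice the clipped percentage comes from the Console output, and the 'structure' judgement is you looking at the AutoSTF'd clipped-data image after each run:

// Each entry pretends to be one iteration: the 'Clip high' sigma value, the
// clipped-pixel percentage the Console would report, and whether real
// structure (stars, nebulosity) showed up in the clipped-data image.
var iterations = [
   { sigmaHigh: 6.0, clippedPct: 0.05, structure: false },
   { sigmaHigh: 5.0, clippedPct: 0.15, structure: false },
   { sigmaHigh: 4.0, clippedPct: 0.60, structure: false },
   { sigmaHigh: 3.0, clippedPct: 1.80, structure: true  }
];
var chosen = null;
for ( var i = 0; i < iterations.length; ++i )
{
   if ( iterations[i].structure || iterations[i].clippedPct > 1.0 ) // my arbitrary '1% target'
      break;                  // one step too far: real data (or too much of it) is being clipped
   chosen = iterations[i];    // last setting that clipped only spurious pixels
}
console.writeln( "Stop at Clip high = " + chosen.sigmaHigh +
                 " (" + chosen.clippedPct + "% clipped)" );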

Sorry - I can't explain the process much better, and certainly not from a 'scientific' basis. I know how the actual statistical mathematics works - or, at least, Juan graciously took the time to explain it to me here on the Forum several months ago. But all I can do is use that knowledge to help me 'tweak sliders' as I have suggested you do ::)

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Christoph Puetz

  • PixInsight Addict
  • ***
  • Posts: 150
  • Peterberg Observatory (Germany, Saarland)
    • Fotos
Hi Niall, thank you for your reply.
In the first months of working with PI I used to apply the default values. But now I have noticed that - for my Canon camera -
a more "aggressive" clipping really gives better results (without losing details).
So I will use your suggestion of iteratively lowering the clipping values, which works very fast indeed!
Still have to learn a lot - but this is real fun with PI. Since I started using this software my pictures have been getting better; I never noticed
before that I was losing a lot of data when I used some other freeware, because I never had such fine control over the data.

Thanks for your help !!
Christoph
Kind regards,
      Christoph
---
ATIK 383L+, Canon EOS 450d, modified,
Canon EOS 500d, 
20" Planewave CDK, 6" APO Starfire Refractor,
Celestron 8", Skywatcher ED80,
Peterberg Observatory (www.sternwarte-peterberg.de)
PixInsight, PHD-Guiding
private URL: www.ccdsky.eu

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881

I have noted that mostly we need to exclude hot pixels.
Cold pixels can usually be left alone at a lower level.

As Niall says, use cosmic ray hits and satellite trails as an indicator. You will see them in the rejection_high map.

Max

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
There is then also the possibility of analysing the rejection_high and rejection_low pixel images.

If memory serves me correctly - and Juan did answer this some time ago, but I will need to search back in the Forum posts - the 'clipped images' that are generated contain an 'ADU' value at each pixel location (well, it isn't actually anything to do with 'ADU' as such, but let's just call it that because I can't think of a more appropriate name!!) that represents the 'number' of source images whose ADU value was excluded (clipped) from the data used to generate the final (Integrated) ADU value at that location.

As with all PI images, the 'ADU' value will be in the [0.0, 1.0] range - this time the range refers to the total number of images in the original data set. So, for example, an ADU value of 0.125 at a given pixel site in the rejection_high image, from an ImageIntegration run on 32 source images, would mean that 4 (0.125 x 32) images had their pixel data 'rejected' at that pixel site.

On a data set that I have been working on recently, a satellite trail on the rejection_high image had an ADU value of 0.016666666 and there were a total of 60 images in the data-set. As expected, multiplying these together tells me that 'exactly' ONE image contained the satellite trail (I would have been MOST concerned if the ADU had been anything other than 0.016666 ::))
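
If you want the arithmetic as code, here is a trivial JavaScript helper (not a PI function, just the multiplication described above, using the values from my examples):

// Convert a pixel value read from a rejection_high / rejection_low map into
// the number of source frames rejected at that location.
function rejectedFrames( mapValue, nImages )
{
   return Math.round( mapValue*nImages );
}
console.writeln( rejectedFrames( 0.125, 32 ) ); // 4 frames rejected at that pixel
console.writeln( rejectedFrames( 1/60, 60 ) );  // 1 frame: the satellite trail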

At one point I had thought that the rejection images might have helped in the creation of an appropriate DefectMap image - but this is NOT possible - they simply do NOT contain ANY information that helps in that task.

What is also available in each ImageIntegration run is the percentage, and number, of pixels being clipped from EACH image (as well as those statistics from the 'overall run', as described above). Copying the Console output to a TXT file, and then opening that in the likes of MS Excel, will also help visualise any images that are 'away from the median'. It can be useful to then open those and try to establish 'why' they are so different. It may just be that you have let a 'rubbish' image slip through the net - and TOTAL exclusion of that one image may make the Clipping process MUCH tighter on the remaining data.
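
Even without Excel, a few lines of JavaScript will do the same job. The percentages here are invented; you would type them in by hand from the Console output:

// Flag images whose clipped-pixel percentage sits far from the median --
// candidates for closer inspection, or for exclusion from the integration.
var pct = [ 0.41, 0.38, 0.44, 0.40, 2.95, 0.39, 0.42 ]; // one value per image (invented)
var sorted = pct.slice().sort( function( a, b ) { return a - b; } );
var median = sorted[Math.floor( sorted.length/2 )];
for ( var i = 0; i < pct.length; ++i )
   if ( pct[i] > 3*median )
      console.writeln( "Image #" + i + " clips " + pct[i] + "% (median is " +
                       median + "%) -- worth opening and checking" );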

At the end of the day, you will only get 'mediocre' results if you blindly let the software do ALL the work. Take some time and just double-check the steps you are asking PI to complete for you. It is a LOT easier to work with the BEST of your data than to struggle 'fixing up' a snafu'd image that need not have been as bad, had you only eliminated the 'rubbish' right from the outset.

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
So in real life, how close should one expect to get to the theoretical numbers? I have a stack of 97 DSLR images (120s exposures of the NGC7000 area). After tweaking the rejection thresholds I only managed to get up to around Dss of 4.0, 4.4, 4.0 (R, G, B). While I was working on a region of interest in a brighter part of the frame, I was at about 6.0, 3.8, 4.0 if I remember correctly. Does this just mean that my subframes are underexposed? My histograms were very far to the right, but I was working during a full moon and under badly light-polluted and humid skies.