Quantification of image noise, literature

jan.frenzel

New member
Hi everybody.

I am interested in quantifying image noise, and I could use some help.

Just some background information. I am a hobby astronomer (for about a year now) and I use PixInsight (advanced beginner level). Recently I figured out that I could also use PixInsight in my job. I am a materials scientist, working at the Ruhr University in Bochum. We do a lot of electron microscopy for microstructure characterization of structural and functional engineering materials. It turns out that we can use PixInsight processes very well to improve the quality of electron diffraction images, where we also often have quite poor signal-to-noise ratios.

In the next months, we will probably write a short publication on using PI for the analysis of electron diffraction patterns. Therefore, I could use some help. I would be very happy to get feedback on the points listed below:

1) Where can I get more information on noise and noise determination in images? My impression is that noise estimation in general is quite a complex topic (I read some older posts). I know that PI often uses the k-sigma noise estimate. It would be great to read more about this procedure (is there a citable reference?). Our image files do not contain stars. Most parts are dark grey, and there are various lines resulting from electron diffraction. I would like to be sure that this procedure yields reasonable noise data for our images.

2) Is there a way to perform a noise estimate on an arbitrary image file (e.g. a single file which I just opened)? It seems to me that these noise estimates only get reported after an integration process. I would also like to have these data for selected subframes.

3) I know that there seem to be other procedures for noise estimation. For example, there is the signal-to-noise ratio (SNR) script by Hartmut Bornemann. One can use it to assess the SNR of the currently active image. However, I do not know how it works. I would also like to get reasonable quantitative data, e.g. on noise in bias images. I guess a signal-to-noise ratio for bias images does not make sense, since there should not actually be a real signal in these bias frames...

4) As a beginner user of PI I do not have very deep knowledge of the different processes (e.g. integration with scale factors, pixel rejection algorithms, the advantages of using bias-scaled darks, arcsinh stretching, etc.). I guess that at least some of these processes must have been described in the (citable) literature at some point. Are there any overview books or papers where I can find references?

Ok... so many things....
It would be great to get some feedback; that would really help.

Thank you very much and best regards
Jan Frenzel
 
well, at least for #2 there is a script called NoiseEvaluation which can get the noise estimate for any open image.

in addition somewhere here in the forum juan posted a version of the NoiseEvaluation script which can compare two images - i guess the images need to be normalized to each other first before comparing noise - but offhand i don't remember the thread in which he posted it.

rob
 
Here is my take.

Noise measurement of images fundamentally requires guesswork: you must guess what is noise and what is signal, and good guessing is hard to do in general.

For example, the dark background of a typical astronomical image is textured. Some of this texture is in fact a combination of detector noise and photon noise from the sky background, but some is also due to unresolved stars, nebulae, and galaxies. What looks like noise in the dark background is therefore at least partially not noise, but signal from these unresolved sources.

So how do you measure noise accurately? My conclusion: you don't. Rather than trying to guess, you simply estimate it using a good detector/signal model.

Inputs are detector gain (g) in e-/DN, detector read noise (r) in e-, and the intensity level (l) in DN in your bias-subtracted frame that you want to find a noise value for. Note that noise in an image is significantly dependent on intensity level due to the presence of photon noise. As a result an image does not have "one noise value", rather it has many that vary with intensity level.

Given those inputs, a good estimate of the standard deviation of gaussian noise in your image at that intensity level is sqrt(r^2 + g*l) in e-. You can convert this noise value in e- to DN by dividing it by gain (g).

Derivation: Noise variance due to detector read noise equals r^2. Noise variance at intensity level (l) in DN due to photon noise equals g*l. These noise sources are independent so the noise variance of their combination equals their sum. Standard deviation equals the square root of their combined variances.
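The estimate above is easy to compute anywhere; here is a minimal sketch in plain JavaScript (the numbers in the example are arbitrary, not from any particular camera):

```javascript
// Noise model sketch: sigma = sqrt(r^2 + g*l) in e-, where r is read noise
// (e-), g is gain (e-/DN) and l is the intensity level (DN) in a
// bias-subtracted frame.
function noiseSigmaE(r, g, l) {
   // read-noise variance r^2 plus photon-noise variance g*l, both in e-^2
   return Math.sqrt(r*r + g*l);
}

// Convert a noise value from e- to DN by dividing by the gain.
function noiseSigmaDN(r, g, l) {
   return noiseSigmaE(r, g, l)/g;
}

// Example: r = 5 e-, g = 2 e-/DN, l = 100 DN
// noiseSigmaE(5, 2, 100)  -> 15 e-   (sqrt(25 + 200) = sqrt(225))
// noiseSigmaDN(5, 2, 100) -> 7.5 DN
```

Note how the same frame has a different noise level at every intensity, since the g*l photon term grows with the signal.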
 
Hi!

Thank you for the information. One more thing... do you know good literature which covers this. E.g. books on imaging sensors?

Have a nice weekend,
Jan
 
The bit of digging that I have done on the topic of quantifying noise in images does, indeed, suggest that it is as much guesswork as science. So I just wanted to add an apparent conclusion based on this assessment: weighting frames for stacking using SNR is largely invalid. So I use SubframeSelector plus a spreadsheet. Usually (for most imaging rigs and targets) the number of stars is a good measure for weighting frames, along with eccentricity and FWHM (and whatever else fits the target).

Love to hear arguments to the contrary.
 
Jan,

PixInsight developers usually refer to articles by Jean-Luc Starck (who is known for work on wavelets and sparsity). Richard Berry's "Handbook of Astronomical Image Processing" is another reference used by PI developers, but I might be wrong. Anyway, most of the articles at https://pixinsight.com/resources/ have proper references, so maybe that is the best place to look for something relevant to your needs.
 
aworonow,

IMO the number of stars as an image assessment metric should be used only if the FWHM is, say, 2 pixels or more. Below that it can have problems due to hot pixel rejection.

For example, my binned FSQ106 setup typically has a FWHM of 1 pixel. As seeing conditions and focus improve, the FWHM decreases (to 0.9 pixels or even smaller), and more and more stars get falsely rejected as hot pixels. This results in a smaller number of stars for the better frames, so I have found the number of stars to be unreliable on my setup.
 
Interesting thread. I'd like to add just a few resources to facilitate some useful testing work.

To demonstrate the accuracy of the noise estimation algorithms we have implemented in PixInsight, let's consider the following script:

Code:
#include <pjsr/ImageOp.jsh>

/*
 * A script to check accuracy of noise estimates computed with the
 * multiresolution support method of Jean-Luc Starck et al. (known as MRS noise
 * evaluation in PixInsight).
 *
 * Thanks to Mike Schuster for pointing out important errors in the initial
 * implementation.
 */

/*
 * This is the standard deviation of the additive Gaussian noise that will be
 * added to the target image, in the normalized [0,1] range. Change it when
 * performing several tests. Something around 0.001 can be reasonable for
 * simulation of raw deep-sky images.
 */
#define NOISE_SIGMA           0.0005

/*
 * Simulated sensor gain in e-/DN
 */
#define GAIN                  1.0

/*
 * Set to true to apply Poisson distributed noise to the image.
 */
#define APPLY_POISSON_NOISE   true

/*
 * Set to true to add Gaussian distributed noise to the image.
 */
#define ADD_GAUSSIAN_NOISE    true

/*
 * Random Gaussian deviates.
 */
function GaussianRandomDeviates()
{
   this.V1 = 0;
   this.V2 = 0;
   this.S = 0;
   this.phase = 0;

   /*
    * Returns a random deviate from a Gaussian distribution with zero mean and
    * unit standard deviation.
    */
   this.next = function()
   {
      /*
       * Marsaglia polar method.
       */
      let X;
      if ( this.phase == 0 )
      {
         do
         {
            let U1 = Math.random();
            let U2 = Math.random();
            this.V1 = 2*U1 - 1;
            this.V2 = 2*U2 - 1;
            this.S = this.V1*this.V1 + this.V2*this.V2;
         }
         while ( this.S >= 1 || this.S == 0 );

         X = this.V1 * Math.sqrt( -2*Math.ln( this.S )/this.S );
      }
      else
         X = this.V2 * Math.sqrt( -2*Math.ln( this.S )/this.S );

      this.phase = 1 - this.phase;
      return X;
   };
}

/*
 * Random Poisson deviates.
 */
function PoissonRandomDeviates()
{
   /*
    * Returns a random Poisson deviate for a given pixel value.
    */
   this.next = function( value )
   {
      if ( value < 30 )
      {
         /*
          * Implementation of the algorithm by Donald E. Knuth, 1969.
          *
          * Slow (unusable) for large values.
          */
         let L = Math.exp( -value );
         let k = 0;
         let p = 1;
         do
         {
            ++k;
            p *= Math.random();
         }
         while ( p >= L );
         return k-1;
      }
      else
      {
         /*
          * Code adapted from 'Random number generation in C++', by John D. Cook:
          *    https://www.johndcook.com/blog/cpp_random_number_generation/
          *
          * The algorithm is from "The Computer Generation of Poisson Random
          * Variables" by A. C. Atkinson, Journal of the Royal Statistical Society
          * Series C (Applied Statistics) Vol. 28, No. 1. (1979)
          * 
          * Slow (unusable) for small values.
          */
         let c = 0.767 - 3.36/value;
         let beta = Math.PI/Math.sqrt( 3*value );
         let alpha = beta*value;
         let k = Math.ln( c ) - value - Math.ln( beta );
         for ( ;; )
         {
            let u = Math.random();
            let x = (alpha - Math.ln( (1 - u)/u ))/beta;
            let n = Math.trunc( Math.floor( x + 0.5 ) );
            if ( n >= 0 )
            {
               let v = Math.random();
               let y = alpha - beta*x;
               let tmp = 1 + Math.exp( y );
               let lhs = y + Math.ln( v/tmp/tmp );
               let rhs = k + n*Math.ln( value ) - this.logFactorial( n );
               if ( lhs <= rhs )
                  return n;
            }
         }
      }
   };

   this.logFactorial = function( n )
   {
      let x = n + 1;
      return (x - 0.5)*Math.ln( x ) - x + 0.5*Math.ln( 2*Math.PI ) + 1/(12*x);
   };
}

/**
 * Estimation of the standard deviation of the noise, assuming a Gaussian
 * noise distribution.
 *
 * - Use MRS noise evaluation when the algorithm converges for 4 >= J >= 2
 *
 * - Use k-sigma noise evaluation when either MRS doesn't converge or the
 *   length of the noise pixels set is below a 1% of the image area.
 *
 * - Automatically iterate to find the highest layer where noise can be
 *   successfully evaluated, in the [1,3] range.
 */
function UnscaledNoiseEvaluation( image )
{
   let a, n = 4, m = 0.01*image.selectedRect.area;
   for ( ;; )
   {
      a = image.noiseMRS( n );
      if ( a[1] >= m )
         break;
      if ( --n == 1 )
      {
         console.warningln( "<end><cbr>** Warning: No convergence in MRS noise evaluation routine - using k-sigma noise estimate." );
         a = image.noiseKSigma();
         break;
      }
   }
   this.sigma = a[0]; // estimated stddev of Gaussian noise
   this.count = a[1]; // number of pixels in the noisy pixels set
   this.layers = n;   // number of layers used for noise evaluation
}

function main()
{
   let window = ImageWindow.activeWindow;
   if ( window.isNull )
      throw new Error( "No active image" );

   console.show();
   console.writeln( "<end><cbr><b>" + window.currentView.fullId + "</b>" );
   console.writeln( "Generating synthetic Gaussian+Poisson noise..." );
   console.flush();

   let G = new GaussianRandomDeviates;
   let P = new PoissonRandomDeviates;

   let view = window.currentView;
   let image = view.image;
   view.beginProcess();
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      image.selectedChannel = c;
      image.initSampleIterator();
      do
      {
         let v = image.sampleValue();
         // Apply Poisson noise to the current pixel value.
         if ( APPLY_POISSON_NOISE )
            v = P.next( 65535*GAIN*v )/65535/GAIN;
         // Add Gaussian white noise.
         if ( ADD_GAUSSIAN_NOISE )
            v += NOISE_SIGMA*G.next();
         image.setSampleValue( Math.range( v, 0, 1 ) );
      }
      while ( image.nextSample() );
   }
   view.endProcess();

   console.writeln( "<end><cbr>Unscaled Gaussian noise standard deviation estimates:" );
   console.writeln( "Ch |   noise   |  count(%) | layers |" );
   console.writeln(     "---+-----------+-----------+--------+" );
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      console.flush();
      image.selectedChannel = c;
      let E = new UnscaledNoiseEvaluation( image );
      console.writeln( format( "%2d | <b>%.3e</b> |  %6.2f   |    %d   |", c, E.sigma, 100*E.count/image.selectedRect.area, E.layers ) );
      console.flush();
   }
   console.writeln(     "---+-----------+-----------+--------+" );
}

main();

The script adds a mix of synthetic Gaussian and Poisson distributed noise to the active image. The NOISE_SIGMA macro defines the standard deviation of the generated Gaussian noise. After adding noise, the script computes noise estimates measured on the image.

If the original image has no significant noise, the computed estimates should be close to NOISE_SIGMA, ideally with errors smaller than about 1%. The Poisson component degrades noise estimation.

The easiest way to use this script is to apply it to a noise-free image. By noise-free I mean an image where the standard deviation of the noise is in the range of a few DN units, say about 10^-5 in the normalized [0,1] range. For your convenience, here is one such image that simulates a typical raw deep-sky image.

You can use the script on real raw frames as well. In that case you should get final noise estimates close to the quadrature sum:

Code:
sqrt( s1^2 + s2^2 )

where s1 is the standard deviation of the noise before applying the script, and s2 is the standard deviation of the noise added by the script (the value of NOISE_SIGMA, which is 0.0005 by default).
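As a small sketch, the quadrature sum is straightforward to compute (the 0.0012 below is an arbitrary example value, not a measurement):

```javascript
// Independent noise sources combine in quadrature: their variances add,
// so the combined standard deviation is sqrt(s1^2 + s2^2).
function quadratureSum(s1, s2) {
   return Math.sqrt(s1*s1 + s2*s2);
}

// Example: a frame measured at s1 = 0.0012 before running the script,
// with the default NOISE_SIGMA = 0.0005 added by it:
// quadratureSum(0.0012, 0.0005) -> 0.0013
```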

As for the scaled noise estimation scripts mentioned above, I'll copy them here for your convenience:

For normal (non-mosaiced) frames:

Code:
/**
 * Estimation of the standard deviation of the noise, assuming a Gaussian
 * noise distribution.
 *
 * - Use MRS noise evaluation when the algorithm converges for 4 >= J >= 2
 *
 * - Use k-sigma noise evaluation when either MRS doesn't converge or the
 *   length of the noise pixels set is below a 1% of the image area.
 *
 * - Automatically iterate to find the highest layer where noise can be
 *   successfully evaluated, in the [1,3] range.
 *
 * Returned noise estimates are scaled by the Sn robust scale estimator of
 * Rousseeuw and Croux.
 */
function ScaledNoiseEvaluation( image )
{
   let scale = image.Sn();
   if ( 1 + scale == 1 )
      throw Error( "Zero or insignificant data." );

   let a, n = 4, m = 0.01*image.selectedRect.area;
   for ( ;; )
   {
      a = image.noiseMRS( n );
      if ( a[1] >= m )
         break;
      if ( --n == 1 )
      {
         console.writeln( "<end><cbr>** Warning: No convergence in MRS noise evaluation routine - using k-sigma noise estimate." );
         a = image.noiseKSigma();
         break;
      }
   }
   this.sigma = a[0]/scale; // estimated scaled stddev of Gaussian noise
   this.count = a[1]; // number of pixels in the noisy pixels set
   this.layers = n;   // number of layers used for noise evaluation
}

function main()
{
   let window = ImageWindow.activeWindow;
   if ( window.isNull )
      throw new Error( "No active image" );

   console.show();
   console.writeln( "<end><cbr><b>" + window.currentView.fullId + "</b>" );
   console.writeln( "Calculating scaled noise standard deviation..." );
   console.flush();

   console.abortEnabled = true;

   let image = window.currentView.image;
   console.writeln( "<end><cbr>Ch |   noise   |  count(%) | layers |" );
   console.writeln(               "---+-----------+-----------+--------+" );
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      console.flush();
      image.selectedChannel = c;
      let E = new ScaledNoiseEvaluation( image );
      console.writeln( format( "%2d | <b>%.3e</b> |  %6.2f   |    %d   |", c, E.sigma, 100*E.count/image.selectedRect.area, E.layers ) );
      console.flush();
   }
   console.writeln(               "---+-----------+-----------+--------+" );
}

main();

For Bayer CFA frames:

Code:
/**
 * Estimation of the standard deviation of the noise, assuming a Gaussian
 * noise distribution.
 *
 * - Use MRS noise evaluation when the algorithm converges for 4 >= J >= 2
 *
 * - Use k-sigma noise evaluation when either MRS doesn't converge or the
 *   length of the noise pixels set is below a 1% of the image area.
 *
 * - Automatically iterate to find the highest layer where noise can be
 *   successfully evaluated, in the [1,3] range.
 *
 * Returned noise estimates are scaled by the Sn robust scale estimator of
 * Rousseeuw and Croux.
 */
function ScaledNoiseEvaluation( image )
{
   let scale = image.Sn();
   if ( 1 + scale == 1 )
      throw Error( "Zero or insignificant data." );

   let a, n = 4, m = 0.01*image.selectedRect.area;
   for ( ;; )
   {
      a = image.noiseMRS( n );
      if ( a[1] >= m )
         break;
      if ( --n == 1 )
      {
         console.writeln( "<end><cbr>** Warning: No convergence in MRS noise evaluation routine - using k-sigma noise estimate." );
         a = image.noiseKSigma();
         break;
      }
   }
   this.sigma = a[0]/scale; // estimated scaled stddev of Gaussian noise
   this.count = a[1]; // number of pixels in the noisy pixels set
   this.layers = n;   // number of layers used for noise evaluation
}

/*
 * Returns a Bayer CFA frame converted to a 4-channel image. Individual CFA
 * components are written to output channels 0, ..., 3 as follows:
 *
 * 0 | 1
 * --+--
 * 2 | 3
 *
 * Where CFA element #0 corresponds to the top left corner of the input frame.
 * The output image will have half the dimensions of the input frame.
 *
 * If specified, the k0, ..., k3 scalars will multiply their respective output
 * channels 0, ..., 3.
 */
function BayerCFAToFourChannel( image, k0, k1, k2, k3 )
{
   if ( k0 == undefined )
      k0 = 1;
   if ( k1 == undefined )
      k1 = 1;
   if ( k2 == undefined )
      k2 = 1;
   if ( k3 == undefined )
      k3 = 1;

   let w = image.width;
   let h = image.height;
   let w2 = w >> 1;
   let h2 = h >> 1;
   let rgb = new Image( w2, h2, 4 );
   for ( let y = 0, j = 0; y < h; y += 2, ++j )
      for ( let x = 0, i = 0; x < w; x += 2, ++i )
      {
         rgb.setSample( k0*image.sample( x,   y   ), i, j, 0 );
         rgb.setSample( k1*image.sample( x+1, y   ), i, j, 1 );
         rgb.setSample( k2*image.sample( x,   y+1 ), i, j, 2 );
         rgb.setSample( k3*image.sample( x+1, y+1 ), i, j, 3 );
      }
   return rgb;
}

#define K0 1.0
#define K1 1.0
#define K2 1.0
#define K3 1.0

function main()
{
   let window = ImageWindow.activeWindow;
   if ( window.isNull )
      throw new Error( "No active image" );

   console.show();
   console.writeln( "<end><cbr><b>" + window.currentView.fullId + "</b>" );
   console.writeln( "Scaled Noise Evaluation Script - Bayer CFA Version." );
   console.writeln( "Calculating scaled noise standard deviation..." );
   console.flush();

   console.abortEnabled = true;

   let image = BayerCFAToFourChannel( window.currentView.image, K0, K1, K2, K3 );
   console.writeln( "<end><cbr>Ch |   noise   |  count(%) | layers |" );
   console.writeln(               "---+-----------+-----------+--------+" );
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      console.flush();
      image.selectedChannel = c;
      let E = new ScaledNoiseEvaluation( image );
      console.writeln( format( "%2d | <b>%.3e</b> |  %6.2f   |    %d   |", c, E.sigma, 100*E.count/image.selectedRect.area, E.layers ) );
      console.flush();
   }
   console.writeln(               "---+-----------+-----------+--------+" );
}

main();

As noted many times on this forum, scaled noise estimates must be used for noise-based comparisons performed among different images.
 
Hi Juan,

A couple of comments on your post and simulation.

1) You claim your sample image is both noise-free and simulates a typical raw deep-sky image. A raw image cannot be noise-free, at least with current technology and natural physics, due to the presence of both detector read noise and shot noise. So IMO a noise-free sample image cannot simulate a typical raw image.

2) Your simulation adds Poisson noise. Poisson noise is not additive. If P(x) is a Poisson distribution with expected value x, then of course mean(P(x)) equals x. So I suggest the assignment "v += P.next(v)/65535" be changed to something like "v = P.next(65535 * GAIN * v) / (65535 * GAIN)", where GAIN might be 1 e-/DN or something similar to match typical setups. This expression includes the necessary "normalized to DN to e- to DN to normalized" conversions. Also, your PoissonRandomDeviates.next() function may be too slow in practice due to the large number of iterations required for arguments in the 0 to 65535 e- range (ie with GAIN equal to 1).

3) Your simulation scales Gaussian noise by inverse signal. I see no natural physical motivation for such scaling. Additive Gaussian noise (with zero mean) adds at all levels equally, which matches the impact of detector read noise for example.

4) With these changes, IMO there is no need for image median offsetting to preserve STF, as the impact of noise addition is mean preserving.

5) As a sanity check, you might try estimating or measuring the noise in both a bias frame and a flat frame. Of course bias frame noise in e- should be slightly larger than detector read noise due to the presence of fixed pattern noise. Flat frame noise in e- should be vastly larger (ie 10 to 20x or more) due to the presence of shot noise. Of course a flat frame is an extreme example of a class of "sky limited" images, ones where shot noise dominates detector read noise. The implication, here again, is that the noise level in an image varies widely with local exposure.

6) Consider the Sony ICX834 detector, used in the QSI 6120 camera. Read noise 1.6 e-, full well 9,000 e-. An image exposed just enough to be sky limited will have about 5 e- of noise in the background (ie 3x read noise) and 95 e- of noise in the near saturated star cores (ie square root 9,000). So a typical sky-limited image would have close to a 20x range of noise across the frame.
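The arithmetic in point 6 can be checked in a few lines (the figures are the ones quoted above for the ICX834):

```javascript
// Sony ICX834 figures from the post: read noise 1.6 e-, full well 9000 e-.
let readNoise = 1.6;
let fullWell  = 9000;

// Sky-limited background: shot noise about 3x the read noise.
let backgroundNoise = 3*readNoise;          // 4.8 e-, ie "about 5 e-"
// Near-saturated star cores: shot noise is sqrt(full well).
let coreNoise = Math.sqrt(fullWell);        // ~94.9 e-, ie "about 95 e-"
// Range of noise levels across a single frame:
let noiseRange = coreNoise/backgroundNoise; // ~19.8, ie close to 20x
```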
 
Hi Mike,

Yes, a serious mistake on a basic thing. Poisson noise is not added, but applied. In fact this is just what we do in the NoiseGenerator tool:

https://gitlab.com/pixinsight/PCL/blob/master/src/modules/processes/NoiseGeneration/NoiseGeneratorInstance.cpp#L160

but I forgot it for some reason (getting old?), which shames me. As for the scaling of Gaussian noise, of course there is no reason to scale it if what we want is a simulation of read noise. My intention was to simulate how actual images look, but this was not rigorous. And you're right: with these modifications there is no reason to preserve background levels, because they don't change.

I have just changed the script with a modified version that fixes all of these problems. It surely could be improved; please feel free to do that if you want. Thank you for your help.

So IMO a noise-free sample image cannot simulate a typical raw image.

What I mean is that the noise-free image that I have posted can be used to simulate a typical raw frame by the addition of synthetic noise with the script. It is actually a small crop of a real raw frame, which I have denoised by removing wavelet layers.

EDIT - I have updated the script again to fix a mistake in the random Poisson deviates generator.
 
Thank you Juan, I appreciate your comments and updates.

Regarding your comment "My intention was to simulate how actual images look like, but this was not rigorous."

So that raises the question: brighter structures have more shot noise, yet they don't look noisier. Why?

IMO two reasons:

1) Brighter structures have larger SNR (ie noise is larger, but signal is larger even more). What we see is SNR. And so they don't look noisier; they look less noisy even though they actually have more noise.

2) Stretching is done on dim structures; we don't stretch bright ones. So there is less chance of seeing the actual pixel variations due to noise in the bright areas.
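Point 1 is easy to see for a shot-noise-limited pixel, where noise = sqrt(signal) and hence SNR = sqrt(signal) as well (example values are arbitrary):

```javascript
// For a shot-noise-limited pixel with signal S in e-:
// noise = sqrt(S) and SNR = S/sqrt(S) = sqrt(S).
function shotNoise(S) { return Math.sqrt(S); }
function snr(S)       { return S/shotNoise(S); }

// Dim pixel:    S = 100 e-   -> noise 10 e-,  SNR 10
// Bright pixel: S = 10000 e- -> noise 100 e-, SNR 100
// The bright pixel has 10x more noise, yet 10x better SNR.
```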
 
1) Brighter structures have larger SNR (ie noise is larger, but signal is larger even more). What we see is SNR. And so they don't look noisier; they look less noisy even though they actually have more noise.

2) Stretching is done on dim structures; we don't stretch bright ones. So there is less chance of seeing the actual pixel variations due to noise in the bright areas.

I agree completely. This is why I implemented an automatic linear mask generation feature in some of our noise reduction tools, such as MultiscaleLinearTransform and MultiscaleMedianTransform, which are intended to work on linear images.

On a linear image, an inverted duplicate of the image (with optional linear operations applied to control it, such as an amplification factor and a smoothing convolution) is a very effective and natural way to intensify noise reduction in the regions of the image where the noise is more visible, while protecting bright structures. The reason is the correlation between SNR and brightness that you have described.
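As a rough sketch of the inverted-mask idea (not the actual MultiscaleLinearTransform implementation; the amplification factor k and the clamping are illustrative assumptions):

```javascript
// Inverted linear mask: dark pixels (low SNR) get mask values near 1, so
// noise reduction acts at full strength there; bright pixels (high SNR)
// get values near 0 and are protected. k is an illustrative amplification
// factor; the result is clamped to the normalized [0,1] range.
function invertedMask(v, k) {
   return Math.min(1, Math.max(0, 1 - k*v));
}

// invertedMask(0.01, 1) -> 0.99  (dark background: full-strength denoising)
// invertedMask(0.5, 2)  -> 0     (bright structure: fully protected)
```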

 