Author Topic: PJSR ImageStatistics vs Statistics process results  (Read 7442 times)

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
PJSR ImageStatistics vs Statistics process results
« on: 2014 December 07 08:05:12 »
I'm working on a script that uses the PJSR ImageStatistics object, and to debug it I am comparing results with the 'Statistics' process applied to the same images.  I am getting a few discrepancies and I'm not sure why.  Attached is a side-by-side comparison of the results from different images:

1. First (left side) I created a 10 wide x 1 high image and set the pixels to 0.0, 0.1 ... 0.9 so that I had a known starting point.  The results are mostly what I would have expected.

If I uncheck the 'Normalized' and 'Unclipped' check boxes in the Statistics process, I get:

- Count 90% (rejects the pixel with a value of 0)
- Min 0.1, Max 0.9
- Median 0.5
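For reference, the clipped figures above can be reproduced with a short plain-JavaScript sketch (ordinary JS, not PJSR; the strict exclusion of the 0 and 1 bounds is my assumption about what 'Unclipped' unchecked does):

```javascript
// Test image: ten samples 0.0 .. 0.9, as described above.
const samples = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];

// Clip to the open interval (0, 1): samples equal to the bounds are rejected.
const clipped = samples.filter(v => v > 0 && v < 1);

function median(a) {
   const s = a.slice().sort((x, y) => x - y);
   const n = s.length;
   return (n & 1) ? s[(n - 1) >> 1] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
}

console.log(100 * clipped.length / samples.length);      // 90 (percent)
console.log(Math.min(...clipped), Math.max(...clipped)); // 0.1 0.9
console.log(median(clipped));                            // 0.5
```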

2. I repeat the test with my script: I pass the same image to an instance of the ImageStatistics object, set the .rejectionLowEnabled and .rejectionHighEnabled properties to true, .rejectionLow = 0 and .rejectionHigh = 1.  I assume this will reproduce the behaviour of the Statistics process as described above.  Indeed I get the same results for the three stats above and the others listed, so far so good, except for:

Problem 1:  The sqrt of BWMV from the Statistics process = 0.27254 but from the ImageStatistics object = 0.26439.  Why is this? Every other statistic (including all the options not shown in my summary image) produces matching results.

(Note: I am aware that the Statistics process generates the square roots of BWMV and PBMV - it says so in the results, though not in the checkboxes, which is potentially misleading.  I am taking the square roots of the BWMV and PBMV values generated by the ImageStatistics object, so that isn't the problem.)

Next I check the 'Unclipped' checkbox on the statistics process and get these results:

- Count 100% (includes the pixel with a value of 0)
- Min 0.0, Max 0.9
- Median 0.45

I repeat the test with my script, this time with .rejectionLowEnabled and .rejectionHighEnabled = false.  I still set .rejectionLow = 0 and .rejectionHigh = 1 (just part of the script defaults).  (Note: I assume the rejection threshold properties will be ignored since rejection has been disabled, and that this should produce the same behaviour as the Statistics process with 'Unclipped' checked.)  Indeed I get the same results from the Statistics process and the ImageStatistics object (as I would hope), except for sqrt(BWMV), which again differs between the two (0.29955 vs 0.29261) - this is Problem 1 again.

I next repeated the same tests on a real image, the Jellyfish nebula (right side of attachment), processed to the non-linear stage so that pixels cover the whole 0 .. 1 real number range.  In the interests of brevity I present results for the red channel only.  The results are again as I would expect, except that the value of sqrt(BWMV) does not match between the Statistics process and the ImageStatistics object - Problem 1 again.  (Note that the values for 'minimum' under the 'clipped' heading show as 0.0000, which makes it look as if rejection is not working; in fact the true values are just very small numbers greater than zero, so at this precision they display as zero.)

3. Finally, I performed the same set of tests on a linear, non-debayered image of M33 (centre).  I think the fact that this is a CFA image can be ignored as a cause of problems - it's just a single-channel greyscale image, after all (not a three-channel RGB CFA image).  The tests this time produce some very odd-looking results - and I have double-checked and re-run the tests to make sure I haven't mixed up settings or images:

I ran the tests with 'Unclipped' unchecked in the Statistics process, and for the ImageStatistics object with .rejectionLowEnabled and .rejectionHighEnabled set to true, .rejectionLow = 0 and .rejectionHigh = 1.  Thus I'd again expect the same results from the two methods:

- Count is 99.99966% with a minimum of 0.00190 and a maximum of 0.49729 in both cases, suggesting that the same set of pixels has been used by the process and the script.

- Problem 2: Median, avgDev, MAD and sqrt(PBMV) all differ between the process and the script.  Why should this be?  The other images produced matching results, and I'm using the same code and settings, so this test should too.  I suspect the root cause lies with the median: if that is calculated differently, then avgDev, MAD, BWMV and PBMV would necessarily differ as well, since they all rely on the value of the median.
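The dependency on the median is easy to illustrate with a tiny plain-JavaScript example (ordinary JS, not PJSR): MAD is defined as the median of absolute deviations from a center, so a shifted center changes the dispersion estimate even for identical data:

```javascript
// Median of a numeric array.
function median(a) {
   const s = a.slice().sort((x, y) => x - y);
   const n = s.length;
   return (n & 1) ? s[(n - 1) >> 1] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
}

// MAD about an arbitrary center value.
function madAbout(a, center) {
   return median(a.map(x => Math.abs(x - center)));
}

const data = [0.1, 0.2, 0.3, 0.4, 0.5];

// Same data, two candidate centers: the dispersion estimates disagree.
console.log(madAbout(data, 0.3));  // ≈ 0.1
console.log(madAbout(data, 0.35)); // ≈ 0.15
```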

- sqrt(BWMV) is also different between the process and the script - I can't tell whether this is purely due to Problem 1 as above, or whether Problem 2 (the difference in median) contributes as well or instead.

When I run the tests with 'Unclipped' checked or .rejectionLowEnabled and .rejectionHighEnabled set to false I get a similar result to the previous tests, i.e. all the figures match between the process and the script with the exception of sqrt(BWMV) which differs as before.

- What really doesn't make sense to me at all in this case is comparing the values of Median for the ImageStatistics object between the clipped and unclipped scenarios.

Clipped: Min = 0.00190, Max = 0.49729, Median = 0.00672
Unclipped: Min = 0.00000, Max = 0.49729, Median = 0.00722

This just looks wrong to me.  In the unclipped scenario we've added a bunch of pixels with a value of zero to the set, yet the median has increased when it should have decreased or stayed the same!  (Let's call this Problem 3.)  You'll note that the maximum value is unchanged (and I have verified that it is identical to a precision of 17 digits), so the maximum cannot be pulling the median towards the high end of the set despite the zeroes added at the low end - so what gives?  In the case of the BWMV discrepancies I might accept 'rounding errors' of some sort, but medians should not be subject to such problems, since computing a median just picks a value from the set (or at worst averages the two middle values).

Any advice gratefully received!


« Last Edit: 2014 December 07 08:43:11 by IanL »

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PJSR ImageStatistics vs Statistics process results
« Reply #1 on: 2014 December 07 12:13:41 »
Quote
Problem 1:  The sqrt of BWMV from the Statistics process = 0.27254 but from the ImageStatistics object = 0.26439.  Why is this? Every other statistic (including all the options not shown in my summary image) produces matching results.

The ImageStatistics object is a bit 'old' and still computes the biweight midvariance with respect to a normalized dispersion reference, MADN = 1.4826*MAD. However, the PixInsight Core application (and hence the Statistics tool, which calls the core to compute image properties), as well as the Image.BWMV() method, compute the biweight midvariance with respect to MAD (as described e.g. in Wilcox's Introduction to Robust Estimation and Hypothesis Testing).
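This explanation can be checked numerically.  Below is a plain-JavaScript sketch (ordinary JS, not PJSR) of the biweight midvariance as given in Wilcox, with tuning constant k = 9, applied to the ten-sample test image from the first post.  With sigma = MAD it reproduces the Statistics figure, and with sigma = 1.4826*MAD it reproduces the old ImageStatistics figure:

```javascript
// Median of a numeric array.
function median(a) {
   const s = a.slice().sort((x, y) => x - y);
   const n = s.length;
   return (n & 1) ? s[(n - 1) >> 1] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
}

// Median absolute deviation about the center m.
function mad(a, m) {
   return median(a.map(x => Math.abs(x - m)));
}

// Biweight midvariance (Wilcox's formulation) about the sample median,
// with dispersion reference sigma and tuning constant k.
function bwmv(a, sigma, k) {
   const m = median(a);
   let num = 0, den = 0;
   for (const x of a) {
      const u = (x - m) / (k * sigma);
      if (Math.abs(u) < 1) {
         const t = 1 - u * u;                 // (1 - u^2)
         num += (x - m) * (x - m) * t * t * t * t;
         den += t * (1 - 5 * u * u);
      }
   }
   return a.length * num / (den * den);
}

const data = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
const MAD = mad(data, median(data));

const sqrtBwmvMAD = Math.sqrt(bwmv(data, MAD, 9));           // ≈ 0.2996: Statistics / Image.BWMV()
const sqrtBwmvMADN = Math.sqrt(bwmv(data, 1.4826 * MAD, 9)); // ≈ 0.2926: old ImageStatistics
console.log(sqrtBwmvMAD, sqrtBwmvMADN);
```

These two values match the 0.29955 vs 0.29261 discrepancy reported above for the unclipped ten-sample image.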

Instead of the ImageStatistics object, I strongly recommend you use the set of statistical Image methods:

Number Image.avgDev( [Number center=median()[, Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]]] )
Number Image.BWMV( [Number center=median()[, Number sigma=MAD()[, int k=9[, Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]]]]] )
Number Image.MAD( [Number center=median()[, Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]]] )
Number|Complex Image.maximum( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Point Image.maximumPosition( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.mean( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.meanOfSquares( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.median( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number|Complex Image.minimum( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Point Image.minimumPosition( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.modulus( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.norm( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.PBMV( [Number center=median()[, Number beta=0.2[, Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]]]] )
Number Image.Qn( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.Sn( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.stdDev( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )
Number Image.sumOfSquares( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )


As you see, these methods allow you to specify central values and reference dispersion estimates, with appropriate default values. The rect, firstChannel and lastChannel parameters allow you to define a subset of pixel samples for statistical calculations. When rect=0 (the default value), the current rectangular selection in the image, namely:

Rect Image.selectedRect

is used. When firstChannel=-1 (default value), the current first selected channel is used:

int Image.firstSelectedChannel

Similarly, if lastChannel=-1, the last selected channel is used:

int Image.lastSelectedChannel

You can control clipping with the following properties of the Image object:

Number Image.rangeClipHigh
Number Image.rangeClipLow
Boolean Image.rangeClippingEnabled


Range clipping is disabled by default.

Also take into account that by default the Statistics tool computes normalized dispersion values, that is, it computes the following normalized values:

avgDev * 1.2533
MAD    * 1.4826
BWMV   * 0.9901
PBMV   * 0.9709
Sn     * 1.1926
Qn     * 2.2191


Normalization is used to show values compatible with the standard deviation of a normal distribution (that is, for purely Gaussian-distributed data these values should be equal, or nearly equal, to the standard deviation). You can control normalization by enabling/disabling the corresponding checkbox on Statistics. Of course, the PJSR routines give unnormalized estimates.
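As a concrete check (plain JavaScript, not PJSR): for the ten-sample test image from the first post the unnormalized MAD is 0.25, so with 'Normalized' checked the Statistics tool should display 1.4826 times that:

```javascript
// Unnormalized vs normalized MAD for the ten-sample test image.
function median(a) {
   const s = a.slice().sort((x, y) => x - y);
   const n = s.length;
   return (n & 1) ? s[(n - 1) >> 1] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
}

const data = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
const m = median(data);                             // 0.45
const mad = median(data.map(x => Math.abs(x - m))); // 0.25 (unnormalized, as PJSR returns it)
const madn = 1.4826 * mad;                          // ≈ 0.37065 (normalized display value)
console.log(mad, madn);
```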

Quote
- Problem 2: All of median, avgDev, MAD and sqrt(PBMV) are different between the process and the script.  Why should this be? The other images produced matching results and so this test should too; I'm using the same code and same settings.

I can't say anything useful without taking a look at the image. Besides the different reference dispersion value used by BWMV, a possible cause might be different image selections. ImageStatistics.generate() and ImageStatistics( Image ) work on the current selections of the image. In all our tests the results of Statistics and PJSR/PCL routines are the same (not surprising since all variants ultimately use the same PCL routines).

Quote
Clipped: Min = 0.00190, Max = 0.49729, Median = 0.00672
Unclipped: Min = 0.00000, Max = 0.49729, Median = 0.00722

This is indeed very strange. I have made a lot of tests with different data sets and cannot reproduce this problem. Could you upload the image?


Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
Re: PJSR ImageStatistics vs Statistics process results
« Reply #2 on: 2014 December 07 14:47:01 »
Instead of the ImageStatistics object, I strongly recommend you use the set of statistical Image methods:

Thanks Juan.  It should be easy enough to substitute these methods in the script; I'm already using them for a couple of stats that aren't available in the ImageStatistics object, I just hadn't appreciated that the latter wasn't as up to date.

Quote
I can't say anything useful without taking a look at the image. Besides the different reference dispersion value used by BWMV, a possible cause might be different image selections. ImageStatistics.generate() and ImageStatistics( Image ) work on the current selections of the image. In all our tests the results of Statistics and PJSR/PCL routines are the same (not surprising since all variants ultimately use the same PCL routines).

The same channel and the whole image are used in both cases, so I guess the different reference dispersion value is the cause; again, that should be easy to fix by taking the step above.

Quote
Clipped: Min = 0.00190, Max = 0.49729, Median = 0.00672
Unclipped: Min = 0.00000, Max = 0.49729, Median = 0.00722
This is indeed very strange. I have made a lot of tests with different data sets and cannot reproduce this problem. Could you upload the image?

I'll try to upload it tomorrow.  I can't rule out human error, but I did repeat the tests several times, so I'm fairly sure the result is correct.  To be clear, this is only a problem with the ImageStatistics object for this one image; the other two tests performed as expected.  I can avoid it by making the changes suggested above, but I'd be curious to know why it happened, as it does seem odd.

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
Re: PJSR ImageStatistics vs Statistics process results
« Reply #3 on: 2014 December 08 11:55:55 »
You can control clipping with the following properties of the Image object:

Number Image.rangeClipHigh
Number Image.rangeClipLow
Boolean Image.rangeClippingEnabled


How do I get the count of pixel samples included in the range?  The ImageStatistics object has the .count property for this purpose.  I also see in the PCL documentation that GenericImage has a Count method for the same purpose, but it doesn't appear to be exposed as an Image.count() method in PJSR (it isn't listed in the Object Explorer and fails if tried in a script).

Offline mschuster

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1087
Re: PJSR ImageStatistics vs Statistics process results
« Reply #4 on: 2014 December 09 19:45:04 »
Bug on Win 7, 01.08.03.1123?

Image statistics range clipping appears to be sticky across script executions on the images of open views.

First PJSR execution sets range low, high, and enabled. Sn() returns correct value.

Second PJSR execution does not set range low, high, and enabled (i.e. leaves them defaulted). Sn() returns value equal to first execution rather than the range clipping disabled value.

Mike

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
Re: PJSR ImageStatistics vs Statistics process results
« Reply #5 on: 2014 December 10 01:12:12 »
Bug on Win 7, 01.08.03.1123?

Image statistics range clipping appears to be sticky across script executions on the images of open views.

First PJSR execution sets range low, high, and enabled. Sn() returns correct value.

Second PJSR execution does not set range low, high, and enabled (i.e. leaves them defaulted). Sn() returns value equal to first execution rather than the range clipping disabled value.

Mike

Mike,

Does it unstick if you close the image window and open a new one from file?

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PJSR ImageStatistics vs Statistics process results
« Reply #6 on: 2014 December 10 01:29:48 »
Quote
Image statistics range clipping appears to be sticky across script executions on open view.images.

This is made on purpose. Each View object keeps the current selections of its image. This includes the following properties:

int Image.firstSelectedChannel
int Image.lastSelectedChannel
Number Image.rangeClipHigh
Number Image.rangeClipLow
Boolean Image.rangeClippingEnabled
int Image.selectedChannel
Point Image.selectedPoint
Rect Image.selectedRect
Number Image.selectionPoint


For detailed information on these properties, see the PCL documentation for the ImageSelections and AbstractImage classes.

Images remember their selections because this allows working with images from the command line. For example, you can enter the following commands from the Process Console window:

j var image = ImageWindow.activeWindow.mainView.image
j image.resetSelections()
j image.rangeClipLow = 0.1
j image.rangeClipHigh = 0.9
j image.rangeClippingEnabled = true
j image.MAD()
j image.avgDev()
j image.selectedChannel = 1
j image.MAD()
j image.avgDev()


With this sequence we first reset all image selections to default values. Then we define a clipping range and enable it. Finally we calculate MAD and avgDev first for the whole image, and then for the green channel only.

A script which needs to work with default image selections should call Image.resetSelections() before starting to work with an image, as shown above.

Note that selections are mutable objects. This means that they can be changed without notifying the platform with View.beginProcess() ... View.endProcess().

Note also that all statistical methods of the Image object allow you to define optional rectangular and channel selections on each call. For example:

Number Image.median( [Rect rect=0[, int firstChannel=-1[, int lastChannel=-1]]] )

If no selections are specified the methods work with the current selections of the image. Range clipping selections must be specified by setting the rangeClipLow, rangeClipHigh and rangeClippingEnabled properties before calling these methods.

Finally, PJSR selections are completely different and have no influence on image selections seen and used by modules and the core application. This happens because PCL and PJSR are two independent abstraction layers.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PJSR ImageStatistics vs Statistics process results
« Reply #7 on: 2014 December 10 01:33:41 »
Does it unstick if you close the image window and open a new one from file?

Yes. When you close an image window, all of the PJSR objects associated with the window, starting with the corresponding ImageWindow object, are destroyed (and, eventually, garbage-collected). When you open the image again, a new ImageWindow object is created with default image selection properties.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
Re: PJSR ImageStatistics vs Statistics process results
« Reply #8 on: 2014 December 10 02:55:18 »
You can control clipping with the following properties of the Image object:

Number Image.rangeClipHigh
Number Image.rangeClipLow
Boolean Image.rangeClippingEnabled


How do I get the count of pixel samples included in the range?  The ImageStatistics object has the .count property for this purpose.  I also see in the PCL documentation that GenericImage has a Count method for the same purpose, but it doesn't appear to be exposed as an Image.count() method in PJSR (it isn't listed in the Object Explorer and fails if tried in a script).

Juan, sorry to hassle, but any chance of an answer to this?  I'm currently having to set up an ImageStatistics object and clipping just to get the count of included samples, which is a bit of a waste.  Also, the Image.variance() method doesn't exist.  Not a big deal, as I can square the stdDev, but as a completist it bugs me!

Offline mschuster

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1087
Re: PJSR ImageStatistics vs Statistics process results
« Reply #9 on: 2014 December 10 07:05:59 »
Quote
A script which needs to work with default image selections should call Image.resetSelections() before starting to work with an image, as shown above.

Thank you Juan. I think several of my scripts use image statistics and other selection-sensitive methods and assume unclipped data, but don't call resetSelections. I will check.

Mike

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PJSR ImageStatistics vs Statistics process results
« Reply #10 on: 2014 December 10 11:09:48 »
How do I get the count of pixel samples included in the range?

There should be an Image.count() method. I'll include it in the next version. In the meantime, here is a JavaScript implementation that you can use in your scripts:

Code: [Select]
Image.prototype.count = function( rect, firstChannel, lastChannel )
{
   var count = 0;
   
   this.pushSelections();

   if ( rect != undefined && rect.isRect )
      this.selectedRect = rect;
   if ( firstChannel != undefined && firstChannel >= 0 )
      this.firstSelectedChannel = firstChannel;
   if ( lastChannel != undefined && lastChannel >= 0 )
      this.lastSelectedChannel = lastChannel;
   
   if ( this.rangeClippingEnabled )
   {
      var x = new Vector;
      this.initPixelIterator();
      do
      {
         this.getPixelValue( x );
         for ( var i = 0; i < x.length; ++i )
         {
            var v = x.at( i );
            if ( v > this.rangeClipLow && v < this.rangeClipHigh )
               ++count;
         }
      }
      while ( this.nextPixel() );
   }
   else
   {
      count = this.numberOfSelectedSamples;
   }

   this.popSelections();

   return count;
}

And an example:

var image = ImageWindow.activeWindow.currentView.image;
image.resetSelections();
image.rangeClipLow = 0.1;
image.rangeClipHigh = 0.75;
image.rangeClippingEnabled = true;
console.writeln( "count = ", image.count() );


Quote
Also the Image.variance() method doesn't exist. Not a big deal as I can square the StdDev but as a completist it bugs me!

Here we go:

Code: [Select]
Image.prototype.variance = function( rect, firstChannel, lastChannel )
{
   if ( lastChannel == undefined )
      lastChannel = -1;
   if ( firstChannel == undefined )
      firstChannel = -1;
   if ( rect == undefined )
      rect = new Rect;
   var sigma = this.stdDev( rect, firstChannel, lastChannel );
   return sigma*sigma;
}

Example (continued from the previous example):

console.writeln( format( "variance = %.7g", image.variance() ) );
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline IanL

  • PixInsight Addict
  • ***
  • Posts: 116
    • The Imaging Toolbox
Re: PJSR ImageStatistics vs Statistics process results
« Reply #11 on: 2014 December 11 00:54:07 »
Thanks for this Juan, far more than I was expecting and much appreciated.

Offline mschuster

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1087
Re: PJSR ImageStatistics vs Statistics process results
« Reply #12 on: 2014 December 11 07:12:55 »
Juan, are the statistics normalization factors available anywhere? I can't find them, and they would be nice to have; I coded my own for Sn() from the original paper. Also, unrelated: I am using image.setPixels(array, [rect]) on a grayscale image, but its definition does not appear in the Object Explorer. Is it deprecated now?

Mike