I'm working on a script that uses the PJSR ImageStatistics object, and to debug it I am comparing its results with those of the 'Statistics' process applied to the same images. I am getting a few discrepancies and I'm not sure why. Attached is a side-by-side comparison of the results from different images:
1. First (left side) I created a 10 wide x 1 high image and set the pixels to 0.0, 0.1 ... 0.9 so that I had a known starting point. The results are mostly what I would have expected.
If I uncheck the 'Normalized' and 'Unclipped' checkboxes in the Statistics process, I get:
- Count 90% (rejects the pixel with a value of 0)
- Min 0.1, Max 0.9
- Median 0.5
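For reference, the clipped figures above can be reproduced outside PixInsight with a few lines of plain JavaScript. This is only a sketch of what I assume rejection does - discard pixels at or below .rejectionLow and at or above .rejectionHigh (the low end is consistent with the 0-valued pixel being dropped; I can't confirm the high-end behaviour from this data since no pixel equals 1):

```javascript
// Plain-JavaScript sketch of the synthetic test (runs outside PixInsight).
const pixels = Array.from({ length: 10 }, (_, i) => i / 10); // 0.0, 0.1 ... 0.9

function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = s.length >> 1;
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Assumed rejection rule: keep only rejectionLow < x < rejectionHigh.
const clipped = pixels.filter(x => x > 0 && x < 1); // drops the 0-valued pixel
console.log(clipped.length / pixels.length);        // 0.9 -> Count 90%
console.log(Math.min(...clipped), Math.max(...clipped)); // 0.1 0.9
console.log(median(clipped));                       // 0.5
```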
2. I repeated the test with my script: I passed the same image to an instance of the ImageStatistics object, set the .rejectionLowEnabled and .rejectionHighEnabled properties to true, and set .rejectionLow = 0 and .rejectionHigh = 1. I assume this reproduces the behaviour of the Statistics process as described above. Indeed I get the same results for the three stats above and the others listed - so far so good - except for:
Problem 1: The sqrt of BWMV from the Statistics process = 0.27254 but from the ImageStatistics object = 0.26439. Why is this? Every other statistic (including all the options not shown in my summary image) produces matching results.
(Note: I am aware that the Statistics process reports the square roots of BWMV and PBMV - it says so in the results, though not in the checkboxes, which is potentially misleading. I am taking the square roots of the BWMV and PBMV values generated by the ImageStatistics object myself, so that isn't the problem.)
Next I check the 'Unclipped' checkbox in the Statistics process and get these results:
- Count 100% (includes the pixel with a value of 0)
- Min 0.0, Max 0.9
- Median 0.45
I repeated the test with my script, this time with .rejectionLowEnabled and .rejectionHighEnabled = false. I still set .rejectionLow = 0 and .rejectionHigh = 1 (just part of the script defaults); I assume the rejection thresholds are ignored when rejection is disabled, and that this should reproduce the Statistics process with 'Unclipped' checked. Indeed I get the same results from the Statistics process and the ImageStatistics object (as I would hope), except for sqrt(BWMV), which again differs between the two (0.29955 vs 0.29261) - Problem 1 again.
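For what it's worth, I also tried computing BWMV directly from the usual textbook definition - my assumption of what both implementations intend: u_i = (x_i - median) / (9 * MAD), with zero weight where |u_i| >= 1. On the synthetic data this reproduces the Statistics process figures (0.27254 clipped, 0.29955 unclipped) rather than the ImageStatistics figures:

```javascript
// Plain-JavaScript sanity check (runs outside PixInsight).
// Biweight midvariance per the usual definition - an assumption on my part:
//   u_i = (x_i - M) / (9 * MAD), terms with |u_i| >= 1 get zero weight, and
//   BWMV = n * sum(d_i^2 (1 - u_i^2)^4) / (sum((1 - u_i^2)(1 - 5 u_i^2)))^2
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = s.length >> 1;
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function bwmv(values) {
  const m = median(values);
  const mad = median(values.map(x => Math.abs(x - m)));
  const cut = 9 * mad;
  let num = 0, den = 0;
  for (const x of values) {
    const u = (x - m) / cut;
    if (Math.abs(u) >= 1) continue; // zero weight outside the cutoff
    const w = 1 - u * u;
    num += (x - m) * (x - m) * w * w * w * w;
    den += w * (1 - 5 * u * u);
  }
  return values.length * num / (den * den);
}

const clipped = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
const unclipped = [0.0].concat(clipped);
console.log(Math.sqrt(bwmv(clipped)).toFixed(5));   // 0.27254 - the process's clipped value
console.log(Math.sqrt(bwmv(unclipped)).toFixed(5)); // 0.29955 - the process's unclipped value
```

If this definition is right, the process's numbers are the 'textbook' ones, and the discrepancy must come from whatever ImageStatistics does differently (tuning constant, normalisation, or similar).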
I next repeated the same tests on a real image of the Jellyfish Nebula (right side of the attachment), processed to the non-linear stage so that its pixels cover the whole 0 .. 1 real number range. I present results only for the red channel in the interests of brevity. The results are again as I would expect, except that the value of sqrt(BWMV) does not match between the Statistics process and the ImageStatistics object - Problem 1 again. (Note that the values for 'minimum' under the 'clipped' heading show as 0.0000, which makes it look as though rejection is not working; in fact the true values are just very small numbers greater than zero that display as zero at this precision.)
3. Finally I performed the same set of tests on a linear, non-debayered image of M33 (centre). I think the fact that this is a CFA image can be ignored as a source of problems - it is just a single-channel greyscale image, after all (not a three-channel RGB image). This time the tests produce some very odd-looking results - and I have double-checked and re-run them to make sure I haven't mixed up settings or images:
I ran the tests with 'Unclipped' unchecked in the Statistics process, and with the ImageStatistics object's .rejectionLowEnabled and .rejectionHighEnabled set to true, .rejectionLow = 0 and .rejectionHigh = 1, so I'd again expect the two methods to agree:
- Count is 99.99966% with a minimum of 0.00190 and a maximum of 0.49729 in both cases, suggesting that the same set of pixels has been used by the process and the script.
- Problem 2: Median, avgDev, MAD and sqrt(PBMV) all differ between the process and the script. Why should this be? The other images produced matching results, and I'm using the same code and settings. I suspect the root cause lies with the Median: if that is calculated differently, then avgDev, MAD, BWMV and PBMV would necessarily differ too, since they all depend on the value of the Median.
- sqrt(BWMV) is also different between the process and the script - I can't tell whether this is purely Problem 1 as above, purely Problem 2 (the difference in Median), or a combination of the two.
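To illustrate why I suspect the Median: avgDev and MAD are both defined relative to it, so any difference in the Median necessarily drags them along. A trivial sketch (plain JavaScript, with made-up numbers rather than the actual M33 data):

```javascript
// avgDev and MAD as functions of the centre they are measured from.
// The two centres below are the two Median values from my M33 tests; the
// sample data itself is invented purely for illustration.
function avgDev(values, center) {
  return values.reduce((s, x) => s + Math.abs(x - center), 0) / values.length;
}

function mad(values, center) {
  const d = values.map(x => Math.abs(x - center)).sort((a, b) => a - b);
  const mid = d.length >> 1;
  return d.length % 2 ? d[mid] : (d[mid - 1] + d[mid]) / 2;
}

const sample = [0.001, 0.003, 0.006, 0.007, 0.012, 0.02, 0.05, 0.1, 0.497];
console.log(avgDev(sample, 0.00672), mad(sample, 0.00672));
console.log(avgDev(sample, 0.00722), mad(sample, 0.00722)); // both shift with the centre
```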
When I run the tests with 'Unclipped' checked (or with .rejectionLowEnabled and .rejectionHighEnabled set to false), I get a similar result to the previous tests: all the figures match between the process and the script except sqrt(BWMV), which differs as before.
- What really doesn't make sense to me at all is comparing the Median values from the ImageStatistics object between the clipped and unclipped scenarios:
Clipped: Min = 0.00190, Max = 0.49729, Median = 0.00672
Unclipped: Min = 0.00000, Max = 0.49729, Median = 0.00722
This just looks wrong to me. In the unclipped scenario we've added a bunch of zero-valued pixels to the set, yet the Median has increased when it should have decreased or stayed the same! (Let's call this Problem 3.) Note that the maximum is unchanged (I have verified that it is identical to a precision of 17 digits), so the maximum cannot be pulling the Median towards the high end despite the zeroes added at the low end - so what gives? For the BWMV discrepancies I might accept 'rounding errors' of some sort, but a Median should not be subject to such problems: it is just picking a value from a set (or, at worst, averaging the two middle values).
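To double-check that reasoning: for an exact selection-based median, appending zero-valued samples can never increase the result. A quick brute-force check in plain JavaScript (random data rather than the real image):

```javascript
// Brute-force check that appending zeros never increases an exact median.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = s.length >> 1;
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

let ok = true;
for (let trial = 0; trial < 1000; trial++) {
  const data = Array.from({ length: 50 + (trial % 7) }, () => Math.random());
  const withZeros = data.concat(new Array(1 + (trial % 5)).fill(0));
  if (median(withZeros) > median(data)) ok = false; // should never happen
}
console.log(ok); // true
```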
Any advice gratefully received!