Hi Mike,
Comparisons between unscaled noise estimates from different images are generally meaningless. To compare noise values, they must first be scaled to make them statistically compatible. You can do this with a few simple JavaScript statements executed from the Process Console window.
- Open the two images and, for the sake of simplicity, rename them to "A" and "B".
- Run the following commands from the console:
j var imageA = View.viewById( "A" ).image
j var imageB = View.viewById( "B" ).image

Now we can access the images directly through the imageA and imageB variables.
- Execute these commands to get the scaled noise estimates:
j imageA.noiseMRS()[0]/Math.sqrt( imageA.BWMV() )/0.991
j imageB.noiseMRS()[0]/Math.sqrt( imageB.BWMV() )/0.991

The estimates will be written to the console, expressed in units of statistical dispersion. Compare them to about three significant digits.
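To see why the division by a scale estimate matters, here is a small standalone sketch in plain JavaScript (runnable outside PixInsight; the stdev and mad helpers are illustrative, not part of the PJSR API). If one image is just a rescaled version of another, raw dispersion-based noise figures differ, but the scaled figures agree:

```javascript
// Illustrative helpers (not PJSR): median, sample standard deviation,
// and the median absolute deviation (MAD), a robust scale estimate.
function median( a )
{
   let s = a.slice().sort( (x, y) => x - y );
   let n = s.length;
   return (n & 1) ? s[n >> 1] : (s[n/2 - 1] + s[n/2])/2;
}

function stdev( a )
{
   let m = a.reduce( (u, v) => u + v, 0 )/a.length;
   return Math.sqrt( a.reduce( (u, v) => u + (v - m)*(v - m), 0 )/(a.length - 1) );
}

function mad( a )
{
   let m = median( a );
   return median( a.map( x => Math.abs( x - m ) ) );
}

// Pretend image B is image A multiplied by a constant (different
// exposure or intensity scaling). Any raw dispersion-based noise
// figure for B is then twice that of A, even though the images carry
// exactly the same information.
let A = [ 0.10, 0.12, 0.11, 0.15, 0.13, 0.09, 0.14, 0.12 ];
let B = A.map( x => 2*x );

console.log( stdev( A ), stdev( B ) );                    // B's raw figure is twice A's
console.log( stdev( A )/mad( A ), stdev( B )/mad( B ) );  // scaled figures coincide
```

The same cancellation is what the division by Math.sqrt( BWMV() ) achieves in the console commands above: multiplying an image by a constant multiplies both the MRS noise estimate and the scale estimate by that constant, so their ratio is comparable across images.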
In the last two expressions above, I have used the square root of the
biweight midvariance as a scale estimate. You can use other robust estimates such as MAD, Qn or Sn, with similar results. Avoid using non-robust estimators such as the standard deviation, which may lead to wrong results.
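The robustness point can be demonstrated with a short self-contained sketch (plain JavaScript; the helper functions are illustrative, not PJSR calls). A single extreme value, such as a hot pixel or a bright star, barely moves the MAD but inflates the standard deviation:

```javascript
// Illustrative helpers (not PJSR): median, sample standard deviation,
// and the median absolute deviation (MAD).
function median( a )
{
   let s = a.slice().sort( (x, y) => x - y );
   let n = s.length;
   return (n & 1) ? s[n >> 1] : (s[n/2 - 1] + s[n/2])/2;
}

function stdev( a )
{
   let m = a.reduce( (u, v) => u + v, 0 )/a.length;
   return Math.sqrt( a.reduce( (u, v) => u + (v - m)*(v - m), 0 )/(a.length - 1) );
}

function mad( a )
{
   let m = median( a );
   return median( a.map( x => Math.abs( x - m ) ) );
}

let clean = [ 0.10, 0.11, 0.12, 0.12, 0.13, 0.14 ];
let withOutlier = clean.concat( [ 5.0 ] );  // one saturated pixel

console.log( mad( clean ), mad( withOutlier ) );      // unchanged: 0.01 vs 0.01
console.log( stdev( clean ), stdev( withOutlier ) );  // inflated by orders of magnitude
```

This is why a non-robust scale estimate can make two otherwise similar images look very different: the denominator of the scaled noise estimate would be driven by stars and defects rather than by the bulk of the pixel distribution.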
I thought the lowest noise was the one with the smallest sK number, but I am confused as to why N is larger.
N is the number of pixels identified as noise by the MRS noise evaluation algorithm. The differences are not significant (they just indicate how the noise is distributed in each image), unless one of the images shows a very low value of N, say well below 1% of the total pixels, in which case the noise estimate should be questioned.
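As a quick sanity check along the lines of the paragraph above, you can turn N into a fraction of the total pixel count and flag suspiciously low values. This is a sketch with made-up numbers; the function name is illustrative, not a PJSR call:

```javascript
// Fraction of pixels the MRS algorithm identified as noise.
// A very low fraction (well below 1%) suggests the noise estimate
// should be questioned.
function noisePixelFraction( noisePixels, totalPixels )
{
   return noisePixels/totalPixels;
}

let total = 4000*3000;  // a hypothetical 12-megapixel frame
let frac = noisePixelFraction( 9.8e6, total );
console.log( (100*frac).toFixed( 1 ) + "% of pixels used; estimate reliable:",
             frac >= 0.01 );
```

In this made-up case about 82% of the pixels contribute to the estimate, which is perfectly normal; only values far below the 1% threshold would be a warning sign.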
The difference between these two images is just the settings used in Drizzle Integration and I want to know which settings to go with.
Comparison of (scaled) noise estimates can be one of the elements guiding your decisions. However, in the case of drizzle (and in image integration in general) you should also compare PSF estimates with the DynamicPSF tool.