OK. The difference in image scale, pixel size etc., is automatically taken care of?
A resize can invalidate the stored NOISExx data. A thought experiment, using a camera with negligible read noise, illustrates this:
Suppose we take two 1-minute images in identical conditions, one with 1x1 binning and the other with 2x2 binning. Both images detect approximately the same number of photons, so they should be weighted the same. However,
the noise evaluation is per pixel, not per solid angle of sky (see
https://en.wikipedia.org/wiki/Solid_angle), so the noise estimate for the 2x2-binned image will differ from that of the 1x1 image: each binned pixel has four times the signal but only twice the noise, because shot noise scales as the square root of the signal.
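The per-pixel arithmetic can be checked with a small simulation. The sketch below (Python with NumPy; the 100-photon sky level and the image size are arbitrary assumptions, not values from any real frame) bins a shot-noise-limited frame 2x2 in software and compares the per-pixel statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-minute exposure with negligible read noise:
# a flat sky of ~100 photons per unbinned (1x1) pixel.
img_1x1 = rng.poisson(100, size=(512, 512)).astype(float)

# Software 2x2 binning: sum each 2x2 block of pixels.
img_2x2 = img_1x1.reshape(256, 2, 256, 2).sum(axis=(1, 3))

# Per-pixel statistics: the binned image has ~4x the signal
# but only ~2x the noise (the square root of the signal).
print(img_1x1.mean(), img_1x1.std())   # ~100, ~10
print(img_2x2.mean(), img_2x2.std())   # ~400, ~20
```

The same total light is present in both arrays; only the per-pixel bookkeeping differs, which is exactly what a per-pixel noise estimate sees.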
We calibrate both the 1x1 and the 2x2 images, and their noise estimates are written into the FITS headers. Those estimates are now fixed. Next we register the 1x1 image to the 2x2 image, so both images are effectively 'binned' 2x2. However, if we then use NOISExx to calculate the signal-to-noise ratio, we get different answers for the two images, even though they describe the same sky.
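A rough numerical sketch of the mismatch, again in Python with NumPy. Block averaging stands in for the real interpolation-based registration, and the photon counts are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1x1 frame and its 2x2-binned counterpart, as in the
# thought experiment (values are hypothetical).
img_1x1 = rng.poisson(100, size=(512, 512)).astype(float)
img_2x2 = rng.poisson(400, size=(256, 256)).astype(float)

# Noise estimates written to the FITS headers at calibration time,
# before any resize; these are now fixed.
noise_1x1 = img_1x1.std()   # ~10
noise_2x2 = img_2x2.std()   # ~20

# Register the 1x1 image to the 2x2 geometry; 2x2 block averaging
# is a crude stand-in for the real resampling.
resampled = img_1x1.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# SNR computed from the stale header values now disagrees, even
# though both frames describe the same sky with the same exposure.
snr_resampled = resampled.mean() / noise_1x1   # ~100 / 10 = ~10
snr_2x2       = img_2x2.mean()   / noise_2x2   # ~400 / 20 = ~20
```

The resampled frame looks half as good by this measure, purely because its stored noise value still describes the pre-resize geometry.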
This affects every algorithm that uses the NOISExx headers. It is not a criticism of NOISExx; they normally work extremely well. What would happen in this situation if we did not use NOISExx? Calculating the noise on an upscaled image would be very problematic, because the original lower resolution has in effect smoothed the noise, so the new noise estimate would not be comparable with those of the higher-resolution images.
It should be possible to compensate for this by either multiplying the resized NOISExx headers by a correction factor, or by using PixelMath to multiply the resized image by a correction factor. Multiplying the image works because, although the scaling affects the signal and the true noise equally, the NOISExx headers are fixed and do not know about the change.
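As a sketch of the compensation, again with block averaging standing in for the resample and a hypothetical factor of 2 for this specific 2x2 case, both routes restore a consistent signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical resized frame: a 1x1 image block-averaged down to the
# 2x2 scale, keeping the stale noise estimate from before the resize.
img_1x1 = rng.poisson(100, size=(512, 512)).astype(float)
noise_hdr = img_1x1.std()                      # NOISExx written pre-resize, ~10
resized = img_1x1.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Averaging 4 pixels halves the true per-pixel noise, but the stored
# NOISExx value is fixed. Either scale the header down, or scale the
# image up (the PixelMath route):
k = 2.0                                        # factor for this 2x2 case

corrected_noise = noise_hdr / k                # route 1: correct the header
corrected_image = resized * k                  # route 2: correct the image

# Both routes give the same, consistent signal-to-noise ratio.
snr_1 = resized.mean() / corrected_noise       # ~100 / 5  = ~20
snr_2 = corrected_image.mean() / noise_hdr     # ~200 / 10 = ~20
```

The image route works only because the header is frozen: scaling the pixels moves the signal relative to the stored NOISExx value, even though it cannot change the true per-pixel signal-to-noise ratio.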