Weighting and normalising images from different cameras

AstrGerdt

Well-known member
Hello,

I could use a little help on how to combine images from two different DSLRs.

I am shooting with a Canon EOS 200d and a Nikon D7500. Both have different resolutions and sensor sizes. On the 200d I shoot at ISO 800 to 1600, on the D7500 I shoot at ISO 400. The 200d has an offset of 2000 ADU, the D7500 of 400 ADU.

My problem occurs when weighting the images.

PixInsight weights the 200d images significantly higher/better than the D7500's, when in fact the D7500 has lower readout noise, lower dark current, and higher quantum efficiency, and thus produces the better images.

My explanation for this is that the 200d's images seem to have more signal due to the higher offset and are therefore weighted better.

But simply correcting the difference in offset is not enough as a solution; the gain/ISO also has to be taken into account.

Is there any way that PixInsight can compensate for both additive and multiplicative effects at the same time when weighting, and also when normalizing before stacking? Or do I have to approach this completely differently?

CS Gerrit
 
Both have different resolutions and sensor sizes.

Then for the same optics and acquisition conditions, the fields of view and the represented stars can be significantly different in frames acquired with both cameras. In such a case you cannot use the PSF Signal Weight and PSF SNR estimators to compare images acquired with both cameras. This is because these estimators are absolute and depend on the star populations present in each measured frame, as well as on measured PSF dimensions and shapes in each frame (the latter especially for PSFSW). If these populations and/or parameters are significantly different for reasons other than natural or focus variations, the estimators will provide incompatible values.

For comparisons of incompatible images in terms of FOV and pixel dimensions, the best option is the PSF Scale SNR estimator. This is a relative estimator based on robust noise and scale evaluation. Although differences in PSF shapes and dimensions can have a small impact also in this case (especially if PSF modeling is much better for one of the cameras because of a much higher resolution), PSF Scale SNR is robust for this application, provided you use the same normalization reference frame for the entire data set.
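As a concrete illustration of why such a relative estimator is immune to offsets: PSF fluxes are measured above the fitted local background, so a constant pedestal cancels out, and the relative scale factor reduces to a ratio of mean PSF fluxes. The sketch below (Python/NumPy, with made-up flux and noise numbers; the exact weight expression is an illustrative assumption, not the published formula) shows the idea:

```python
import numpy as np

# Hypothetical background-subtracted PSF flux measurements (ADU) for
# the same stars in the normalization reference and in a target frame.
ref_fluxes = np.array([12000.0, 8500.0, 4300.0])
target_fluxes = np.array([6000.0, 4250.0, 2150.0])  # target is half as bright

# Relative scale factor: ratio of mean PSF flux estimates for the
# normalization reference and target images.
s = ref_fluxes.mean() / target_fluxes.mean()  # 2.0 here

# Robust noise standard deviation of the target frame (a made-up value
# standing in for a real noise evaluation).
sigma = 15.0

# Relative to the reference, the target's signal scales as 1/s, so a
# squared SNR-like weight can be formed as 1/(s*sigma)**2. This exact
# expression is an illustrative assumption, not the published formula.
weight = 1.0 / (s * sigma) ** 2
```

Because the PSF fluxes are background-subtracted, adding any constant pedestal to either frame leaves s unchanged, which is why additive terms do not affect this kind of weighting.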

My explanation for this is that the 200d's images seem to have more signal due to the higher offset and are therefore weighted better.

Not at all. All of the estimators based on PSF evaluation are completely independent of additive terms. I assume you are using the latest version 1.8.9-1 of PixInsight. The differences you are observing are described and justified in the preceding paragraphs.

Is there any way that PixInsight can compensate for both additive and multiplicative effects at the same time when weighting and also normalizing before stacking?

As noted, additive terms are irrelevant for image weighting. Our LocalNormalization process compensates for both additive and multiplicative differences among frames by means of locally adaptive linear normalization functions.
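A minimal sketch of what a locally adaptive linear normalization can look like, assuming NumPy and a simplified tile-wise least-squares fit (the actual LocalNormalization algorithm builds smooth scale and offset surfaces; `local_linear_normalize` and its tile size are illustrative, not the real implementation):

```python
import numpy as np

def local_linear_normalize(target, reference, tile=64):
    """Fit reference ~= b*target + a on each tile and apply the fit,
    so both multiplicative (b) and additive (a) differences are
    compensated locally. Simplified sketch, not the real algorithm."""
    out = np.empty(target.shape, dtype=float)
    h, w = target.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            t = target[i:i + tile, j:j + tile].astype(float)
            r = reference[i:i + tile, j:j + tile].astype(float)
            b, a = np.polyfit(t.ravel(), r.ravel(), 1)  # slope, intercept
            out[i:i + tile, j:j + tile] = b * t + a
    return out
```

A real implementation would also smooth or interpolate the per-tile coefficients to avoid seams at tile boundaries; here each tile is corrected independently for clarity.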
 
Hi Juan,

thanks for your (as always) competent and fast answer. That clarified a lot of things for me.

Tonight I am going to take some more images under better circumstances, then I will try your solution.

CS Gerrit
 
Hi Juan,

after reading about the PSF Scale SNR estimator, I still don't fully understand how that metric can take additive differences between the images into account.

In the post where you announced the new estimator, you wrote:
where s is the relative scale factor computed by the LocalNormalization process and σ is the standard deviation of the noise. The relative scale factor is the ratio of mean PSF flux estimates for the normalization reference and target images

While it makes perfect sense to me that there must be some kind of adaptation to account for brightness differences between the images, I can't see how a ratio can take additive brightness differences between two frames into account.

Let's say image one has an offset of 100 and a median of 500. Image two was taken with a different gain (yielding more ADU per e-) and has an offset of 50 and a median of 1000.

In this example it is not enough to match the average brightness by simply multiplying image one by a factor of 2; the differing offsets must also be subtracted. And I can't see how a single scale factor can account for that.
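For what it's worth, the numbers in this example do show that a single multiplicative factor cannot work, while a linear function with both a scale and an offset term can. A small check (Python; the numbers are the hypothetical ones from this example):

```python
# Hypothetical values from the example above.
offset1, median1 = 100.0, 500.0   # image one
offset2, median2 = 50.0, 1000.0   # image two

# Purely multiplicative matching: scaling image one by 2 matches the
# medians, but the offsets no longer agree.
k = median2 / median1             # 2.0
assert k * median1 == median2     # medians match
assert k * offset1 != offset2     # 200 != 50: offsets do not

# A linear function y = m*x + c can match both: remove image one's
# offset, rescale the signal, then re-apply image two's offset.
m = (median2 - offset2) / (median1 - offset1)  # 950/400 = 2.375
c = offset2 - m * offset1                      # -187.5
assert m * offset1 + c == offset2
assert m * median1 + c == median2
```

This m*x + c form is exactly the kind of combined additive-plus-multiplicative correction that a linear normalization function can apply, which a bare scale factor cannot.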

To be fair, I really don't know much about the current version of LN. Most of what I can find about LN still refers to the old version, which seemed to take only additive effects into account.

Am I getting something wrong, or do I just misunderstand what LN does?

CS, and thanks for your help
Gerrit
 