Stacking images from Mono and OSC cameras

chandrainsky

Well-known member
I want to know whether it is possible to stack images from Mono and OSC cameras together, in my case a ZWO1600 Mono and a ZWO2600 OSC. If this is supported in PI, are there specific settings during stacking? Or do we process them independently and merge the images later? Even in the latter case, I would guess there should be settings somewhere, since the pixel size, number of pixels, etc. are all very different, quite apart from the fact that one is mono and the other is not.
 
I did something similar with a DSLR and mono RGB. Workflow for the DSLR was IC ---> CC ---> Debayer, and for the mono, IC ---> CC. Then StarAlignment for everything, the NSG script, and II for the DSLR and for each filter. Split the DSLR into RGB, then combine the separated DSLR_RGB channels with the MONO_RGB channels (I used PM with something like "med(DSLR_R, MONO_R)") and finally ChannelCombination. (I think this is what I did.)
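Mechanically, the combine step at the end looks like the following numpy sketch. This is purely illustrative, not a PixInsight process: it assumes both stacks are already registered to the same pixel grid, and all the array names are made up.

```python
import numpy as np

# Hypothetical, already-registered stacks on the same pixel grid,
# linear values in 0..1. Tiny frames just for illustration.
rng = np.random.default_rng(0)
h, w = 4, 4
dslr_rgb = rng.random((h, w, 3))  # debayered DSLR stack, 3 channels
mono_r = rng.random((h, w))       # mono stacks, one per filter
mono_g = rng.random((h, w))
mono_b = rng.random((h, w))

# Equivalent of PixelMath "med(DSLR_R, MONO_R)" per channel.
# Note: with only two inputs, the median is simply their mean.
r = np.median(np.stack([dslr_rgb[..., 0], mono_r]), axis=0)
g = np.median(np.stack([dslr_rgb[..., 1], mono_g]), axis=0)
b = np.median(np.stack([dslr_rgb[..., 2], mono_b]), axis=0)

# Equivalent of ChannelCombination: reassemble an RGB image.
combined = np.stack([r, g, b], axis=-1)
print(combined.shape)  # (4, 4, 3)
```

With only two source images per channel, an average (or a noise-weighted average) does the same job as the median; the median only starts rejecting outliers with three or more inputs.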
 
OK. The difference in image scale, pixel size etc., is automatically taken care of?
 
With StarAlignment, all those things are taken care of. I think binning is sorted out as well ... I don't bin any images, so I'm not sure.
 
A resize can invalidate the stored NOISExx data. A thought experiment, using a camera with negligible read noise, illustrates this:

Suppose we took a couple of 1-minute images under identical conditions. One used 1x1 binning, the other 2x2 binning. Both images detected approximately the same number of photons, and so they should be weighted the same. However, the noise evaluation is per pixel, not per solid angle of sky (see https://en.wikipedia.org/wiki/Solid_angle), so the noise estimate for the 2x2 binned image will differ from the 1x1 image: each binned pixel has four times the signal and twice the noise (shot noise scales as the square root of the signal).
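The "four times the signal, twice the noise" claim is easy to check with a quick Poisson simulation (numpy; the flux value is arbitrary, and read noise is ignored as in the thought experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
flux = 100.0     # mean photons per 1x1 pixel per exposure (made up)
n = 1_000_000    # number of pixels simulated

# 1x1 binning: each pixel sees Poisson(flux).
px1 = rng.poisson(flux, n)

# 2x2 binning: each binned pixel sums four 1x1 pixels -> Poisson(4*flux).
px2 = rng.poisson(4 * flux, n)

print(px2.mean() / px1.mean())  # ~4: four times the signal
print(px2.std() / px1.std())    # ~2: twice the noise (sqrt of the signal)
```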

We have calibrated both the 1x1 and the 2x2 images, and their noise estimate has been written into the FITS header. The noise estimate is now fixed. We now register the 1x1 image to the 2x2 image. So now both images are 'binned' 2x2. However, if we now use NOISExx to calculate the signal to noise ratio, we will get different answers for the two images.

This will affect all algorithms that use NOISExx headers. This is not a criticism of NOISExx; it normally works extremely well. What would happen in this situation if we did not use NOISExx? Calculating the noise on an upscaled image would be very problematic, because the original lower resolution has in effect smoothed the noise. The new noise estimate would not be comparable with the noise estimates of the higher resolution images.

It should be possible to compensate for this by either multiplying the resized NOISExx headers by a correction factor, or by using PixelMath to multiply the resized images by a correction factor. Multiplying the image works because although it affects the signal and noise equally, the NOISExx headers are fixed and don't know about the change.
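Here is a numpy toy model of that argument (this is my own bookkeeping sketch, not PixInsight's actual implementation): a 1x1 frame is registered onto a 2x2 grid, the header-based SNRs then disagree, and multiplying the resized image by the scale ratio brings them back into agreement.

```python
import numpy as np

rng = np.random.default_rng(2)
flux = 100.0       # made-up mean photons per 1x1 pixel
h = w = 512

# 1x1-binned frame: Poisson shot noise, negligible read noise.
img1 = rng.poisson(flux, (h, w)).astype(float)
noise1 = img1.std()        # the "NOISExx" value fixed at calibration (~10)

# 2x2-binned frame of the same sky: each pixel collects 4x the photons.
img2 = rng.poisson(4 * flux, (h // 2, w // 2)).astype(float)
noise2 = img2.std()        # ~20

# Register img1 onto the 2x2 grid: average each 2x2 block (0.5x resize).
img1_resized = img1.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Header-based SNRs now disagree, although both frames saw the same sky:
snr1 = img1_resized.mean() / noise1   # ~100 / 10 = 10
snr2 = img2.mean() / noise2           # ~400 / 20 = 20

# Correction: multiply the resized image by the inverse linear scale (2).
# The fixed header doesn't "see" this, so the header-based SNRs now agree.
snr1_fixed = (2 * img1_resized).mean() / noise1   # ~20
```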
 
I tried with the R channel from 2600 MC Pro and Ha from 1600MM. Got a funny looking result after running the same thing as you mentioned in PM :)
 

Attachments

  • Combine 1600 and 2600.JPG (48.5 KB)
It looks like the images did not align correctly...
Can you elaborate on the expected workflow please? Never tried this before, so not sure if I am missing something very basic. The following is what I did:
1. Extract RGB channels from the stacked RGB image from 2600MC Pro
2. Processed the Ha image from 1600mm Pro
3. Aligned the R channel from step 1 above and Ha from step 2 above using Registration in PI
4. Stacked the aligned images in PI
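A "funny looking" combine usually means the two frames were not on the same pixel grid when they were stacked. One quick sanity check after step 3 is that both aligned frames have identical dimensions and essentially zero residual shift. The sketch below (numpy; not a PixInsight tool) estimates the residual translation between two same-size frames with phase correlation, demonstrated on a synthetic pair:

```python
import numpy as np

def residual_shift(a, b):
    """Estimate the integer (dy, dx) translation between two same-size
    frames via phase correlation; a well-registered pair gives (0, 0)."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the far half of the array back to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))  # simulate a (3, -5) px offset
print(residual_shift(shifted, ref))           # (3, -5)
```

If the reported shift is large, or the frames don't even share dimensions, the registration step failed and the stack will show the kind of doubled result in the attachment.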
 
A resize can invalidate the stored NOISExx data. A thought experiment, using a camera with negligible read noise, illustrates this:
...
Multiplying the image works because although it affects the signal and noise equally, the NOISExx headers are fixed and don't know about the change.

But this information is known after registration, correct? The transformation matrix encodes the change of scale relative to the reference frame?
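For an affine registration the scale is indeed recoverable from the matrix: the absolute determinant of its 2x2 linear part is the area scale, and its square root is the linear scale. A numpy sketch (the matrix values are made up: a 10-degree rotation plus a uniform 0.5x scale, as when a 1x1 frame is mapped onto a 2x2 grid):

```python
import numpy as np

# Hypothetical 2x3 affine matrix reported by a registration step.
theta = np.deg2rad(10.0)
s = 0.5
M = np.array([[s * np.cos(theta), -s * np.sin(theta), 12.3],
              [s * np.sin(theta),  s * np.cos(theta), -4.7]])

# Linear scale from the 2x2 linear part (translation column is irrelevant):
scale = np.sqrt(abs(np.linalg.det(M[:, :2])))   # -> 0.5

# The multiplicative correction factor discussed above would then be the
# inverse linear scale (here 2), applied to the resized image or headers.
correction = 1.0 / scale
print(scale, correction)
```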
As you can see, this is a more common issue than you might initially imagine. :)
-adam
 