Dear Steve,
Hi - I've done some experimenting with this while trying to combine data from three different scopes, each with its own camera etc. I found that Image Integration was NOT the best way to combine the data.
Of the three cameras we had, one had a much lower gain (ADU/electron), and hence a lower signal. Image Integration gave those frames a much higher weight than the other two, despite the fact that the S/N was much the same.
What seemed to be happening was that since the signal was lower, the variance in the signal was also lower, which II took to mean that the image was higher quality (more photons) and hence increased the weight of those subs. This is just a guess, but even if it is not correct, the effect was the same - we were getting too high a weight from the cameras with the lower ADU/electron.
Instead, we developed an alternative approach - the idea is to give equal weight to each counted photon (electron). With this approach you don't have to take into account differences in exposure length or quantum efficiency, since these show up in the counted photons in each sub. However, you do need to take account of the number of subs, the different gains, and the angular size of the pixels (described by the image scale, arcsec/pixel).
The last one is interesting and a side-effect of the way PI deals with aligning images of different scales. Imagine one camera has a scale of 2"/pixel, the second 1"/pixel, and you have aligned the 2"/pixel image to the 1"/pixel image (which preserves the detail where it exists). A pixel of the 1"/pixel camera may have captured 10 photons. All other things being equal, a pixel of the 2"/pixel camera would have captured 40 photons, because it covers four times the area of sky. When aligned to the 1"/pixel image, PI will effectively create 4 sub-pixels, all with a signal level corresponding to 40 photons (PI uses a form of interpolation), so that signal gets counted four times over. What this means is that you have to divide by the area of the pixel (in square arcsec, i.e. the square of the image scale) to take out this effect.
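To make the pixel-area point concrete, here's a rough numeric sketch in Python/numpy. The numbers and the simple block-replication upsample are only illustrative - PI's actual interpolation is more sophisticated - but the factor-of-four over-count is the thing to take away:

```python
import numpy as np

# Illustrative numbers only, matching the 1"/px vs 2"/px example above.
fine = np.full((4, 4), 10.0)     # 1"/px master: ~10 photons in each pixel
coarse = np.full((2, 2), 40.0)   # 2"/px master of the same patch of sky

# Registering the coarse master onto the fine grid is roughly an upsample:
# each 2" pixel becomes 4 sub-pixels, all keeping the 40-photon level.
coarse_on_fine = np.kron(coarse, np.ones((2, 2)))

print(fine.sum())                    # 160 photons actually collected
print(coarse_on_fine.sum())          # 640 - inflated by a factor of 4
print((coarse_on_fine / 4.0).sum())  # 160 - dividing by the pixel area
                                     # (2"/1")**2 = 4 puts it right
```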
So our process was:
1) Everybody stacked their own lights, calibrated with their own flats, darks etc. as required, optimised their own stacking and produced a master.
2) The masters were then aligned to the highest resolution image so as not to lose the fine scale data where we had it.
3) We then combined the masters in PixelMath, with a weight for each equal to: Nsubs/gain/PixelArea (there's a rough sketch of the arithmetic after the definitions below).
Nsubs - the master contains the "average" number of ADU per sub - multiplying by Nsubs gives you the total ADU detected.
gain - in ADU/electron - dividing the signal by the gain converts to electrons and hence detected photons (if you have a gain in electrons/ADU, then multiply by this instead).
PixelArea - the pixel area in square arcsec, as discussed above. Don't forget to take account of binning (if not 1x1) when working out the area.
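In case it helps, here's a rough sketch of the weight arithmetic in Python. The camera numbers are made up for illustration; the point is just how Nsubs, gain and PixelArea go together to give each master's weight, which then gets plugged into a weighted average in PixelMath:

```python
# Hypothetical numbers for three cameras - a sketch of the weight
# arithmetic only, not the actual combine (that was done in PixelMath).
masters = {
    #        Nsubs  gain (ADU/e-)  scale ("/px, unbinned)  binning
    "cam1": (40,    0.25,          1.0,                    1),
    "cam2": (30,    0.50,          1.4,                    1),
    "cam3": (25,    1.10,          2.0,                    2),
}

weights = {}
for name, (nsubs, gain, scale, binning) in masters.items():
    pixel_area = (scale * binning) ** 2   # square arcsec per binned pixel
    weights[name] = nsubs / gain / pixel_area

# The masters then go into a weighted average of the form:
#   (m1*w1 + m2*w2 + m3*w3) / (w1 + w2 + w3)
total = sum(weights.values())
for name, w in weights.items():
    print(name, round(w / total, 3))   # normalised weight per master
```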
This definitely produced better images for us, though since we all processed the images separately, it was obvious that the skill of the processor mattered more than the marginal gain we got from optimising the stacking process (and sadly, our best processor used Photoshop and couldn't really tell us how to improve our processing in PI!).
Hope this helps,
Colin