Trying to understand how the FastIntegration stacking order affects the final image

deep_space_dave

Active member
Hi All,

I have been testing out various ways to optimize FastIntegration for my datasets. Since it does not support image weights, I wanted to see if I could sort my images by PSF Signal Weight from best to worst, then prepend a weight prefix to every file name so that the best images get a smaller number (larger weight) and the worst get a larger number (smaller weight), forcing FastIntegration to process the images from best to worst. To do this I basically inverted the PSFSignalWeight by dividing 1 by it and used that as my weight number. From my understanding, FastIntegration processes the images in sequence. So, thinking about how Winsorized Sigma Clipping works, the best images would get integrated first, and then as image quality declines it would stack the good data and reject the worsening outliers.
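In case it helps anyone follow along, here is a minimal sketch of the renaming step I described, assuming the PSF Signal Weight of each subframe has already been exported to a CSV (the file name and column names here are hypothetical):

    import csv
    import os

    # Hypothetical CSV with one row per subframe: file path and its PSF Signal Weight.
    with open("weights.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Best (highest PSF Signal Weight) first; equivalent to sorting ascending
    # by the inverted weight 1 / PSFSignalWeight.
    rows.sort(key=lambda r: float(r["psf_signal_weight"]), reverse=True)

    # A zero-padded rank prefix guarantees that an alphabetical file listing
    # matches the best-to-worst order.
    for rank, row in enumerate(rows):
        src = row["file"]
        dst = os.path.join(os.path.dirname(src), f"{rank:05d}_{os.path.basename(src)}")
        os.rename(src, dst)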

Please advise whether this logic is sound or incorrect, or whether this is a waste of time and order does not matter.

Thanks!
Dave
 
From the FastIntegration documentation:
The algorithm relies on the assumption that the images are arranged chronologically, and thus, that the shift between one image and the next is small, even if the difference with the reference image is significant; therefore, for each image present in the batch, an image that already has a solution is searched, starting from the closest ones.
So reordering the images will probably break the algorithm.
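In other words, my reading of that passage is roughly the following (a sketch of the idea, not the actual implementation):

    # Sketch of the chaining described in the documentation: each unsolved frame
    # is registered against the already-solved frame closest to it in sequence,
    # so only small frame-to-frame shifts need to be handled.
    def find_reference(index, solved_indices):
        """Return the index of the already-solved frame nearest in sequence order."""
        return min(solved_indices, key=lambda s: abs(s - index))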
 
So reordering the images will probably break the algorithm.
Yes, this is what I have read as well, but in some of my testing it seems like reordering images by quality improves the outcome. I even see the outer edges of bloated stars in the rejections. I discovered the original idea by accident when I was trying to reduce images with high background brightness and ended up putting the darker but better-PSF frames at the beginning. I will be doing more testing to see if I am just biased or if there really is an improvement :)
 
So reordering the images will probably break the algorithm.
It relies on order to minimise the need for full registration. If your system has good tracking and no field rotation (sorry, SeeStarS50 users), the order probably doesn't matter that much.
 
Changing the order did not affect the registration of my images, so I ran a test that basically answers my question and satisfies my curiosity.

As you can see in the screenshot below, the difference between the captured sequence order and the order sorted by PSF Signal Weight is almost nil. There is an extremely subtle improvement in the image whose subframes were ordered by PSF Signal Weight, but not enough to warrant all the extra effort.
The images were only gradient-corrected and color-corrected, and then the same screen stretch was applied to reduce the brightness so the details are visible.
[Screenshot: side-by-side comparison of the stack in captured order vs. the stack ordered by PSF Signal Weight]


Like they say, "a picture is worth a thousand words." I hope this demonstration helps clarify that stacking order with FastIntegration only matters for registration; the rejection algorithm does not care. FastIntegration stacked about 5,000 subframes in about an hour, so either way this is a fantastic tool!

-Dave
 
It relies on order to minimise the need for full registration. If your system has good tracking and no field rotation (sorry, SeeStarS50 users), the order probably doesn't matter that much.
Funny you mentioned the SeeStarS50's field rotation, as my buddy CuivTheLazyGeek just made a video about how he figured out how to use it in EQ mode :)
 
Please advise whether this logic is sound or incorrect, or whether this is a waste of time and order does not matter.
Hi,
Pixel rejection happens among the frames belonging to the same batch. This means that if your batch size is 20 (to pick a number), the first 20 images are stacked together, and pixels are rejected considering only the statistical distribution of the 20 values in each pixel stack coming from those 20 images. Then the same occurs in the next batch, and so on, until all images are stacked.
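To illustrate what per-batch rejection means, here is a simplified sigma-clipping sketch over batches (just a toy model, not FastIntegration's actual implementation):

    import numpy as np

    def integrate_in_batches(frames, batch_size=20, k=3.0):
        """Toy per-batch rejection: within each batch, pixel values more than
        k sigma from the batch median are excluded from the mean."""
        results = []
        for start in range(0, len(frames), batch_size):
            batch = np.stack(frames[start:start + batch_size])  # shape (n, H, W)
            med = np.median(batch, axis=0)
            sigma = np.std(batch, axis=0)
            keep = np.abs(batch - med) <= k * sigma             # per-pixel mask
            counts = np.maximum(keep.sum(axis=0), 1)            # avoid divide by zero
            results.append((batch * keep).sum(axis=0) / counts)
        return results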

Grouping images with similar weight reduces the chances of pixels being rejected, assuming that "similar weight" means a similar statistical distribution of the data within the images (excluding outliers from trails or hot pixels).
So, generally speaking, I would conclude that the strategy you're using leads to lower overall rejection.

The best would be to have batches that contain good images on average and only a few bad ones, which means that each time a batch gets integrated, the bad images are the most likely to contain pixels that will be rejected, because their pixels will likely be classified as outliers.

The difference you may see by sorting images differently comes from the fact that each batch will contain different images, unavoidably leading to a result that differs to a certain degree. It's not necessarily a result that you can generalize, and I tend to assume that this difference is negligible unless the image qualities differ a lot. In that case, an effective way to re-sort them to get a better result may be worth investigating.

On the other hand, sorting the images by any criterion requires the pipeline to be completely changed: all images must first be read and measured, and then read back in the new order to continue with registration and stacking. This may significantly increase the overall execution time, so it must be justified by a significant difference in the resulting image; otherwise, the loss of performance is not worth it.
 
I tend to assume that this difference is negligible unless the image qualities differ a lot.
Hi Robyx,

Yes, I totally agree, as I even did a test in this thread that shows there is no benefit to sorting. One question I do have: how much of an impact does the reference image have on the image stack? Is it only used for star alignment, or does it affect the quality of the rest of the stack, such as background brightness and/or star shape?
 
One question I do have: how much of an impact does the reference image have on the image stack? Is it only used for star alignment, or does it affect the quality of the rest of the stack, such as background brightness and/or star shape?
The reference image is used to register the images and as a reference for global normalization.
Being a global normalization (additive and multiplicative coefficients applied to each stacked image), it only impacts the average level of the image and its global scale, which basically has no impact on the overall quality; however, it is essential for having statistically compatible data between the images in a batch to perform a meaningful rejection.
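For the curious, a global additive-plus-multiplicative normalization looks roughly like this (a minimal sketch using the median as the location and the median absolute deviation as the scale; the actual estimators are an implementation detail I'm not certain of):

    import numpy as np

    def normalize_to_reference(image, reference):
        """Match an image's global location and scale to the reference:
        x' = (x - location) * (scale_ref / scale) + location_ref."""
        loc, loc_ref = np.median(image), np.median(reference)
        scale = np.median(np.abs(image - loc))              # MAD of the image
        scale_ref = np.median(np.abs(reference - loc_ref))  # MAD of the reference
        return (image - loc) * (scale_ref / scale) + loc_ref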
 