Subframe selection processes

Masahiko

Hi.

I would like to know at which point you select good subframes and remove low-quality ones.
My flow is as follows:

1. Blink - 1st selection of subframes with my eyes
2. WBPP - calibration only
3. 2nd selection of subframes using SubframeSelector
4. WBPP without Bias, Dark, and Flat to integrate subframes

Do you think this is the right process, or is there a better one?
 
Hi @Masahiko,
I assume that you measure the calibrated/debayered data, not the registered images. Measuring frames before registration is the more precise approach for extracting image properties like FWHM or noise, since the images are not yet affected by the interpolation introduced by the registration process. This way, the measured information is representative of your original data and remains meaningful if you want to perform drizzle integration later.

To compare performing a manual rejection with Blink and SubframeSelector against letting WBPP do the whole job of producing the master light, you first have to understand how the weighting methods work, and whether the selection criteria you want to put in place make a real difference and/or bring concrete benefits to the result.
The weighting methods are quite powerful: they produce extremely good results by properly handling good and bad frames, and there is more than one method depending on the objective you want to achieve, such as maximizing SNR or balancing SNR and star quality for an overall good result.
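To make the idea concrete, here is a minimal sketch in Python of how an SNR metric and a star-quality metric (FWHM) could be combined into a single frame weight. This is purely illustrative, not PixInsight's actual weighting code; the min-max normalization and the 50/50 split are assumptions chosen for the example.

```python
# Illustrative only: NOT PixInsight's weighting implementation.

def normalize(values):
    """Scale a list of per-frame measurements to the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against identical measurements
    return [(v - lo) / span for v in values]

def frame_weights(snr, fwhm, snr_share=0.5):
    """Combine per-frame SNR (higher is better) and FWHM (lower is better)."""
    snr_n = normalize(snr)                      # 1 = best SNR
    fwhm_n = [1 - f for f in normalize(fwhm)]   # 1 = tightest stars
    return [snr_share * s + (1 - snr_share) * f
            for s, f in zip(snr_n, fwhm_n)]

# snr_share=1.0 would mimic a pure "maximize SNR" objective;
# snr_share=0.5 a balanced SNR / star-quality objective.
```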

The initial blinking may be useful to discard really bad frames with poor tracking or heavy clouds. Blinking is a good practice that, I think, everyone uses to take a first look at the data.

Performing a manual selection of the frames using whatever criteria may be useful to save execution time, but I would claim that it won't produce significantly better results (it may even produce worse ones) unless you aim for an objective different from the ones mentioned above. The weighting methods already assign a low weight to "bad" frames; the key point is what "bad" means in terms of the quality objective you have in mind.
What you detect as a bad frame may still contain some signal that is worth including. If you have a faint object, removing those frames may be even worse than simply assigning them a low weight. There is also the consideration that if only a minority of frames have poor tracking, they should not ruin the shape of the stars, since the worst frames' contributions should be rejected. You also have the option of setting a minimum weight level for the images to be integrated; with this parameter (under Light settings in WBPP), frames with too low a weight will not be integrated.
Finally, with the arrival of the new WBPP 2.5.0, that filtering operation will be performed by WBPP itself.
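As a rough sketch of that minimum-weight filtering (the function and the 0.25 cutoff below are hypothetical, not WBPP's internals; whether the cutoff is absolute or relative to the best frame is an assumption made here for illustration):

```python
def select_for_integration(frames, weights, min_weight=0.25):
    """Keep only frames whose weight, relative to the best frame, reaches the cutoff."""
    top = max(weights)
    return [f for f, w in zip(frames, weights) if w / top >= min_weight]
```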

So, to sum up, there is nothing really against measuring and evaluating your data; the point is how effective you can be by simply removing "bad" frames compared with what the weighting systems already do to achieve a better SNR or an overall better image.
I think that "removing bad frames" is an old-school wrong way to do it :)
 
Thank you for your great reply. I am so surprised by your comment.
What I have done so far is the following two manual removals of bad frames:
(1) Blink
(2) Subframe Selector

You mean that (1) is OK, but (2) is not, because poor images still contribute to the quality of the final result through the weighting method.
Is that correct? I will try it!
 
I think you have different objectives when using these tools.

Blink is used (at least by me) to sort out the really bad stuff: images with strong clouds, tracking errors leading to double/triple stars everywhere in the frame, that kind of thing.

SubframeSelector can be used to sort out frames that have low quality according to a certain metric.

This is why it is beneficial to sort out frames using Blink, but it probably won't be beneficial to do the same using SubframeSelector, at least under most conditions.

CS Gerrit
 
Thanks. I had the same strategy: using Blink to remove really bad images, and Subframe Selector to remove them by metric.
Considering the comments above, I will trust the weighting method after removing the really bad images with Blink.
 
Hi @robyx,

How do the weighting calculations handle the situation where the night is mostly clear but has periods of high thin clouds? I used to think the SNR would be calculated as better, since the median value of those frames was much higher than that of frames without passing high clouds. Based on what you're saying, maybe the algorithms are not fooled by such things?

I noticed, for example, using Subframe Selector on a set of calibrated images from last week, that as the median value decreased, the PSF SNR, PSF Signal Weight, and SNR all increased. There were no high clouds that night; the median value changed as the object moved from an area of the sky toward a city light dome to an area further away from it (Bortle 7/8 here). Seeing that gives me more confidence that the SNR calculations aren't fooled by increased sky background.

Thanks,
Dave
 
Hi Dave,

You are correct, the new weighting algorithms should not be fooled by thin clouds or changing altitude. They work by analyzing the star signal, so they recognize those conditions as low SNR.

CS Gerrit
 
As @AstrGerdt mentioned, both PSF Signal Weight and PSF SNR handle the presence of clouds and naturally reduce the weights of those frames. So, no worries; you'll see how well these methods catch the presence of clouds by plotting the weights with SubframeSelector.
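For intuition, the standard point-source SNR model (a textbook simplification, not PixInsight's exact definition of these metrics) shows why star-based weights drop under a brighter sky:

\[
\mathrm{SNR} = \frac{F\,t}{\sqrt{F\,t + n_{\mathrm{pix}}\,(B\,t + D\,t + R^2)}}
\]

where \(F\) is the star flux, \(t\) the exposure time, \(B\) the sky background per pixel, \(D\) the dark current, \(R\) the read noise, and \(n_{\mathrm{pix}}\) the number of pixels covered by the star. A light dome raises \(B\), which lowers the SNR even though the frame median rises; thin clouds both attenuate \(F\) and scatter light into \(B\), so star-based weights drop in both situations.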
 
robyx,
I would like to know more about your comment.

>There is also the consideration that if only a minority of frames have poor tracking, they should not ruin the shape of the stars, since the worst frames' contributions should be rejected.

Does it mean that the shapes of stars are protected by the pixel rejection algorithm?
 
Absolutely yes, as long as the frames with very bad stars are a small fraction of the entire set. Imagine an elongated star summed with many rounder versions of the same star: in this situation, the elongated part of that star contains values that are detected (and rejected) as outliers by a proper rejection algorithm.
For this to work, of course, the vast majority of the frames must contain a round star; otherwise, the pixel stacks in the area surrounding the star may contain many varying values whose distribution does not identify any outlier, with the result that all values (so all frames) are averaged.
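For illustration, here is a single-pass sigma-clipping sketch in Python (real rejection algorithms, such as the Winsorized sigma clipping offered by ImageIntegration, iterate and are more refined):

```python
import statistics

def sigma_clip_combine(stack, k=3.0):
    """Combine one pixel position across all frames, rejecting outliers.

    'stack' holds that pixel's value from every frame.
    """
    med = statistics.median(stack)
    sigma = statistics.stdev(stack)
    kept = [v for v in stack if abs(v - med) <= k * sigma]
    return sum(kept) / len(kept)  # average the surviving values

# An elongated star in a few frames leaves abnormally high values at the
# pixels along the trail; with most frames round, those values sit far
# from the median of the stack and are rejected.
```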

I think Adam Block @ngc1535 mentioned this in one (or more?) of his public videos; I don't remember exactly which one ...
 
Thanks. I understand that the situation is the same for enlarged stars with large FWHM as well as for elongated stars. I can add these frames to the integration.

I will check Adam Block videos!
 

It is true, this is something I point out in my instructional videos.
A case in point is the following: let's say you have 20 x 1200 sec exposures of a nebula.
Now in 4 frames the tracking/guiding (wind, mount... whatever the cause) is poor. These deviations from perfectly round stars may have come from only a few moments of tracking error. Most users of PixInsight would throw these frames out or have them weighted very low. I believe this is a waste of 80 minutes of perfectly good signal. Rejection will indeed take care of most of the poor-tracking bits, and the benefit is getting to add 80 more minutes of exposure time, and four more perfectly good frames, for an improvement in S/N (especially in the faint signal) in the final integrated result.
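In idealized terms (equal-quality signal in all frames and effective rejection of the trailed star pixels), SNR grows with the square root of the total exposure time:

\[
\frac{\mathrm{SNR}_{24}}{\mathrm{SNR}_{20}} \approx \sqrt{\frac{24 \times 1200\,\mathrm{s}}{20 \times 1200\,\mathrm{s}}} = \sqrt{1.2} \approx 1.10
\]

so keeping those four frames buys roughly a 10% improvement in S/N.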

Thanks Roberto!
-adam
 
Hi Robyx,

Your statement "What you detect as a bad frame may still contain some signal that is worth including. If you have a faint object, removing those frames may be even worse than simply assigning them a low weight. There is also the consideration that if only a minority of frames have poor tracking, they should not ruin the shape of the stars, since the worst frames' contributions should be rejected." is very insightful and food for thought.

But this works only for poor frames that are in the minority (as Adam points out). It would be nice if we could "append" to a stack, where the clipping criteria are fixed by the stack, and additional frames (the pixels in them) are then tested against those criteria (location, spread). In this way, we could add more numerous "poor" frames, which add statistics to weak deep objects, without enlarging features that already have good statistics, such as stars; see the sketch below.
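A hypothetical sketch of that idea in Python (this is not an existing PixInsight feature; the names and logic are illustrative only):

```python
import statistics

def freeze_criteria(stack, k=3.0):
    """Per-pixel location and spread, frozen from the good stack."""
    med = statistics.median(stack)
    return med, k * statistics.stdev(stack)

def append_to_stack(stack, new_values, criteria):
    """Accept a pixel from an appended frame only if it fits the frozen criteria."""
    med, tol = criteria
    accepted = [v for v in new_values if abs(v - med) <= tol]
    return stack + accepted

# Well-measured features (stars) keep their statistics, since deviant
# pixels from the extra frames are refused; faint-object pixels whose
# appended values fall within the frozen spread gain depth.
```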

Cheers, Chris.
 
It is great to hear. I understand that I can add a minority of bad frames with poor tracking, and also low-SNR frames. I still have a question; I attached an example. A bright star in the right-hand frames has a halo, which may be caused by thin cloud or fog. I excluded these frames using Blink. Do you think excluding these frames is a good decision?

Adam,
If you remember the video you mentioned, please let me know. I am a member of your course (Fundamentals and Horizons).
[Attached screenshot: 2022-09-17 9.02.27.jpg]
 
I tried to test it.
The left image is the integrated master with the 24 good frames.
The right image is the one with the 24 good frames plus 6 bad frames. The bad frames include low SNR or stars with the halos mentioned above.
[Attached screenshot: 2022-09-17 10.48.11.jpg]


To my eyes, the right image, with the bad frames, is better than the left: the signal of NGC 6822 is stronger and more of the surrounding molecular clouds show.
I checked the PSF Signal scores of the 6 bad frames and found that they are lower than those of the good frames.
When I checked the rejection map, the halos were correctly removed.


[Attached screenshot: 2022-09-17 10.51.48.jpg]
 