Choosing the Correct PSF Algorithm for Weighting

BlueRidgeDSIA

Well-known member
I'm integrating 900s, 600s, 300s, 180s, 60s, 30s, and 10s subs all together into the same image. I have between 40 and 100 images for each exposure. What is the appropriate PSF algorithm to choose in subframe weighting if I want to reject clipped highlights in the 900s images and reject dark pixels in the 10s subs?
I see PSF Signal Weight, PSF SNR, and PSF Scale SNR. I'm just going to experiment for now and begin with PSF Signal Weight.
 
Have you explored any of the HDR tools for combining these kinds of images? That seems like the better approach.
 
Yes, but I do not want to use HDR. As I recall, Juan mentioned this tool was effective for integrating images with various exposure lengths.
 
OK, my next question is in addition to my first.

How can I integrate multiple exposure lengths into one single image in WBPP? I want to apply subframe weighting using PSF Signal Weight, register, apply the astrometric solution, apply local normalization (if that is even possible with subs varying in exposure), and integrate all of the images into one image.
 
You can adjust the exposure tolerance in the Post-Calibration tab. In your situation, if you want all your frames from 10s to 900s in the integration, you can set it to 900.
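Roughly, the tolerance works like this. A minimal sketch of the grouping idea (not WBPP's actual implementation; the real grouping logic may differ in detail):

```python
# Illustrative sketch only: frames whose exposures fall within the
# tolerance of a group's reference exposure land in the same group.

def group_by_exposure(exposures, tolerance):
    """Group exposure times (in seconds) by an exposure tolerance."""
    groups = []
    for exp in sorted(exposures):
        for group in groups:
            if abs(exp - group[0]) <= tolerance:
                group.append(exp)
                break
        else:
            groups.append([exp])
    return groups

subs = [900, 600, 300, 120, 30, 10]
print(group_by_exposure(subs, 0))    # [[10], [30], [120], [300], [600], [900]]
print(group_by_exposure(subs, 900))  # [[10, 30, 120, 300, 600, 900]] -- one integration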

Yes, but I do not want to use HDR. As I recall, Juan mentioned this tool was effective for integrating images with various exposure lengths.
I don't really understand your point, because you do have images with different exposure times.
It seems to me that if your goal is to get better dynamic range in your image, then HDRComposition is a better strategy. Grouping images with different exposure times in the same integration is never optimal in my opinion (of course, if that's the data you have to manage, it's still a valid option).
 
OK, my next question is in addition to my first.

How can I integrate multiple exposure lengths into one single image in WBPP? I want to apply subframe weighting using PSF Signal Weight, register, apply the astrometric solution, apply local normalization (if that is even possible with subs varying in exposure), and integrate all of the images into one image.
I don't understand your intent. Normally, there are just a couple of reasons you'd have different exposure times. You could have differences in conditions over multiple sessions, but those would not be radically different exposures, say 120 sec versus 300 sec. You'd integrate those in WBPP using the exposure tolerance setting described by Nico.

Or, you could be dealing with a high dynamic range object like M42, in which case you'll have short exposures for the bright areas and long ones for the dimmer parts. You have to combine those images using one of the HDR techniques, which scale matching pixels between frames to keep the data linear.

SFS can't do anything for you here. All it does is reject frames based on some criteria. But you don't want to reject any frames! (WBPP uses SFS more intelligently, to weigh the quality of frames and then scale them during integration. But I don't think that's quite what you want to do, either.)
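To make "scales matching pixels between frames to keep the data linear" concrete, here's a minimal sketch, assuming a simple least-squares ratio fit and a 0.95 saturation cutoff (both my own assumptions; HDRComposition's real algorithm is more sophisticated):

```python
import numpy as np

def hdr_combine(long_img, short_img, saturation=0.95):
    """Replace saturated pixels in the long exposure with rescaled pixels
    from the short exposure, so the result stays linear throughout."""
    # Fit the intensity ratio using pixels well exposed in both frames
    # (least-squares fit of long ~ scale * short over that region).
    ok = (long_img < saturation) & (short_img > 0.01)
    scale = np.sum(long_img[ok] * short_img[ok]) / np.sum(short_img[ok] ** 2)
    result = long_img.copy()
    clipped = long_img >= saturation
    result[clipped] = short_img[clipped] * scale  # rescaled short-exposure data
    return result
```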
 
Why not? What is the point of taking short exposure images if it is not to increase the dynamic range?
Because I am going to do an HDR integration later if I am unsatisfied, and mask it onto the data set which has been integrated from every single sub.
 
Wait, so does SFS cause the rejection of an entire frame, or does it weight frames such that the over- or underexposed regions of an image are clipped during outlier rejection?
 
SFS calculates a lot of useful metrics for an image, which can be used to sort a batch of images based on the quality metrics you select. Once sorted, those you want to keep are selected and saved. "Underexposed" doesn't really have a lot of meaning; you might be able to identify images with too many saturated pixels, but the keep/reject decision is always binary. I don't see how you can use it to somehow improve S/N or create a kind of HDR analog.

With the images you have, I really think you want to use a true HDR process to combine them. That's a rigorous process that doesn't require subjective masking, and which maintains a rational intensity relationship between all the pixels... something your approach will not.
 
Integration ignores frames with a weighting less than a certain percentage of the best. At 5%, if the best weighting is 100, any frame with a weight of 5 or less won't be included. I very much doubt your shorter frames will be included against 900-second frames. Stacking 10s frames with 900s frames really doesn't make much sense. Try stacking the 900s and 600s and compare to a stack of them all; I bet you won't see much difference.
 
A weighting algorithm is not going to help with clipped pixels. You need to enable "Clip low range" and "Clip high range" in ImageIntegration and adjust the "Range low" and "Range high" sliders.
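In other words, samples outside the low/high range are simply excluded from each pixel's stack before combining. A minimal sketch of the idea (illustrative only; ImageIntegration also normalizes frames and applies statistical rejection on top of this):

```python
import numpy as np

def range_clipped_mean(stack, range_low=0.0, range_high=0.98):
    """Per-pixel mean over a stack of frames, ignoring values at or below
    range_low (dark/dead pixels) and at or above range_high (clipped
    highlights)."""
    stack = np.asarray(stack, dtype=float)
    valid = (stack > range_low) & (stack < range_high)
    return np.nanmean(np.where(valid, stack, np.nan), axis=0)

# One pixel seen in three frames: clipped in one, dark in another.
print(range_clipped_mean([[1.0], [0.5], [0.0]]))  # [0.5] -- only the good value counts
```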
 
SFS can't do anything for you here. All it does is reject frames based on some criteria. But you don't want to reject any frames! (WBPP uses SFS more intelligently, to weigh the quality of frames and then scale them during integration.)

"All it does"? Have you even used SubframeSelector? It weighs and/or rejects just like you tell it to.
There's nothing intelligent about how WBPP does it.
 
"All it does"? Have you even used SubframeSelector? It weighs and/or rejects just like you tell it to.
There's nothing intelligent about how WBPP does it.
The latest weighting algorithms developed by the PixInsight team are accessible in both WBPP and SubframeSelector.

I think saying there is "nothing intelligent" about them isn't really fair!
 
"All it does"? Have you even used SubframeSelector? It weighs and/or rejects just like you tell it to.
There's nothing intelligent about how WBPP does it.
Since WBPP uses SFS for sub evaluation, I don't understand this comment.
 
"All it does"? Have you even used SubframeSelector? It weighs and/or rejects just like you tell it to.
There's nothing intelligent about how WBPP does it.
WBPP doesn't reject frames (except rarely). It uses the SFS weights to adjust how much each frame contributes to the image. With SFS by itself, you either keep or reject frames, and the rejected ones add nothing to the final image. There is generally no good reason to use SFS to reject frames: keeping the frames you would opt to reject and allowing WBPP to scale them will produce a better image than discarding them.
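The arithmetic behind that: with weights proportional to inverse variance, even a noisy frame lowers the noise of the result a little, whereas rejecting it gains nothing. A back-of-envelope sketch (my own illustration, not WBPP's actual weighting scheme):

```python
import math

def stacked_noise(sigmas):
    """Noise of an optimally weighted mean of frames with per-frame noise
    levels `sigmas` (weights proportional to 1/sigma^2)."""
    return 1.0 / math.sqrt(sum(1.0 / s ** 2 for s in sigmas))

good_frames = [1.0] * 50                   # fifty frames of unit noise
print(stacked_noise(good_frames))          # ~0.1414
print(stacked_noise(good_frames + [3.0]))  # ~0.1413 -- the poor frame still helps a bit
```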
 
WBPP doesn't reject frames (except rarely).
This is configurable (if somewhat opaquely). In the "Image Integration" settings in the "Lights" tab is the "Minimum weight" parameter.

This specifies that any frames with weight less than 0.05 x the maximum weight (i.e. less than 5% of the maximum measured weight) should be rejected.
Note that this is not rejecting 5% of the frames; nor is it rejecting frames with a weight below 0.05. If there are any frames with weight lower than 5% of the maximum (measured) weight, they will be rejected.
By increasing this parameter you can increase the likelihood of frames being rejected.
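A minimal sketch of the rule as described above (illustrative, not WBPP's source code):

```python
def rejected(weights, minimum_weight=0.05):
    """Frames rejected under the "Minimum weight" rule: any weight below
    minimum_weight times the maximum measured weight."""
    threshold = minimum_weight * max(weights)
    return [w for w in weights if w < threshold]

print(rejected([100, 60, 20, 4]))  # [4]  (threshold = 0.05 * 100 = 5)
# Note it rejects neither "the lowest 5% of frames" nor "weights below
# 0.05" -- the cut is always relative to the best frame in the batch.
```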
 
I have been experimenting a lot today with ROI integrations. WBPP is not ROI-friendly. I really think that would be a great module, actually: a bay of 10 ROI windows for a pre-integration analysis, in which you could iteratively tweak rejection parameters, generate noise statistics, etc., and choose the best ROI sample to base your final integration on. For me, on my older i7-5960X with 32 GB of RAM, it took 6 hours to run WBPP on 310 calibrated images from registration and local normalization through integration. I ended up using the registered frames and LocalNormalization data to manually integrate so I could use an ROI.

Below I have shared the results of my integration. I ended up keeping just shy of 20 hours of data.
The calibrated light exposures used were from 2 years of data. Some data was captured at gain 100, some at gain 0.
No filters used here, just natural OSC with a Sharpstar 76EDPH at f/4.5.

20 x 900s
59 x 600s
32 x 300s
54 x 120s
30 x 30s
115 x 10s

I will be doing the PixInsight HDR process as well; however, due to field rotation and some sets of images not overlapping, there are bad gradients which made GradientHDRComposition go nuts. I was able to get it to work well with the Trapezium area, so I may just mask that onto the data after I process it.

Below is just the unstretched integration of the data, and then another image showing the zoomed-out screen stretch of the entire image showing how bright the core is.
 

Attachments: Orion data un stretched.jpg · orion data stretched.jpg
I wanted to come back and bring up the reason I did not want to start out with an HDR process. The SNR you get with an HDR integration, for me at least, is never as good as the integration I get from running every single sub into a single image. If you think about extracting a sample from a population in order to produce the regression statistics that identify outliers, it makes sense: the cleaner sample comes from the larger population of like data.
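A rough back-of-envelope sketch of that intuition, treating every sub as equally noisy (which they certainly are not, so this only shows the direction of the effect):

```python
import math

all_subs = 20 + 59 + 32 + 54 + 30 + 115  # the full 310-frame stack
long_only = 20 + 59                      # just the 900s and 600s frames

# Relative noise of the mean falls as 1/sqrt(N); rejection statistics
# are also better behaved with more samples per pixel.
print(1 / math.sqrt(all_subs))   # ~0.057
print(1 / math.sqrt(long_only))  # ~0.113 -- roughly twice as noisy
```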
In HDRComposition I integrated the registered masters from each stack. I had to combine the 10s, 30s, and 120s in one stack,
and then the 300s, 600s, and 900s in a second HDRComposition stack. I then combined those two stacks to create a single master HDR composition file containing 10s all the way through 900s data. It is much noisier than the stack of all the data integrated into one image. I really hope that Juan or someone will consider a way of incorporating an HDR integration model that does not require combining master data. For now I'm going to try and figure out a way to just mask the HDR data core onto the data which contains everything integrated into one image.

Below is a closeup of both images to show the difference in noise, visually. I just clicked the nuke stretch button to display each one. On the left you see every sub integrated into one single image. On the right you see the master HDR sub containing 10s up through 900s data that I mentioned in post 18. Also note, I applied background neutralization and gradient reduction to these images, and the color balance of the two images is not identical due to my sample placement not being in the same spot.
 
I think you are missing the point of HDRComposition, and you are using it in the worst way possible here.

When you said:

In HDRComposition I integrated the registered masters from each stack. I had to combine the 10s, 30s, and 120s in one stack and then the 300s, 600s, and 900s in a second HDRComposition stack

What do you mean? If you mean that you used the process HDRComposition with the 300s, 600s, and 900s stacks, then it does not make sense in my opinion, and you are indeed losing data.

The aim of HDR is to improve your dynamic range, and for your M42 target the aim is to reveal the core of the nebula. To do that you need to compose stacks with a meaningful difference in exposure time: typically a very short exposure time to deal with the core, together with a much longer exposure time to deal with the rest of the image. Your 300s, 600s, and 900s stacks will not change anything regarding M42's core, and when you combined them through HDRComposition you were probably mostly keeping the information from the 900s stack and losing everything else. Just look at the composition mask.


Then, when you said:

I then combined those 2 stacks to create a single master HDR composition file containing 10s all the way through 900s data

What do you mean? Did you use HDRComposition to combine the 2 stacks, or something else?
If you used HDRComposition, you are preserving the dynamic range but losing data; and if you used another means (PixelMath or ImageIntegration), then you are losing dynamic range.
This 2-step approach cannot be good.

With your data, here is what I would try:
Integrate all your exposures of 30s and more into one stack.
Integrate your 115 x 10s files into another stack.
Use HDRComposition to compose these 2 stacks.

With this approach I believe you'll have the best possible SNR for your image, and you'll have information in the nebula's core.

And for the next target, I would try to avoid taking so many different exposure times, and use maybe 2 or 3 different exposure times maximum for high dynamic range targets.
 