fits average?

Broz

This may have come up before, but a search here didn't find anything. The trend with modern CMOS cameras is toward lots of short subs. I've just run into memory problems (32 GB on a Linux system) trying to process 350 twenty-second subs. There is no real reason for 20 seconds except to avoid saturating bright stars; my guiding could support longer. So my question: wouldn't it be a fairly simple script or process to take a directory of subs and output half or a third as many files, with each output pixel being the average of the input FITS pixels, and the header adjusted to reflect a synthetic longer exposure? If the combined FITS files stay within your guiding limits, wouldn't this be a simple solution to some of the memory issues in processing? I posted something about this in the SharpCap forum, but Robin didn't think there was any real need for such a thing. I find that there is. Thanks,
John
 
I forgot to mention that there is a GitHub program called ccdproc that may have a lot of the machinery needed to do this.
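For what it's worth, something like the pairwise averaging described above is only a few lines with astropy. This is a hypothetical sketch, not tested against any particular capture setup: the filename pattern, output naming, and the assumption that the image lives in the primary HDU are all mine, and real use would want calibration and sanity checks first.

```python
# Sketch: average consecutive pairs of FITS subs, writing half as many
# files with EXPTIME doubled to mark the synthetic longer exposure.
# Assumes subs share dimensions and sort into capture order by filename.
import glob
import numpy as np
from astropy.io import fits

def average_pairs(pattern, out_prefix="avg"):
    files = sorted(glob.glob(pattern))
    for i in range(0, len(files) - 1, 2):
        with fits.open(files[i]) as a, fits.open(files[i + 1]) as b:
            # Average in float to avoid integer overflow/rounding.
            data = (a[0].data.astype(np.float32)
                    + b[0].data.astype(np.float32)) / 2.0
            hdr = a[0].header.copy()
            hdr["EXPTIME"] = hdr.get("EXPTIME", 0) * 2
            fits.writeto(f"{out_prefix}_{i // 2:04d}.fits",
                         data, hdr, overwrite=True)
```

An odd file at the end of the list is simply dropped; ccdproc's `combine` could do the averaging step with rejection as well.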
John
 
i guess it would work, but without any registration you'll eventually be integrating images that are not well registered. even with guiding, you could have enough differential flexure or periodic error in the RA worm that the images won't perfectly overlap each other...
 
Understood, but I was just talking about something like averaging two 20-second exposures to make a set of 40-second exposures, cutting the memory required in half for processes that hold all the lights in memory simultaneously. I've done 120 seconds guided with a filter without much guiding error, so I don't think 40 seconds or even 60 would be a problem. I only ask because without filters a 20-second snap saturates bright stars, so I didn't want to go longer. With low read noise, short exposures aren't much of a problem except that they make a lot of images.
 
Been thinking about your registration issue. Perhaps another solution would be to first register and calibrate the set of short subs, then integrate subsets of some number of short subs. The integrated sets would then be integrated together. Cumbersome, but it might get around the memory issue? Or is this already doable with the existing tools? Can you integrate previously integrated subsets of the images? Thanks,
John
 
yes that is definitely a strategy to get around memory issues. you can integrate integrations - if you have enough subexposures in every batch then you can get very good pixel rejections and the intermediate masters should be very free of hot pixel/cosmic ray artifacts. then you can just integrate all those intermediate masters without any pixel rejection and weighted by SNR (which is the default ImageIntegration behavior.)
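As a rough numerical illustration of the integrate-the-integrations idea (plain NumPy, not PixInsight code — the batch size of three, the inverse-variance weights standing in for SNR weighting, and the omitted pixel rejection are all simplifications of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
subs = rng.normal(100.0, 5.0, size=(9, 64, 64))  # nine synthetic subs

# Stage 1: integrate each batch of three into an intermediate master
# (a real integration would apply pixel rejection at this stage).
masters = [subs[i:i + 3].mean(axis=0) for i in range(0, 9, 3)]

# Stage 2: combine the masters with no rejection, weighted by an
# SNR-like estimate (here, inverse variance of each master).
w = np.array([1.0 / m.var() for m in masters])
final = sum(wi * m for wi, m in zip(w, masters)) / w.sum()
```

The final image has the same mean signal as the subs but much lower per-pixel noise, which is all the two-stage scheme needs to preserve.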

ImageIntegration also has some memory usage controls that trade speed for less memory footprint. these can be set by the user if you untick "automatic buffer sizes" and then you can get some guidance about how to set the controls below that by reading their tooltips.

rob
 
Any idea how NormalizeScaleGradient could work into the integration of integrations scheme? Would I do it in the first integration, the second (although you mention no weights/SNR above), or not at all? Thanks,
John
 
would probably have to check with @jmurphy but i think that NSG would have done its job in the first pass. i think it's only supposed to be run on subexposures anyway...

rob
 
Rob,
You're right - it worked. I had 504 subs (20 sec) after registering and deleting bad frames. I broke the group into thirds (about the most that Blink and integration with automatic buffer sizes would handle), ran NSG and integrated each set separately, then integrated the resulting 3 images. Looks pretty good, but I haven't yet gone through the next processing steps of the integrated image to compare with some previous stuff. I don't know if there is any time saved in the subset integration process compared to just setting a fixed buffer size, but it had to be done anyway to run Blink. Tonight with an L-Extreme filter will be ~120 sec subs, which will be much easier to process (I hope). Thanks for your help.
John
 
NormalizeScaleGradient can be used to combine three stacks into one. The main advantage of doing it this way is that NSG will calculate accurate weights for each stack. The stack of stacks will then be optimum.
 
John,
I ran NSG on the 3 stacks before integration, using the same sub as reference for all 3. Are you saying that I should now run it again on the 3 integrated stacks before integrating them? If so, would I use the same reference as the first NSG set of runs? Thanks for your response.
John
 
Yes, use NSG when creating the three stacks. Assuming they all used the same filter, you can use the same image as reference for all three stacks, but remember to include that 'shared' reference in only one of the target lists.

When you then run NSG on the three stacked images, the choice of reference image doesn't matter very much. In this case NSG is only being used as a way to accurately determine the relative weights of the three stacked images.

When stacking the three stacks, you will of course want to select 'No rejection' as the Rejection algorithm.
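Numerically, that final no-rejection, weighted combine of the three stacks boils down to a weighted mean. A toy NumPy sketch (the stack values and NWEIGHT numbers below are made up for illustration, not real NSG output):

```python
import numpy as np

# Three toy "stacks" and hypothetical NWEIGHT values from NSG.
stacks = [np.full((4, 4), v) for v in (10.0, 12.0, 11.0)]
nweight = np.array([0.9, 0.7, 0.8])

# Weighted mean across the stacks, with no pixel rejection.
combined = sum(w * s for w, s in zip(nweight, stacks)) / nweight.sum()
```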

John Murphy
 
 
NSG is doing great things for me, thanks so much John. One other question - how would SubFrameSelector fit into the workflow now? NSG provides NWEIGHT for integration. I used to use SSWEIGHT generated from Chris Foster's SFS weight formula incorporating several variables, and don't see if this is necessary or doable now. Thanks,
John
 
See my answer here:
 
John,
Never mind my question about SFS, I saw your post in another thread that addresses it. Sorry for not looking sooner.
John
 