Anyone use the Fast Integration tool yet?

Hind-Sight

I scanned the info in the release notes about it. I'm trying to get started with lucky imaging of small planetary nebulae. I'm capturing the outer portions with my usual 300s exposures and stacking, but I'm now trying to get the bright cores via lucky imaging / fast imaging. The issue is that at f/10 and 2000mm focal length, plus short exposure times, I'm either not getting stars at all or not getting stars of the right size, shape, and definition to be detected by the star-detection algorithms. Standard WBPP doesn't really work for that situation, so I'm wondering: does FastIntegration require stars for alignment/stacking? If so, have any of you figured out a way to make it work at long focal lengths and short exposures? Thanks!
 
I strongly recommend you watch the two videos on fast imaging we recently uploaded to our official YouTube channel. These videos include an introduction to the new FastIntegration process and the required settings in the WBPP script:



More official videos are to come on the same topic, so stay tuned.

FastIntegration requires at least six detectable stars. We are working on a new FFT-based fast alignment algorithm that will require just one star.
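For the curious, the core idea behind FFT-based translational alignment is phase correlation. Here is a minimal NumPy sketch of that general technique (illustrative only, not PixInsight's actual implementation):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation between two frames via
    FFT phase correlation. Real aligners add windowing, sub-pixel
    refinement, and rotation handling; this shows only the core idea."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    R = F_ref * np.conj(F_img)
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices back to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Because it correlates whole frames rather than matching star patterns, an approach like this can work even when only one usable star (or any fixed structure) is present.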
 
Sorry, follow-up question on this: do you have any tips on ensuring stars are detectable by PI? I have ZERO issues with this when imaging at any focal length <= 1450mm (1450mm is my C8 SCT with a 0.63x reducer). That means no issues in PI or in NINA when imaging/guiding.

But with the focal reducer removed, imaging at native f/10 and 2000mm focal length, the stars are large even at optimal focus and exposure. Even NINA + Hocus Focus (with the proper settings for long focal lengths, etc.) has trouble identifying the stars, and PI has the same trouble. The camera in question is an ASI2600MC. Any advice on this?
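For reference, the sampling math shows why the stars look so bloated at the native focal length. A quick sketch, assuming the ASI2600MC's published 3.76 µm pixel size (the 2.5" seeing figure is just an example value):

```python
# Image scale in arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm
pixel_um = 3.76  # ASI2600MC pixel size (assumed from published specs)
for fl_mm in (1450, 2000):
    scale = 206.265 * pixel_um / fl_mm
    seeing_fwhm_px = 2.5 / scale  # a hypothetical 2.5" seeing disc
    print(f"{fl_mm} mm: {scale:.2f}\"/px, 2.5\" star ~ {seeing_fwhm_px:.1f} px FWHM")
```

At roughly 0.39"/px the star profiles are spread over far more pixels than detectors tuned for typical sampling expect, so raising the star detector's structure-scale / noise-reduction settings (where available) can help it see the softer, wider profiles.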
 
Yup, works great! I wouldn't use this for finished images, but it's great for prototyping until we get more rejection options. (After re-reading the manual, I realize how the process works; I just have to work on my capture techniques and tweak the settings.) I was able to integrate 7200 images, minus the failed ones, in about an hour on a 16-core i9 with 128GB of RAM.
Kudos to the PixInsight team for making fast imaging more feasible!

https://www.astrobin.com/1mnn0a/
 
Not so *fast* on the FastIntegration result.
Why don't you follow up on the idea... with 7,200 frames (my goodness!), you should do the experiment once and compare the results from FastIntegration and the standard treatment. If gradients are small, there are no distortions between frames, and the data is reasonably consistent across the set, I would expect the differences to be small, and I wonder if they are small enough to be significant to you?
I have done the comparison in my videos on my site. Your rejection concern seems to be a non-issue with enough frames... that is kind of the point. You can make your batch size sufficiently large to allow for robust rejection, and again, the different rejection methods all become pretty similar with the large number of measurements (values) you have.
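That convergence is easy to see numerically: with thousands of samples per pixel, a plain average and a sigma-clipped average land essentially on top of each other once the outliers are rejected. A toy illustration (generic NumPy, not PixInsight code):

```python
import numpy as np

rng = np.random.default_rng(0)
# One pixel across 7200 subs: Gaussian noise around a true value of 100,
# plus a few strong outliers standing in for satellite trails / hot hits.
samples = rng.normal(loc=100.0, scale=10.0, size=7200)
samples[rng.choice(7200, size=20, replace=False)] += 500.0

def sigma_clipped_mean(x, k=3.0, iters=3):
    """Iteratively discard values beyond k standard deviations."""
    for _ in range(iters):
        m, s = x.mean(), x.std()
        x = x[np.abs(x - m) < k * s]
    return x.mean()

print(f"plain mean:         {samples.mean():.2f}")           # biased ~1.4 by outliers
print(f"sigma-clipped mean: {sigma_clipped_mean(samples):.2f}")  # ~100
```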
(I also released a YouTube video on this amazing new process.)

-adam
 
Hi Adam,
I will check your site, as I didn't realize you had made some training videos about this (let me guess, it's in Fast Track ;)), and I'll do some more experimenting and report back, as I am very interested to know what the difference is! Seeing how much signal and contrast I recovered from just 2 hours' worth of data from a Bortle 8 zone in poor seeing is simply amazing. I will be doing more imaging like this soon and refining my technique, so efficient post-processing will be critical :) It also looks like I will be going back into training mode, as a lot has changed since my last experiments with lucky imaging. My CMOS camera produces only 0.9 electrons of read noise, so that helps too :cool:

Dave
 
It's going to be raining and cold for a while, so I suppose I will be using my computer as a space heater while I do these tests :) Because I have 128GB of DDR5 RAM, I can use the max batch and prefetch size of 200. I am also rotating the west-of-meridian images 180° to match the east side, since this was taken with an EQ mount (quick sketch below). This is kinda exciting :)
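For anyone picturing that meridian-flip correction: frames taken on the west side of the pier come in rotated 180° relative to the east side, so a half-turn rotation brings them back into agreement before alignment. A minimal NumPy illustration (not the actual PI operation):

```python
import numpy as np

frame = np.arange(12).reshape(3, 4)   # stand-in for a west-side sub
flipped = np.rot90(frame, k=2)        # 180° rotation to match the east side
assert np.array_equal(flipped, frame[::-1, ::-1])
```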

Clear Skies!
Dave
 
Hi All,

Per Adam's suggestion, I did a comparison between the standard WBPP integration process and the FastIntegration process, comparing both run time and resulting image quality.

First of all, for reference, here are the specs of my dedicated PI system:
Lenovo P360 running Windows 11 Pro
Intel Core i9 12900 with 16 cores (24 threads)
128GB of DDR5 RAM (Non-ECC)
OS Drive: 1TB NVMe SSD
Image Processing Drive: 2x 4TB WD Black SN850X NVMe PCI-E Gen4 running as an 8TB RAID 0 array (Transfers up to 16GB/s!)
All raw and resultant data is saved on my NAS, as you can't trust a RAID 0 array, but you have to love its speed!
Oh, and an Nvidia RTX A5000 with 16GB of VRAM, but that's irrelevant here :)

The standard integration using WBPP took well over 30 hours! Actually, WBPP kept crashing on the ImageIntegration step, as it was trying to load far too many rows of the 7738 images remaining after rejection by the other processes. So it was actually 17 hours in WBPP plus another 13 hours of the laborious process of integrating 1000 images at a time in 8 groups, then integrating those into the final result (see the sketch below the image for why the grouped average is equivalent). Below is the raw stretched image that was produced. Keep in mind this was with local normalization and weighting enabled:
[Attached image: LongIntegrationTest.jpeg]
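(Aside: the 1000-at-a-time workaround is mathematically sound. As long as the groups are equal in size, or the final combine is weighted by group size, the mean of the group means equals the global mean; per-group pixel rejection isn't exactly identical to global rejection, but with 1000 frames per group the difference is marginal. A sketch of the idea with a hypothetical in-memory stack:)

```python
import numpy as np

def grouped_mean(stack, group_size=1000):
    """Integrate `stack` (N x H x W) in groups, then combine the group
    results weighted by group size. Matches one big average while only
    touching `group_size` frames at a time."""
    means, weights = [], []
    for i in range(0, len(stack), group_size):
        chunk = stack[i:i + group_size]
        means.append(chunk.mean(axis=0))
        weights.append(len(chunk))
    return np.average(np.stack(means), axis=0, weights=weights)
```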


The FastIntegration process with the 9000 frames took about 1 hour and 30 minutes, after the initial 3 hours to calibrate and demosaic the frames, resulting in a total of about 4 hours and 30 minutes! Yes, it skips the local normalization, but with so many subs it eventually averages out pretty well. But what about the results? Below is the raw stretched image that was produced:
[Attached image: FastIntegrationTest.jpg]


What a single sub looks like!
[Attached image: SingleSub.jpg]


A few things to unpack here... First of all, the images look similar, but the FastIntegration version actually looks a little better. Yes, I need more data, so I did collect another 8000 subs (hopefully more to come), and then I will make a post on how I use FastIntegration to process my many thousands of images to mitigate seeing and light pollution, and show my final image of M81. Still, the FI image is pretty good for being captured almost at the diffraction limit of the scope! I have since refined my calibration to remove more non-photon noise, so the next image will look fantastic! FastIntegration will save me money in the long run with a lower electric bill, and it saves time: 25 hours in this case! The screenshot below shows why this PC is dedicated to PI. This is not a bad thing; it is by design, to process as much as possible as quickly as possible! It also makes for a good space heater in the winter 🙃
[Attached image: Screenshot 2023-12-23 094902.png]


Fast imaging is not for the faint of heart, as it can quickly fill up your hard drive, especially while processing, but so far the results have been pretty amazing, especially after deconvolution. I can't wait to show you guys the end result. Until then, happy fast imaging!

A very sincere THANKS to the PixInsight team that made this possible!

Clear Skies!
Dave
 
Great job. FastIntegration really should get more attention. I obviously did this comparison the moment the tool was released to convince myself (and others on my tutorial site) of its effectiveness. I knew in principle it should converge on the standard treatment with the right kind of input data. Demonstrating this for others is powerful, but until users see the results themselves it is hard to persuade them of the utility of a "strange" new tool.

-adam
 
Thanks Adam! FastIntegration is one of the best tools that not many people understand or even know about. I plan to help change that :cool:
 
Here is a prototype of the 2x drizzled, then resampled back down, image from about 13,660 one-second frames! I am going to try a few more things to see if I can increase the SNR, and possibly gather more data. I am really liking this fast imaging technique from my Bortle 8.5 location with bad seeing. I did all the usual processing like DBE, SPCC, BXT, NXT, GHS, and a little HDRMT for the core. This PNG isn't showing all the dynamic range, as after stacking it actually went up to roughly 24-bit from the camera's 12-bit, but at least you get the idea :) (A quick sanity check on that bit-depth figure follows below.)
- Dave


[Attached image: M81masterLight_BIN-1_3856x2180_FILTER-IRCut_CFA_2x_drizzle.png]
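Rough arithmetic on the bit-depth figure, assuming the 2600MC's 12-bit ADC and idealized noise behavior (a sketch, not a rigorous dynamic-range analysis):

```python
import math

adc_bits, n_subs = 12, 13660
# Container depth needed to hold the lossless sum of all the subs:
container_bits = adc_bits + math.log2(n_subs)        # ~25.7 bits
# Noise-limited effective resolution gain from averaging (~sqrt(N)):
effective_bits = adc_bits + 0.5 * math.log2(n_subs)  # ~18.9 bits
print(f"sum container:        ~{container_bits:.1f} bits")
print(f"effective resolution: ~{effective_bits:.1f} bits")
# The quoted ~24-bit figure sits between these two idealized bounds.
```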
 
"I have since refined my calibration to remove more non-photon noise, so the next image will look fantastic!"
Dave, the Fast Imaging result is amazing. Can you elaborate on what you meant by calibration to remove more non-photon noise?


"Fast imaging is not for the faint of heart, as it can quickly fill up your hard drive, especially while processing, but so far the results have been pretty amazing, especially after deconvolution."
I see you're taking 1s subs. Would you mind posting a link to an XISF/FITS file of one sub?

I recently decreased my exposure time to 30s with my Redcat 71. So far, the results are much improved. I never contemplated doing one-second exposures, especially at f/10!
 
Yes, by non-photon noise (photon noise is also known as shot noise) I mean any other noise coming from the camera itself, like read noise, USB traffic noise, etc. I am also working out the best calibration frames and technique to use to make it easier to reveal the signal hiding in the shot noise. I am still in the experimentation process, as I haven't done this in a while and a lot has changed, including the fact that PixInsight now has the FastIntegration process, so no other software tools are needed. I will create a post once I have the technique worked out, and also show how I process using FastIntegration.
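For anyone following along, the usual simplified model treats the independent noise sources as adding in quadrature, with shot noise growing as the square root of the signal. All numbers below are illustrative, not measured values for any particular camera:

```python
import math

def total_noise_e(signal_e, read_noise_e=0.9, dark_noise_e=0.05):
    """Total per-pixel noise in electrons: shot, read, and dark noise
    added in quadrature (all parameter values are made up)."""
    shot = math.sqrt(signal_e)  # shot noise = sqrt(signal)
    return math.sqrt(shot**2 + read_noise_e**2 + dark_noise_e**2)

for s in (1, 10, 100):
    print(f"signal {s:>3} e-: noise {total_noise_e(s):.2f} e-")
```

Note how at 1 e- of signal the read noise matters a lot, while at 100 e- the shot noise completely dominates; that is why very low read noise is what makes 1s subs viable at all.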

If you are trying out fast imaging, keep in mind that even if the sub-exposures are fast, your total integration time still matters. So if you need 4 hours to get a good stacked image at f/10 with 5-minute exposures, the same applies if you are using 5-second exposures: with 5-minute exposures you need 48 subs, whereas with 5-second exposures you would need 2880! The main benefit of fast imaging is to collect enough data to give the shift-and-add (stacking) process more statistical data to work with, to rebuild the image, bring out more details, and capture details normally obscured by atmospheric distortion (seeing). Unlike lucky imaging, which is normally used for bright objects like stars and planets and where you keep only ~10% of your frames, with fast imaging you keep as many of your frames as you can, good or bad (unless there are thick clouds or stars are not visible), and average (stack) them to build your image. With fast imaging it is all about SNR! A faster focal ratio and a bigger aperture help a lot too! My RC8 setup is f/5 using a focal reducer, and I also have an 8" f/2 RASA.
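Dave's arithmetic in code form, plus the scaling behind it (a sketch; it ignores the extra read noise contributed by each additional sub, which is the real penalty of very short exposures):

```python
total_s = 4 * 3600  # 4 hours of total integration time
for sub_s in (300, 5, 1):
    print(f"{sub_s:>3}s subs: {total_s // sub_s:>5} frames")
# 300s -> 48 frames, 5s -> 2880 frames, 1s -> 14400 frames.
# SNR of a stack of N subs grows as sqrt(N), so for a fixed total time the
# sky-limited SNR is the same; per-sub read noise is what favors longer subs.
```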

Clear Skies!
Dave
 
As a stickler for accuracy when it comes to the details of imaging, I'd like to point out that noise can never be eliminated once it is present, and no calibration method can ever do anything but add noise. It never removes or reduces it. The goal of calibration is to remove unwanted structure, not noise, and to do it while introducing as little additional noise as possible. There will always be instrumental noise, but it may fall below the level of the shot noise created by the sky background.
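A quick numerical illustration of that point (a toy NumPy model, not any particular camera): subtracting a master dark removes the fixed structure, but the noise of the result is the quadrature sum of the light's noise and the master's noise, so it can only go up. A master built from many frames just keeps the increase small.

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = rng.uniform(0, 50, size=(256, 256))  # fixed structure (e.g. dark signal)
read_sigma = 5.0

def frame():
    """One simulated frame: fixed pattern plus fresh read noise."""
    return pattern + rng.normal(0, read_sigma, size=pattern.shape)

light = frame()
master_dark = np.mean([frame() for _ in range(64)], axis=0)
calibrated = light - master_dark

print(f"structure removed: residual mean {calibrated.mean():.2f}")   # ~0
print(f"noise before: {read_sigma:.2f}, after: {calibrated.std():.2f}")
# std after ~ sqrt(5^2 + 5^2/64) ~ 5.02: the structure is gone,
# but the noise went UP slightly; calibration never reduces it.
```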
 
Thank you for the correction. But I have indeed been successful in "lowering" the read noise by using a slightly higher gain, and in taming the USB interference that usually causes the horizontal waves that appear in bias and dark frames by lowering the USB traffic setting. That was the noise I was talking about, and I should not have lumped in dark current, because that is a structure that doesn't change very often. I will try to be more exact next time, as I don't want to cause confusion, so I updated my post. Feel free to correct me anytime, as I want to be as accurate as possible myself :)
 
Oh, I don't mean to suggest that there aren't technical ways of reducing instrumental noise. Obviously, cameras have gotten better over the years. (I've been designing CCD cameras since the 1970s.) Just that you can't remove noise once it is present in the data, or calibrate it away.

No camera should create any noise at all because of how its USB (or other data channel) is read. Any camera that does is either damaged, or the design is bad.
 