WBPP latest build: possible memory leak?

Al Ros

Member
Hi

I have been using WBPP with great success, but recently, when running larger data sets (1000 subframes), the process bogs down at image integration (i.e. running for 24 hrs, when image integration done manually on the WBPP-registered images only takes 3 hours). I have a Windows 10 i7 (4 years old), 32 GB RAM, and two fast solid state drives for cache.

Yesterday, the process crashed the computer and low resources were mentioned. Not sure if anyone else has experienced this. This is my first time using large subframe sets of over 1000 images.

Also, if I rerun the WBPP process, in contrast to a video by Adam Block, WBPP does not recognize that the initial steps were completed and appears to restart all the calibration steps rather than skip ahead to the incomplete integration.

best wishes
Al
 
Hi @Al Ros,
You probably encountered a memory exhaustion problem. When a process runs out of all available memory (RAM plus virtual memory), the OS kills it.
What you can do is increase the virtual memory managed by your OS; there are several guides around on how to do that.
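For scale, here is a rough back-of-the-envelope sketch of why 1000 subframes can exhaust RAM. It assumes, hypothetically, that the integrator needs one float32 sample per pixel per frame resident for the rejection stack; real implementations stream data in buffers, but the order of magnitude is the point:

```python
# Rough peak-memory estimate for stacking n_frames subframes,
# assuming (hypothetically) one float32 sample per pixel per frame.
def stack_memory_gib(n_frames, width, height, bytes_per_sample=4):
    return n_frames * width * height * bytes_per_sample / 2**30

# 1000 subframes from a 24 MP sensor (6000 x 4000 pixels):
print(round(stack_memory_gib(1000, 6000, 4000), 1))  # ~89.4 GiB, far beyond 32 GiB of RAM
```

With numbers like these, spilling into the paging file (or failing outright) is expected behavior, not necessarily a leak.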

Regarding WBPP not tracking its execution once PI has been killed: I will integrate a strategy to incrementally track the execution and update the cache while running. This will allow you to skip the steps that already executed successfully.
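A minimal sketch of what such incremental tracking could look like (hypothetical file name and step names, not WBPP's actual cache format): persist each completed step immediately, so a rerun after a crash skips straight to the unfinished work.

```python
# Hypothetical step-level checkpointing sketch: record each completed
# pipeline step in a JSON file so a rerun can skip it.
import json
import os

CACHE = "wbpp_progress.json"  # hypothetical cache file name

def load_done(path=CACHE):
    # Return the set of steps completed in previous runs, if any.
    return set(json.load(open(path))) if os.path.exists(path) else set()

def run_step(name, fn, done, path=CACHE):
    if name in done:
        return  # already completed in a previous run, skip it
    fn()
    done.add(name)
    json.dump(sorted(done), open(path, "w"))  # persist before the next step

done = load_done()
for step, fn in [("calibration", lambda: None),
                 ("registration", lambda: None),
                 ("integration", lambda: None)]:
    run_step(step, fn, done)
```

If the process dies mid-integration, the next run reloads the cache and only the integration step re-executes.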

Robyx
 
Thanks so much, Robyx! I will try your tips, and thank you for considering incremental tracking of WBPP's completed steps.

best wishes
Al
 
Thanks @robyx, I'm also running into this (was trying a dataset of 2330 images; looks like 64 GB of RAM isn't enough after it failed 7 hours in, and I'm not sure 128 GB would be enough either). Being able to pick up where it left off would be great once I figure out where I need to be memory-wise. If we were to do the integration manually, what keyword is used for the weighting? I also noticed it still tries to do the drizzle integration even if the regular integration failed.
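I can't confirm which FITS keyword your WBPP version writes the weights under (check its weight-keyword setting rather than trusting a guess here), but once you have a per-frame weight, manual integration applies it as a weighted mean. A pure-Python sketch with made-up numbers:

```python
# Weighted mean of per-frame values; the weights would come from
# whatever FITS keyword WBPP wrote into each registered subframe.
def weighted_mean(values, weights):
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Same pixel sampled from three subframes, with made-up quality weights:
pixels = [100.0, 104.0, 98.0]
weights = [0.9, 1.0, 0.6]
print(weighted_mean(pixels, weights))  # 101.12
```

ImageIntegration does this per pixel (with rejection on top), which is why it must hold or stream the whole stack.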

The landscape of image acquisition has rapidly changed in recent years with low-noise, high-resolution cameras; massive datasets seem to be the norm these days. Do you know if this is having any influence on the pre-processing tools or how they use hardware? A Beowulf cluster comes to mind.
 
Good points, mar504. I am going to be replacing my computer soon, and it would be great to get some guidance for these large data sets. You also asked what I forgot to ask: what keyword is used for the weighting?

thanks
Al
 
> I also noticed it still tries to do the drizzle integration even if the regular integration failed.
Good point: trying to drizzle when ImageIntegration fails is pointless, since ImageIntegration would not update the drizzle files. I will update WBPP accordingly.


> The landscape of image acquisition has rapidly changed in recent years with low-noise, high-resolution cameras; massive datasets seem to be the norm these days. Do you know if this is having any influence on the pre-processing tools or how they use hardware? A Beowulf cluster comes to mind.
Yes, and we are aware of this. Indeed, one current line of development is about handling such large amounts of data.
New processes are under development to handle this specific case.
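On the cluster idea: per-subframe calibration is independent, so it parallelizes naturally across workers. A toy sketch of the pattern with fake frame data (this is not how PixInsight actually distributes work):

```python
# Per-subframe calibration is embarrassingly parallel: each frame can
# be handed to a separate worker process. Frame data here is fake.
from concurrent.futures import ProcessPoolExecutor

def calibrate(frame):
    # Stand-in for dark subtraction / flat division of one subframe.
    return [p - 10 for p in frame]

def calibrate_all(frames, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(calibrate, frames))

if __name__ == "__main__":
    frames = [[100, 110], [120, 130]]
    print(calibrate_all(frames))  # [[90, 100], [110, 120]]
```

Integration is the harder step to distribute, since every output pixel needs samples from every frame.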
 
Thank you again @robyx for the tip about increasing the paging file; this allowed me to complete the integration. It took my Ryzen 3900XT 3 days and 5 hours, but I now have an image ready for processing (woohoo)! If any future pre-processing tools need testing for GPU acceleration, require multiple PCs for parallelization, or need some huge data sets, I'm happy to help test!
 
> Yes, and we are aware of this. Indeed, one current line of development is about handling such large amounts of data. New processes are under development to handle this specific case.
Hi Robyx, is the current development going in the direction of GPU acceleration? I am about to buy a new laptop and have a choice between integrated Xe graphics or RTX 3060 graphics. I generally keep a laptop for 4 years.

Thank you,
Roger
 
> The landscape of image acquisition has rapidly changed in recent years with low-noise, high-resolution cameras; massive datasets seem to be the norm these days. Do you know if this is having any influence on the pre-processing tools or how they use hardware? A Beowulf cluster comes to mind.
As CMOS cameras have lower read noise than CCDs, Dr. Robin Glover recommends reducing exposure times for better guiding and less wasted capture time if a cloud moves by. These many short subs add to the file count and the gigabytes to be processed, etc. Currently I am going in that direction with my CMOS camera, but I am live stacking 20 x 30 s frames (= 600 s exposure) instead of saving the individual files, so I need 1/20th the drive space. Previously, with CCD, I imaged 600 s subframes.
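The arithmetic behind the 1/20th figure, spelled out (assuming each saved frame is roughly the same file size):

```python
# Live stacking 20 x 30 s subs into one saved frame vs saving every sub.
subs_per_stack = 20
exposure_s = 30

total_exposure = subs_per_stack * exposure_s  # equals one 600 s CCD sub
files_saved_ratio = 1 / subs_per_stack        # fraction of drive space needed
print(total_exposure, files_saved_ratio)      # 600 0.05
```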

Diffraction Limited (SBIG) now has CMOS cameras that live stack 16 frames before download. My poor man's solution is to use SharpCap to live stack 20 x 30 s subs. I do not drizzle, as I am not under-sampled. SharpCap can filter bad images on the fly by monitoring 1) clouds (brightness changes) and 2) bumps or wind pushes to the OTA (FWHM changes), automatically rejecting those frames. This is much better than rejecting an entire 600 s sub for 1 minute of clouds.

Roger
 
> Hi Robyx, is the current development going in the direction of GPU acceleration? I am about to buy a new laptop and have a choice between integrated Xe graphics or RTX 3060 graphics. I generally keep a laptop for 4 years.
We have very promising preliminary GPU results, but this development branch is currently on hold. Still, speeding up critical processes has become a priority topic due to the increasing amount of data the average user has to process. So I expect that this branch will continue and, hopefully, the first GPU-boosted processes will be distributed next year.
 
Robyx,
Thank you for your reply. Getting the wide range of computers and GPUs to function well enough for release must be a huge task; too many hardware, software, and operating system combinations to deal with. We thank you for your efforts and hope that 2023 will have the GPU acceleration path dusted off and raised to a priority.
Roger
 