...The worst part may be GradientMergeMosaic. It will have to generate temporary working duplicates. Georg should have more precise information here....
You are probably right:
- GradientMergeMosaic currently converts everything to double precision, holding all channels of an image concurrently.
- It processes one image at a time, and holds gradients in the X and Y directions plus a weight matrix (needed to determine which parts of the image actually contain data rather than background, and to support feathering).
- Finally, it sets up the discrete Laplace (Poisson) equation and solves it via FFT, which also requires considerable memory (and runtime); a rough sketch of this gradient-domain step follows below.
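To make the memory footprint a bit more tangible, here is a minimal sketch of the general gradient-domain merging idea (blend per-image gradients, then integrate by solving a Poisson equation via FFT). This is not the GradientMergeMosaic source; the function names, the periodic boundary handling, and the use of NumPy are my own assumptions for illustration. Note how every working array (gradients, weights, FFT buffers) is a full-size double-precision copy of the mosaic:

```python
import numpy as np

def poisson_solve_fft(div):
    """Solve the discrete Poisson equation laplacian(u) = div with periodic
    boundaries by diagonalizing the 5-point Laplacian via FFT.
    (Sketch only; real implementations handle boundaries more carefully.)"""
    h, w = div.shape
    wy = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(h)) - 2.0
    wx = 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(w)) - 2.0
    denom = wy[:, None] + wx[None, :]   # eigenvalues of the discrete Laplacian
    denom[0, 0] = 1.0                   # DC term is unconstrained (mean offset)
    u_hat = np.fft.fft2(div) / denom    # full-size complex double buffer
    u_hat[0, 0] = 0.0
    return np.fft.ifft2(u_hat).real

def merge_gradients(images, weights):
    """Blend per-image gradients using the weights, then integrate.
    images, weights: lists of equally sized 2D float64 arrays."""
    gx = np.zeros_like(images[0])
    gy = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for img, wgt in zip(images, weights):
        gy_i, gx_i = np.gradient(img)   # gradients in Y and X
        gx += wgt * gx_i
        gy += wgt * gy_i
        wsum += wgt
    wsum[wsum == 0] = 1.0               # avoid division by zero outside coverage
    gx /= wsum
    gy /= wsum
    # the divergence of the merged gradient field is the Poisson right-hand side
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    return poisson_solve_fft(div)
```

Even in this stripped-down form, a single-channel mosaic needs several full-resolution double arrays at once, which is where the memory goes.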
I never looked into saving memory (which no doubt would be possible), into the runtime scaling (it's probably linear in the number of images and n·log(n) in the number of pixels, but I don't know for sure), or into processing huge mosaics (which would be possible with different algorithms). My recommendation, which probably applies to any processing involving unusually large data:
- Start by doing tests with a downsampled version of the problem. Quality is not important here; observe runtime and memory consumption.
- Proceed by enlarging the problem by factors of 1.5 or 2, again watching runtime and memory consumption.
- When you have enough runs, extrapolate the required memory and runtime and see whether this is within your reach (a small extrapolation sketch follows this list).
- If you don't do this, you will spend a lot of time on runs that ultimately fail, or that run "forever" until you cancel them (and you will never know whether they were just seconds away from producing a result).
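To make the extrapolation step concrete, here is a small sketch of fitting a power law to a handful of test runs and projecting it to the full-size mosaic. The measurements below are placeholders, not real numbers, and the NumPy log-log fit is just one simple way to do it:

```python
import numpy as np

# Hypothetical measurements from downsampled test runs:
# mosaic width in pixels, runtime in seconds, peak memory in MB.
sizes   = np.array([1000, 1500, 2250, 3375])
runtime = np.array([4.1, 9.8, 23.0, 55.0])
memory  = np.array([180, 400, 900, 2000])

def fit_power_law(x, y):
    """Fit y ~ a * x^b in log-log space; returns (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

target = 20000  # intended full-resolution mosaic width (placeholder)
for name, y in (("runtime [s]", runtime), ("memory [MB]", memory)):
    a, b = fit_power_law(sizes, y)
    print(f"{name}: exponent ~ {b:.2f}, extrapolated at {target}px: {a * target**b:,.0f}")
```

If the extrapolated memory clearly exceeds what your machine has, you know before the first full-size run that a different approach is needed.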
I currently don't have the time to optimize GradientMergeMosaic...but the source is out there for any volunteers.
Georg