Is there a reason debayering and noise estimation are sequential operations in the current version? The total processing time would decrease quite a bit if they could run in parallel, as CosmeticCorrection currently does. Using only 1 of 8 cores seems like a waste of resources.
Craig
Juan, any chance more multi-threading can be added to these areas?
Craig
Hi Craig,
All processes currently executed by the BatchPreprocessing script are parallelized. However, not all of them are parallelized in the same way, nor do all of them use multithreading with the same efficiency for large batch tasks.
The ImageCalibration and CosmeticCorrection tasks use high-level parallelization. Basically, to process N files with P worker threads, each worker thread runs independently on a sublist of roughly N/P files. There is an additional coordination thread that supervises the worker threads and handles file reading/writing tasks. For large file lists, this is usually the most efficient implementation. ImageIntegration also uses a very efficient multithreading scheme.
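To illustrate the idea, here is a minimal conceptual C++ sketch of that high-level scheme. This is not actual BPP/PCL code; the file names, the processFile() helper and the thread count are made-up assumptions for the example.

```cpp
// Conceptual sketch only -- not PCL or BPP code. File names and
// processFile() are illustrative assumptions.
#include <algorithm>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical per-file task (e.g. calibration of one frame).
void processFile( const std::string& path )
{
   std::cout << "processing " << path << '\n';
}

int main()
{
   std::vector<std::string> files = { "f1.fit", "f2.fit", "f3.fit", "f4.fit",
                                      "f5.fit", "f6.fit", "f7.fit", "f8.fit" };
   unsigned P = std::max( 1u, std::thread::hardware_concurrency() );
   std::vector<std::thread> workers;

   // High-level parallelization: each worker gets an independent sublist of
   // about N/P files and runs the whole per-file pipeline on its own core.
   for ( unsigned t = 0; t < P; ++t )
      workers.emplace_back( [&files, t, P]()
      {
         for ( size_t i = t; i < files.size(); i += P ) // strided sublist
            processFile( files[i] );
      } );

   // The main thread plays the role of the coordination thread: it
   // supervises the workers (and would handle file I/O in the real script).
   for ( auto& w : workers )
      w.join();
}
```

With many files, all cores stay busy for nearly the whole run, which is why this scheme scales well for large batches.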
Other processes use low-level parallelization, which in some cases is not appropriate for the BPP script. For example, some parts of the StarAlignment task are parallelized, such as the star matching and RANSAC routines. This is very efficient for processing a few images (e.g. for mosaic construction, or to align fewer images than the number of available processors), but rather inefficient for aligning large sets of disk files. A future version of the StarAlignment process will use a high-level parallelization scheme, which will improve the performance of the BPP script considerably. This is near the top of the to-do list.
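For contrast, here is a rough sketch of the low-level approach. Again, this is purely illustrative and not the real StarAlignment code; matchChunk() and alignOneImage() are hypothetical stand-ins. The file list is walked one image at a time, and only an inner stage is split across threads, so the serial per-image work and file I/O dominate when there are many files.

```cpp
// Conceptual contrast -- not real StarAlignment code. Files are handled
// sequentially; only an inner stage (a stand-in for star matching or
// RANSAC) is split across threads for each image in turn.
#include <algorithm>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Hypothetical expensive inner stage operating on a chunk of per-image data.
void matchChunk( std::vector<int>& stars, size_t begin, size_t end )
{
   for ( size_t i = begin; i < end; ++i )
      stars[i] *= 2; // placeholder work
}

void alignOneImage( const std::string& path )
{
   std::vector<int> stars( 10000, 1 ); // stand-in for detected stars
   unsigned P = std::max( 1u, std::thread::hardware_concurrency() );
   std::vector<std::thread> workers;
   size_t chunk = (stars.size() + P - 1)/P;
   for ( unsigned t = 0; t < P; ++t )
   {
      size_t begin = t*chunk, end = std::min( stars.size(), begin + chunk );
      if ( begin < end )
         workers.emplace_back( matchChunk, std::ref( stars ), begin, end );
   }
   for ( auto& w : workers )
      w.join();
   // File reading, star detection and file writing remain serial per image,
   // so with many disk files the cores sit idle much of the total run time.
}

int main()
{
   std::vector<std::string> files = { "a.fit", "b.fit", "c.fit" };
   for ( const auto& f : files )   // images processed one after another
      alignOneImage( f );
}
```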
The whole PixInsight platform will be improved significantly for multithreaded execution in future versions. There is still a lot of work to do in this regard.