Author Topic: PI could run much faster if Graphics card CUDA parallel processing were enabled  (Read 6671 times)

Offline hydrogenalphaspace

  • Newcomer
  • Posts: 1
From the System Requirements section of the PI website:
 GPU Acceleration
"As of writing this document (February 2014), the current versions 1.8.x of PixInsight don't make direct use of Graphics Processing Units (GPUs). Hopefully this is going to change during 2015. We are working to implement GPU acceleration via CUDA programming on systems with NVIDIA graphics cards. "

Any word on the timeline to implement CUDA utilization?

My GTX 980 GPU has 2048 cores that can do math calculations simultaneously.
Instead, PI uses only the 6 cores of my i7 CPU.

"CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU)."
...GPU computing is possible because today's GPU does much more than render graphics: It sizzles with a teraflop of floating point performance and crunches application tasks designed for anything from finance to medicine."


http://www.nvidia.com/object/cuda_home_new.html

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Quote
Any word on the timeline to implement CUDA utilization?

Not one word, but three: time, time, and time. Aside from Carlos Milovic, who is working on critical development projects such as TGV, and a small group of developers who are actively working on very specific areas (Mike Schuster, Andrés del Pozo, Georg Viehoever, Klaus Kretzschmar, and a few others), I am completely alone. I just cannot afford to stop everything for months to start a CUDA support implementation in PixInsight. I must work on hundreds of priority tasks, including bug fixes, compatibility problems on Windows, OS X and new Linux desktops, user support, forum and website maintenance, new tools, improvements to the JavaScript and C++ development frameworks, GUI improvements, and testing everything. Not to mention documentation, which is our eternal pending task, and writing development and image processing tutorials, which are essential tasks that I have had to abandon completely.

That said, the importance of GPU acceleration in an application like PixInsight (and, to the same extent, any high-level 2D image processing application) is being highly overestimated, IMHO. It's true that a relatively simple implementation would significantly speed up a number of tasks (basically, any task whose running time is dominated by convolutions), but the net benefit wouldn't really be important for most users. It is also true that there are other important CPU-intensive tasks, such as TGVDenoise, which would benefit significantly, but that would require a complex implementation and hence a lot of time.
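
To make the convolution point concrete, here is a minimal CUDA sketch of a naive 2D convolution kernel, the kind of per-pixel, data-parallel work a GPU handles well. The kernel name, single-channel row-major layout and clamped borders are assumptions made for illustration; this is not PixInsight code.

Code:
// Hypothetical illustration only; not PixInsight code.
// Naive 2D convolution: each CUDA thread computes one output pixel.
// src/dst are single-channel float images in row-major order; borders are clamped.
__global__ void convolve2D(const float* __restrict__ src, float* __restrict__ dst,
                           const float* __restrict__ kern,
                           int width, int height, int kRadius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    int kSize = 2 * kRadius + 1;
    float acc = 0.0f;
    for (int dy = -kRadius; dy <= kRadius; ++dy)
        for (int dx = -kRadius; dx <= kRadius; ++dx)
        {
            // Clamp source coordinates at the image borders.
            int sx = min(max(x + dx, 0), width - 1);
            int sy = min(max(y + dy, 0), height - 1);
            acc += src[sy * width + sx] * kern[(dy + kRadius) * kSize + (dx + kRadius)];
        }
    dst[y * width + x] = acc;
}

// Example launch for a 4096x4096 frame with 16x16 thread blocks:
//   dim3 block(16, 16), grid((4096 + 15) / 16, (4096 + 15) / 16);
//   convolve2D<<<grid, block>>>(d_src, d_dst, d_kern, 4096, 4096, kRadius);

A real implementation would add shared-memory tiling or switch to an FFT-based approach for large kernels, but even this naive form shows why convolution-dominated tasks map so naturally to thousands of GPU cores.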

We have many important priorities ahead of GPU acceleration. For example, a critical process such as StarAlignment would benefit enormously from a high-level CPU parallelization, similar to the one that I implemented for ImageCalibration. This would accelerate image preprocessing by at least one order of magnitude, especially for large data sets. Another example: an integrated C++ compiler. This would simplify deployment of open-source modules, making C++ development as easy, robust and platform-independent as JavaScript development in PixInsight. Currently I prefer to invest our very limited resources in writing and testing exciting new tools, such as TGVRestoration and new background modeling and image registration tools. The XISF project is also very important, even though many people do not, or do not want to, understand it.
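
For readers wondering what "high-level parallelization" means here, the sketch below dispatches whole frames to CPU worker threads with std::async, so registration of a large data set scales with the number of cores. It is plain C++ host-side code; the registerFrame function and the frame list are hypothetical placeholders, not PixInsight's actual API.

Code:
// Hypothetical sketch of frame-level (high-level) CPU parallelization.
#include <future>
#include <string>
#include <vector>

// Placeholder for the per-frame work (star detection, matching, resampling, ...).
static bool registerFrame(const std::string& framePath)
{
    return !framePath.empty(); // stand-in for the real registration work
}

static std::vector<bool> registerAll(const std::vector<std::string>& frames)
{
    std::vector<std::future<bool>> tasks;
    tasks.reserve(frames.size());
    for (const std::string& path : frames)
        // Each frame becomes an independent task that may run on its own thread.
        tasks.push_back(std::async(std::launch::async, registerFrame, path));

    std::vector<bool> results;
    results.reserve(tasks.size());
    for (auto& t : tasks)
        results.push_back(t.get()); // wait for every frame to finish
    return results;
}

A production implementation would cap the number of frames in flight to the available cores and memory, but the dispatch pattern is the same: parallelize across frames instead of only inside each operation.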

This is not to say that I don't want to implement GPU support in PixInsight. It's just the opposite, but unfortunately it is something we cannot afford at present. Hopefully in 2016, but I'm not sure.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline mcampagna

  • Newcomer
  • Posts: 4
Hi Juan and Team,

Sorry to resurrect such an old thread, but this is the most current statement I can find from the PI team on this topic. I totally get that you're focusing limited resources on your priorities, but as someone with a decent NVIDIA GPU in my workstation, I'm wondering if there are any more recent updates on the outlook for GPU support implementation in PI.

Still on the back burner?

Thanks,
Matt

Offline msmythers

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1178
    • astrobin
Matt

I have a very new 10-series NVIDIA card and would love to see it used if it would help. I bought the card for its built-in H.265 encoding abilities, not for gaming.

I don't know if you saw this posting. Carlos had a comment at the end of the thread that I found interesting.
https://pixinsight.com/forum/index.php?topic=9689.msg61526#msg61526

Don't forget PixInsight is multi-platform, with all the extra headaches that brings to coding. Implementing this across all 4 systems might be a nightmare.


Mike

Offline mcampagna

  • Newcomer
  • Posts: 4
Thanks Mike, that is an interesting post and I had not come across it. I admit I had not considered platform differences; I am pretty naive here, having no software development background.

Don't get me wrong; while GPU acceleration is definitely on my 'wish list', I do appreciate all the continuous platform improvements from the PI team.


Offline trev27

  • Newcomer
  • Posts: 3
Agreed. I've been keeping an eye on the CUDA news for a while as well. My notebook has a 970M and I have a GTX 1070 on the way. I'd love to make use of all those cores, but as a software developer I can see how big an ask it is, even more so because of PixInsight's multi-platform codebase.

But what a great product. I'd be lost without it. Gone are the days of the marathon Photoshop sessions. I will continue to love it, and when or if CUDA comes, I will love it even more  :)

T

Offline ChoJin

  • PixInsight Addict
  • ***
  • Posts: 106
IMHO, having implemented a bunch of GPGPU algorithms myself, I think CUDA is a bad choice.

OpenCL would be more appropriate: it works with any graphics card and you'll get the same performance.