GPU acceleration for PixInsight on Linux (Kubuntu or Ubuntu) using RC Astro tools (e.g. StarXTerminator) or StarNet++

lblock

I just purchased a Windows HP laptop with a Ryzen 9 processor and an Nvidia RTX 4070 GPU. After many attempts I found a way to make GPU acceleration work under Kubuntu, which PixInsight recommends. These steps come from various posts and from my own experience.


INSTALL UBUNTU


Get a USB stick with 64 GB or greater.

First, install Ubuntu.

Download Rufus:

https://rufus.ie/downloads/

Download the Ubuntu 22.04 ISO:

https://ubuntu.com/download/desktop

Insert the USB stick in your computer.

In Rufus, select your USB stick as the device and the Ubuntu ISO as the boot selection. Change the partition scheme from MBR to GPT.

Rufus will copy the Ubuntu ISO to the USB stick.

Insert the USB stick you created with Rufus.

Reboot and press the key that gets you into your BIOS.

In the BIOS, set your computer to boot from USB; put it first in the boot order.

Save and reboot.

Ubuntu will boot up.

You will see two choices: try Ubuntu or install Ubuntu.

I selected to install Ubuntu with the normal installation. Do not select "Install third-party software for graphics"; you will install the graphics driver later.

I selected to install Ubuntu alongside Windows 11, then dragged the divider to allocate the amount of drive space I wanted for Ubuntu. This adds a boot menu that appears when you start the computer; when you reboot, choose the Ubuntu option.
You can still use Windows by selecting the Windows option.
If you boot into Windows, it may change the boot order so that the machine starts in Windows from then on. To change that, go into your BIOS and set the boot order to Ubuntu first and Windows second.




Once Ubuntu is installed and you have booted into it, open the Ubuntu terminal


First, update and upgrade your machine:

sudo apt update
sudo apt upgrade



Install the GCC compiler

sudo apt install build-essential
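
If you want to double-check that the compiler is in place before continuing (an optional sanity check, not part of the original steps):

gcc --version

It should print the GCC version that build-essential installed.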



Install Nvidia driver 535


sudo apt install nvidia-driver-535



Check that it installed (if the command fails, reboot first and try again):


nvidia-smi


It will report driver version 535.161.07 and CUDA Version 12.2.
CUDA 12.2 is not actually installed; nvidia-smi shows the highest CUDA version that driver 535 can support.



Install CUDA 11.8

The following command has a truncated blue link. Follow the directions immediately below it for it to work.

sudo wget https://developer.download.nvidia.c...al_installers/cuda_11.8.0_520.61.05_linux.run

** Note the above blue link is truncated. Right click the above blue link and copy link address (this copies the entire link). Then in the Ubuntu terminal type "sudo wget " (make sure there is a space after wget) and paste the link after that (right click paste). Then press enter and it should work. This is the only truncated link in this guide. You can copy and paste all subsequent commands as you normally would into the Ubuntu terminal **


sudo sh cuda_11.8.0_520.61.05_linux.run

During the install it may say you already have a driver installed and recommend that you not continue. Bypass this and continue.

It will ask you to accept the license; type accept. On the next screen, if the CUDA driver component is selected, unselect it (you already installed driver 535). Do not change the other options. The installation will take a little time.

Next, edit the ~/.bashrc environment file:

sudo nano ~/.bashrc

Go to the bottom of the file and paste the following lines:


export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"


Press Ctrl+O and then Enter (to write the file), then Ctrl+X to exit the editor.

Reload the environment:

source ~/.bashrc
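
As an optional check that the two variables took effect in the current shell:

echo $PATH
echo $LD_LIBRARY_PATH

Both should now include the /usr/local/cuda paths you just added.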


Then register the CUDA library directory with the dynamic linker:


echo "/usr/local/cuda/lib64" | sudo tee -a /etc/ld.so.conf
sudo ldconfig
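
At this point you can also verify the toolkit itself (assuming the runfile installed to the default /usr/local/cuda location):

nvcc --version

The output should report release 11.8, confirming that the 11.8 toolkit is what is actually installed (not the 12.2 shown by nvidia-smi).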



Install cuDNN 8.8.0


In your browser, go to:

https://developer.nvidia.com/cudnn


Join the website.

In order to download cuDNN, you need to be registered for the NVIDIA Developer Program.


You will need to install Google Authenticator on your phone for this (iPhone users can get it from the App Store):

https://apps.apple.com/us/app/google-authenticator/id388497605

Then join the NVIDIA Developer Program:

https://developer.nvidia.com/login

The NVIDIA developer site will have you scan a QR code to add the account to the authenticator. Use the authenticator app to log in for the security challenge (you will enter a 6-digit code).



Now go to the cuDNN download page.

Select Download cuDNN Library.

Agree to the terms.

Select Archived cuDNN Releases (below the currently recommended cuDNN version).

Select Download cuDNN v8.8.0 (February 7th, 2023), for CUDA 11.x.

It will expand to show the different packages.

Select Local Installer for Ubuntu22.04 x86_64 (Deb).

You will download this file:

cudnn-local-repo-ubuntu2204-8.8.0.121_1.0-1_amd64.deb

Right-click the downloaded file and rename it to cudnn.deb.

Then unpack it:

mkdir cudnn_install
mv cudnn.deb cudnn_install
cd cudnn_install
ar -x cudnn.deb

This extracts several new files, one of which is data.tar.xz.

Extract that file:

tar -xvf data.tar.xz

New folders will be extracted. Go to the var/cudnn-local-repo-ubuntu2204-8.8.0.121/ folder and install the libcudnn 8.8.0 packages:



cd var/cudnn-local-repo-ubuntu2204-8.8.0.121/
sudo dpkg -i libcudnn8_8.8.0.121-1+cuda11.8_amd64.deb
sudo dpkg -i libcudnn8-dev_8.8.0.121-1+cuda11.8_amd64.deb
sudo dpkg -i libcudnn8-samples_8.8.0.121-1+cuda11.8_amd64.deb

the library files install to the /usr/lib/x86_64-linux-gnu/ folder
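
To confirm the packages registered correctly (an optional check, not in the original guide):

dpkg -l | grep libcudnn8
ls /usr/lib/x86_64-linux-gnu/libcudnn*

You should see the three libcudnn8 packages at version 8.8.0.121-1+cuda11.8 and the corresponding shared libraries in that folder.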



Install TensorFlow libraries

You need version 2.14.0, which works with CUDA 11.8 (2.12.0 and 2.13.0 also work). I got this information from the tested-configurations table at https://www.tensorflow.org/install/source#gpu
Posts that tell you to go to https://www.tensorflow.org/install/lang_c and select the Linux GPU version will have you download version 2.15.0, which will NOT work with CUDA 11.8. The URL below points to 2.14.0, the last version that works with CUDA 11.8.

Now that I have explained this, enter the following URL in your browser:

https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz

Hit Enter and the file will download.

Now install it and configure the environment so that PixInsight can see and use these TensorFlow libraries:

cd ~/Downloads
sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz
sudo ldconfig /usr/local/lib
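
To verify the dynamic linker can now find the GPU TensorFlow libraries (optional check):

ldconfig -p | grep libtensorflow

The entries should point to /usr/local/lib, e.g. libtensorflow.so.2 and libtensorflow_framework.so.2.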



Install PixInsight for Linux

https://pixinsight.com/downloads/index.html
Select Software Distribution.
Select the Linux version.
Download it.
PI-linux-x64-1.8.9-2-20231019-c.tar.xz is the latest version as of this post.

To install (run these in the folder where the file downloaded, e.g. ~/Downloads):

tar -xf PI-linux-x64-1.8.9-2-20231019-c.tar.xz (or whatever the latest version is)
sudo ./installer


Configure PixInsight

The TensorFlow libraries that ship with PixInsight are in the /opt/PixInsight/bin/lib folder.

Move them to a temp folder so that PixInsight uses the new TensorFlow libraries that work with CUDA 11.8:

sudo mkdir /opt/temp
cd /opt/PixInsight/bin/lib
sudo mv libtensorflow* /opt/temp
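
As a quick check that the bundled copies are out of the way (my suggestion, not part of the original steps):

ls /opt/temp
ls /opt/PixInsight/bin/lib/libtensorflow* 2>/dev/null

The first command should list the bundled libtensorflow files you just moved; the second should print nothing.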

Now edit the .bashrc file again and add another environment variable:

sudo nano ~/.bashrc

Scroll to the bottom and paste:

export TF_FORCE_GPU_ALLOW_GROWTH="true"

Then Ctrl+O and Enter to write, and Ctrl+X to exit. Run source ~/.bashrc (or open a new terminal) so the variable is set before you launch PixInsight.
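
One simple way to confirm the GPU is actually being used (a suggestion, not part of the original steps): open a second terminal and run

watch -n 1 nvidia-smi

while StarXTerminator or StarNet++ is processing an image. GPU utilization should jump and the PixInsight process should appear in the nvidia-smi process list.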

You are done !!

At this point you can use Ubuntu, but if you want the Kubuntu (KDE) environment, which is similar to Windows, do this:

Install Kubuntu (KDE plasma desktop)

(I like the standard version)

sudo apt install kde-standard

During the long installation a screen will come up asking you to choose gdm3 or sddm. Select sddm.

After Kubuntu was installed, I found that an on-screen keyboard came up on boot; I had to select the keyboard symbol to get past it to the login screen.

I disabled it by doing the following:

sudo nano /etc/sddm.conf

Now copy and paste the following line:

InputMethod=

Ctrl+O and Enter to write the file, and Ctrl+X to exit.
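
If the on-screen keyboard still appears after this, note that on some systems the setting reportedly needs to sit under a [General] section header for SDDM to pick it up; in that case /etc/sddm.conf would look like this:

[General]
InputMethod=

Reboot afterwards to see the change.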

If all goes well, StarXTerminator will speed up. On my computer StarXTerminator took 40 seconds without GPU acceleration and 13.9 seconds with it. PixInsight is also faster under Kubuntu than under Windows 11.

Installation of future versions of PixInsight

When you reinstall PixInsight, remove the TensorFlow libraries that the installer puts back. This ensures the 2.14.0 TensorFlow libraries in /usr/local/lib continue to be used.

Do not start PixInsight until you have entered the following:

sudo rm /opt/PixInsight/bin/lib/libtensorflow*

After doing this you can start PixInsight and keep GPU acceleration.
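
A quick way to confirm everything is still wired up after a reinstall (optional check):

ls /opt/PixInsight/bin/lib/libtensorflow* 2>/dev/null
ldconfig -p | grep libtensorflow

The first command should print nothing, and the second should still point to the 2.14.0 libraries in /usr/local/lib.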
 
Thanks for this. I got it working on debian 12 a while back after losing all my hair and time. Then I decided to wipe my computer and reinstall recently, and didn't want to go through this again. When I found this it worked the first time. THANKS!

One thing to note, your hyperlink for tensorflow still points to 2.15 above. But changing it to 2.14 worked. It wasn't clear if your instructions implied that I still needed to change it after, or if the hyperlink is indeed textually listing 2.14, but linked to 2.15. Anyway, awesome job. It's hard to live without GPU after using it! I'm on a 3060 12GB and it is easily 4 to 5 times faster than my beefy 16core/32thread Ryzen.
 
Thank you for your positive review and for pointing out that the hyperlink in the TensorFlow library section did not work properly. I have since replaced and tested the hyperlink, and version 2.14 of the TensorFlow libraries now downloads correctly. I am glad this guide was helpful to you!
 
Great guide! Also got it running on WSL (Ubuntu 22.04) with GPU acceleration. Used the latest driver and all seems to work fine.
 
Rob

Yes, it's worked for some time but I hadn't tried it recently; certainly not since the update to Qt6. I can create a separate thread linking to this one with the extra steps to set everything up under WSL. PxI is a lot faster in Linux as is well known and this extends to WSL. Main issue with WSL is the filesystem access speed which is pretty poorly handled. However, if my files are in WSL (which I can still access from within Windows), it's as fast as a native installation.

Roberto
 

That's really interesting that CPU performance is better. I have never studied how WSL works, but I guess there must be some hypervisor that both Windows and WSL sit on top of? I guess there must be some filesystem emulation going on, which might account for the slow filesystem speed.

I do have one Windows box but it can only run Windows 10, and I stopped messing with it when I stopped running Folding@Home. Can WSL work on W10 or does it require W11?

rob
 
WSL2 does run fine on Windows 10, but it requires manual installation (it is natively part of W11).
 
I can create a separate thread linking to this one with the extra steps to set everything up under WSL.
Please do so; my last experience with WSL2 and Linux was 3 years ago, for an installation controlling a robot and doing motion planning.
I am really interested in testing PI under a Linux environment.

Cheers
Tom
 
Thanks very much for this comprehensive guide!

There's a minor typo where we copy the tensorflow libraries to /usr/local: the filename in the tar command (libtensorflow-gpu-linux-x86_2.14.0.tar.gz) is missing the "64" and doesn't match the downloaded file. If you can still edit the post, please correct it.
Thank you for noticing that.

I replaced
sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_2.14.0.tar.gz
with
sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_64_2.14.0.tar.gz
 
Does it matter that the downloaded file is libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz while the tar command refers to libtensorflow-gpu-linux-x86_64_2.14.0.tar.gz? Are they the same file?
libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz
is the compressed tar file you download when you enter the url
https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz
It needs to be extracted, which is what the following line does (the filename passed to tar must exactly match the name of the downloaded file, so note the dash before 2.14.0):
sudo tar -C /usr/local -xzf libtensorflow-gpu-linux-x86_64-2.14.0.tar.gz
This extracts the TensorFlow libraries and puts them in the /usr/local/lib directory.
 
Hello,
I get this error when I run sudo ldconfig /usr/local/lib:
Code:
username@Inspiron-16-Plus-7620:~$ sudo ldconfig /usr/local/lib
/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link

/sbin/ldconfig.real: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 is not a symbolic link
Also, when I run PixInsight via the terminal I get these errors:
Code:
 PixInsight

PixInsight Core 1.8.9-2 Ripley (x64)
Copyright (c) 2003-2024 Pleiades Astrophoto

2024-02-18 11:50:41.349570: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-02-18 11:50:55.891371: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-18 11:50:56.873604: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled
Do you know where this could come from?
Thanks
 

It looks like there might be duplicates of the above libcudnn files in the /usr/local/cuda/targets/x86_64-linux/lib directory.
Take a look:
cd /usr/local/cuda/targets/x86_64-linux/lib/
ls libcudnn*
This will show all the libcudnn files in the directory. Delete any duplicates and see if that helps.
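
One more thing worth checking (a suggestion, not something I have had to do myself): the "is not a symbolic link" warnings usually mean a .so.8 entry is a plain copy of the library instead of a symlink. You can see which is which with

ls -l libcudnn*.so.8*

Symlinks show an arrow (->) pointing at the fully versioned file. If one of them is a regular file, recreating the link, for example (if the versioned file present is libcudnn.so.8.9.5):

sudo ln -sf libcudnn.so.8.9.5 libcudnn.so.8

usually makes the ldconfig warning go away. The warnings themselves are generally harmless.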
 
I get this
Code:
username@Inspiron-16-Plus-7620:~$ cd /usr/local/cuda/targets/x86_64-linux/lib/
username@Inspiron-16-Plus-7620:/usr/local/cuda/targets/x86_64-linux/lib$ ls
cmake                           libcudnn_adv_train.so.8         libcudnn_cnn_train_static.a     libcudnn.so
libcudadevrt.a                  libcudnn_adv_train.so.8.9.5     libcudnn_cnn_train_static_v8.a  libcudnn.so.8
libcudart.so                    libcudnn_adv_train_static.a     libcudnn_ops_infer.so           libcudnn.so.8.9.5
libcudart.so.11.0               libcudnn_adv_train_static_v8.a  libcudnn_ops_infer.so.8         libculibos.a
libcudart.so.11.8.89            libcudnn_cnn_infer.so           libcudnn_ops_infer.so.8.9.5     libOpenCL.so
libcudart_static.a              libcudnn_cnn_infer.so.8         libcudnn_ops_infer_static.a     libOpenCL.so.1
libcudnn_adv_infer.so           libcudnn_cnn_infer.so.8.9.5     libcudnn_ops_infer_static_v8.a  libOpenCL.so.1.0
libcudnn_adv_infer.so.8         libcudnn_cnn_infer_static.a     libcudnn_ops_train.so           libOpenCL.so.1.0.0
libcudnn_adv_infer.so.8.9.5     libcudnn_cnn_infer_static_v8.a  libcudnn_ops_train.so.8         stubs
libcudnn_adv_infer_static.a     libcudnn_cnn_train.so           libcudnn_ops_train.so.8.9.5
libcudnn_adv_infer_static_v8.a  libcudnn_cnn_train.so.8         libcudnn_ops_train_static.a
libcudnn_adv_train.so           libcudnn_cnn_train.so.8.9.5     libcudnn_ops_train_static_v8.a
I don't see any duplicates, or do different versions count as duplicates (8, 8.9.5, ...)?
Thanks
 