Is it possible to accelerate StarNet with GPU in linux?

airscottdenning

Is it possible to use a GPU-accelerated StarNet process in PI under linux?

All the instructions I've found in the forums have been for windows. Obviously the Windows registry hacks will not work under linux.

Has anybody done this? How do you set it up?
 
Given the lack of replies, I guess the answer is no.

So even though linux is the "reference" platform for PI, it may be that Windows is preferred?
 
StarNet is not part of PixInsight. It is a (cute but not very useful) third party process distributed with PixInsight.
 
StarNet is based on TensorFlow. Most of the instructions out there are about installing TensorFlow for Python. I think you need to find a suitable shared library (libtensorflow.so ?) or compile it yourself. I'm not sure about linux, but for windows I compiled a TensorFlow DLL with NVIDIA support and without AVX (I have a very old CPU without AVX but a quite decent NVIDIA card) and now I can use StarNet ...
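
Before swapping any libraries into PixInsight, a quick way to confirm the CUDA/cuDNN side of the setup is the Python tensorflow package. This is just a sketch: the Python package is separate from the libtensorflow.so that PixInsight loads, but it exercises the same CUDA libraries, so it's a cheap check that the driver/toolkit install works at all:

# Sanity check that a CUDA-enabled TensorFlow build can actually see the GPU.
# This uses the Python 'tensorflow' package, not the libtensorflow.so that
# PixInsight loads, but both depend on the same CUDA/cuDNN libraries.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))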
 
generally that's how it works. on unix-like systems there is a hierarchy of places where the dynamic loader knows to look for libraries. the StarNet module (on OSX anyway) tries to load tensorflow from @loader_path/libtensorflow.2.dylib, and in the end @loader_path resolves to ..../PixInsight/bin/, but there are probably other directories where you could place the tensorflow libraries and PI would be able to pick them up. i'm sure linux is similar enough that if you just substituted the right .so files for tensorflow, StarNet would then run on the GPU. that's the beauty of abstraction!

rob
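
Following up on the loader-path point, here is a small Python sketch (assumptions: the library is named libtensorflow.so.2 and sits in PixInsight's bin directory, e.g. /opt/PixInsight/bin/ — adjust to your install). It loads the library the way the dynamic loader would and prints the version reported by TF_Version() from the TensorFlow C API, which helps confirm which copy is actually being picked up after you swap in a GPU-enabled build:

import ctypes

# Assumed path -- adjust to your PixInsight install location.
lib = ctypes.CDLL("/opt/PixInsight/bin/libtensorflow.so.2")

# TF_Version() is part of the TensorFlow C API and returns a const char*.
lib.TF_Version.restype = ctypes.c_char_p
print("libtensorflow version:", lib.TF_Version().decode())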
 
yeah seriously. i have a bunch of cards i bought in 2020 to run protein folding simulations on covid and i really should sell them right now!
 
Maybe in the future PixInsight will be distributed with three TensorFlow DLLs: one for vintage processors with CUDA, another for not-so-old CPUs, and one for not-so-old CPUs with CUDA ... :LOL::ROFLMAO:
 

Or with a new and different method for star removal, probably with a less sexy name :LOL:. If I remember correctly, there were thoughts in the past about using total generalized variation for image inpainting, but I don't remember anything about how the star detection could be performed in a robust manner. We will see.
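
For background only (this is the standard second-order TGV regularizer from the literature, not anything confirmed to be planned for PI), a TGV-based inpainting model would look roughly like

\mathrm{TGV}_\alpha^2(u) = \min_{w} \; \alpha_1 \int_\Omega \lvert \nabla u - w \rvert \, \mathrm{d}x + \alpha_0 \int_\Omega \lvert \mathcal{E}(w) \rvert \, \mathrm{d}x ,

where \mathcal{E}(w) = \tfrac{1}{2}(\nabla w + \nabla w^{\top}) is the symmetrized gradient. Star removal as inpainting would then minimize \mathrm{TGV}_\alpha^2(u) over u subject to u = f on the pixels outside a detected star mask — the robust star detection is the part left open.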
 
here is how I did it

[attached screenshot: Screenshot from 2023-03-24 10-40-52.png]
 