Support for new Apple M1 "System-on-Chip" Processors

OK, that works. I had already checked that the quarantine bit wasn't set on the .so files, but after I renamed them to .dylib I didn't check again, and it was set. I guess the quarantine flag isn't normally set on .so files, so the renaming activated it. In any event, it seems to work. Has anyone tried Apple's Metal-optimized TensorFlow?
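For anyone hitting the same wall, here's a small shell sketch for spotting and clearing the quarantine flag. It assumes the default PixInsight install path and the renamed .dylib names used in this thread; adjust both if yours differ:

```shell
# Check each renamed library for the com.apple.quarantine flag (macOS).
# /Applications/PixInsight/bin is the default install path; adjust if needed.
PI_BIN="/Applications/PixInsight/bin"
for f in "$PI_BIN"/libtensorflow*.dylib "$PI_BIN"/StarNet-pxm.dylib; do
  if xattr "$f" 2>/dev/null | grep -q com.apple.quarantine; then
    echo "quarantined: $f"
  fi
done
# To clear just the quarantine flag on one file:
#   sudo xattr -d com.apple.quarantine "$PI_BIN/libtensorflow.2.dylib"
```

`xattr` with no options lists attribute names, so a file that prints nothing is clean; `-d com.apple.quarantine` removes only that flag, while `-c` clears all extended attributes.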

Alex
 
i was thinking about trying that... the problem is that all the build instructions on tensorflow.org are directed at the python libraries. so i'm not actually sure how to build the C API libraries for tensorflow that can accept that plugin.

or did apple actually provide complete dylibs with metal?
 
I tried following Apple's instructions here: https://developer.apple.com/metal/tensorflow-plugin/

Thing is, I have one of the new MacBook Pros with the M1 Max, and the procedure pulled down the native M1 version. Installing that version in PixInsight didn't work. I don't think you can mix x86 and ARM binaries like that. I'm not sure how to get the Intel version of the bits installed, as I gave my old MacBook Pro away.
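In case it helps anyone sorting out the same arm64-vs-x86_64 confusion: you can check which architecture a dylib was actually built for before dropping it into PI. A sketch, assuming the default PixInsight install path (the `lipo` line only applies to universal binaries):

```shell
# Print the architecture(s) of each TensorFlow dylib in the PI install.
# Default install path assumed; adjust as needed.
for f in /Applications/PixInsight/bin/libtensorflow*.dylib; do
  file "$f"   # reports e.g. "Mach-O 64-bit dynamically linked shared library arm64"
done
# For universal ("fat") binaries, lipo lists each slice:
#   lipo -archs /Applications/PixInsight/bin/libtensorflow.2.dylib
```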

Alex
 
TF-metal would work on any mac running monterey and take advantage of whatever video card you happen to have. see the thing is, although juan touched up the code, StarNet is contributed software and nikita (the author) is not a mac programmer. somewhere along the line i built the macosx version of the original starnet module for him. juan has a lot on his plate with just the core PI application so i'm not sure he'd find the time to take up the cause of a platform-specific accelerator.
 
Sorry for that post; it shows my complete ignorance about programming and software development. I can imagine how demanding it must be to develop a powerful app that thousands of people have adopted and are still asking more of, all while technology develops at great speed. Juan and his team are doing a colossal job, and it is much appreciated. I'm going to delete the post... a bit embarrassing.
 
no don't be embarrassed at all! i'm just saying that a lot of work on PI is user-driven and this is one of those cases. juan was very gracious to fix up the UI for StarNet to make it a little more PI-like, but i don't know if he has the time to continue working on it.
 
Hi Rob,

Thank you VERY MUCH for your help getting the StarNet process running in PixInsight on the M1 Mac. I've now got the StarNet process working perfectly in PI 1.8.8-11 on my M1 MacBook Air (8 GB RAM) and it feels faster than my 2017 Intel Mac mini (6 cores/12 threads with 64 GB RAM). Amazing!

In case anybody else is looking for this, here's the entire step-by-step:
================
Install PixInsight

Use Pacifist to extract StarNet-pxm.dylib from the PI installer pkg
sudo cp StarNet-pxm.dylib /Applications/PixInsight/bin

download & uncompress standalone StarNet++ from https://sourceforge.net/projects/starnet/

sudo mv /Applications/PixInsight/bin/libtensorflow* ~/someplace-else
sudo cp ~/Downloads/StarNet_MacOS/libtensorflow.so /Applications/PixInsight/bin/libtensorflow.2.dylib
sudo cp ~/Downloads/StarNet_MacOS/libtensorflow_framework.so /Applications/PixInsight/bin/libtensorflow_framework.2.dylib
sudo xattr -c /Applications/PixInsight/bin/StarNet-pxm.dylib /Applications/PixInsight/bin/libtensorflow*
sudo cp ~/Downloads/StarNet_MacOS/*.pb /Applications/PixInsight/lib
sudo xattr -c /Applications/PixInsight/lib/*.pb

Start PixInsight
Processes -> Modules -> (check recursive) -> Search -> Install
=================
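One more check that may help if the module still won't load after these steps (assuming the Xcode command line tools, which provide `otool`, are installed): confirm that the library names the module was linked against actually match the renamed files.

```shell
# List the dynamic libraries StarNet-pxm.dylib was linked against (macOS).
# Each libtensorflow* entry it prints should correspond to a file that
# actually exists under /Applications/PixInsight/bin.
otool -L /Applications/PixInsight/bin/StarNet-pxm.dylib
```

If a listed name doesn't match any file in that directory, the rename step above didn't produce the name the module expects.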

Again, deep gratitude!!
Scott Denning
 
I followed these step-by-step instructions just now to get StarNet working on my M1 MacBook, but when I finish, restart PI, and run the search, PI crashes every single time.
 
It’s been more than a year since this thread started. The new M1 Ultra was just announced. Curious if any progress has been made towards M1 support at this point? I just saw a post on the APP site that native M1 support is now underway for their next version.
 
I am curious as well, Lead_weight. I am sure it is a huge undertaking to write a native version for M1 Macs, but Apple silicon is the future, so I hope some serious progress has been made. By year's end the switch from Intel to Apple silicon will be complete, and it would be a shame not to be able to use these machines to their full potential, especially with a paid program.
 
I am about to order the new Mac Studio, and I am also curious how PixInsight will run on its M1 Max. If PixInsight does not work at all with the M1 Max, I will postpone for a while.
 
I'm curious about this answer as well. I am ready to purchase an M1 Ultra Mac Studio, but not if PI isn't going to work. I would greatly appreciate some direction, whatever the answer.

Thank you!
 
PI runs very well under Rosetta 2 on M1* Macs; in fact it is quite fast. The problem is that these machines have a lot of potential that goes unexploited in the absence of a native version.


*The original StarNet does not work; version 2 does, but according to several reports it is complicated to install.
 
Marcelofig...Thank you very much for your reply!

I'm not sure if you can answer this, or if anyone has an answer, but how does the performance of PI on an M1 Ultra under Rosetta compare with a well-equipped Linux workstation?

I appreciate any feedback greatly.
 
Just sharing my $0.02 - I am planning to get an M1 Ultra Mac Studio as well, and I know that PixInsight only runs under emulation but, even then, it's quite fast as compared to my i7-7700 slow crawler. So if and when a native version comes around, the Mac Studio will be lightning fast. And if it does not, I think it'll still be very fast, while I enjoy it sometimes with Photoshop or Affinity Photo or APP.
 
I am not a Linux user, but I know it is the developers' favorite operating system. You can take a look at the FAQ, under Operating Systems.

 
Early benchmarks show the M1 Ultra's 20-core CPU is comparable to the 64-core AMD Threadripper. That's insane. The Threadripper is $4,000 for the CPU alone, so that's quite a good price/performance comparison.
 
"Comparable" is a strong word to use for an isolated benchmark, especially as far as Pixinsight is concerned. This same benchmark also shows the i9-12900k being 37% faster than the 5900x for multi-core, yet for PI the 5900x benchmarks higher. I'll hold my opinion until I see a real benchmark.
 
This is my benchmark with the M1 Max. I think we can hope for double the performance if the UltraFusion technology works as described, but the 64-core Threadripper would still be faster.
 