Author Topic: PixInsight Benchmark  (Read 64237 times)

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Re: PixInsight Benchmark
« Reply #30 on: 2014 May 13 12:34:30 »
I'm getting wildly varying numbers on my laptop, to the point where submitting them makes little sense. I have of course disabled all the software that I can. Perhaps it is related to having 'only' 6 GB of RAM. I think a second flavor of the benchmark, more representative of amateur use with, say, 6 MP images, would be nice. I've never run into a RAM issue on my laptop with PI, so I think this rather arbitrary high barrier to entry does PI a disservice. It runs very well on much lighter systems, and I think that scalability is one of its strong points.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline GaryP

  • Member
  • *
  • Posts: 72
    • Astroimaging Log
Re: PixInsight Benchmark
« Reply #31 on: 2014 May 13 13:13:59 »
Hi, see this

The video is very nice and I have watched it three times now, but some further explanation would be helpful.

It appears that one should create a subfolder "SwapPixInsight" within each data folder. I am guessing that the data folders need not be located in any particular place on the drive. One then adds each of those folders to PixInsight's Preferences --> "Directories and Network" window. That gives parallel swapping, which will be superior to putting all the swap folders under /tmp. Is that correct?
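For what it's worth, creating such subfolders from a shell is trivial; a sketch using temporary directories to stand in for two data drives (the folder names here are just examples, not anything PixInsight prescribes):

```shell
# Stand-ins for two data folders on different physical drives; in
# practice these would be real paths you then register under
# Preferences > Directories and Network.
base=$(mktemp -d)
for d in "$base/Data1" "$base/Data2"; do
  mkdir -p "$d/SwapPixInsight"
done
ls -d "$base"/Data*/SwapPixInsight
```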
PI 01.08.01.1092 on 4GB iMac w. Mavericks, Canon T1i DSLR, William Optics 110mm APO FL770, WO focal reducer (at 73.5 mm), CGEM

Offline Alejandro Tombolini

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1267
    • Próxima Sur
Re: PixInsight Benchmark
« Reply #32 on: 2014 May 13 15:53:44 »
Hi Gary, I'm not sure I understand correctly, but I have added a note to the text of the video.

Remember that you can choose only one folder per physical drive.

Saludos. Alejandro.

Offline martin farmer

  • Newcomer
  • Posts: 32
Re: PixInsight Benchmark
« Reply #33 on: 2014 May 13 23:40:06 »
Hello All,

Like so many others now, I too have run the benchmark and submitted the results.
I have PixInsight on my observatory PC, a Windows 7 machine (benchmark from that submitted too), and on my laptop, a MacBook Pro. The results from the two installations are quite far apart: the PC is slower, but it is helped by the SSD in it, so its result is reasonable.
The MacBook has a far faster processor, so its CPU benchmark is higher, but the final figure is let down by the swap performance.

So my question to fellow Mac users is: what are the best (optimum) settings to improve PixInsight on a Mac?
I don't want to mess about with it myself in case I cause problems, but if some wise sage can assist, that would be great. The hard disk drive is not an SSD, unfortunately.

Kind regards
Martin

Offline Conor

  • Member
  • *
  • Posts: 73
Re: PixInsight Benchmark
« Reply #34 on: 2014 May 14 04:01:00 »
There is no Benchmark entry in the Script menu on my install of PixInsight.

I'm running 1.08.01.1092 here on FreeBSD x64.

Edit: See attached image.

Second Edit: It's installed, just not showing in the menu. I've been able to navigate to it and execute it manually.
Takahashi FSQ 106 EDX III w/ QE Reducer
William Optics 110FLT Apo Triplet
William Optics Megrez 72 Apo Doublet
iOptron CEM60
QSI 583ws & 3nm Ha/OIII/SII filters
SBIG ST-i
Trying to use PixInsight on FreeBSD

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: PixInsight Benchmark
« Reply #35 on: 2014 May 15 01:49:59 »
Quote from: Nocturnal on 2014 May 13 12:34:30
I'm getting wildly varying numbers on my laptop, to the point where submitting them makes little sense. I have of course disabled all the software that I can. Perhaps it is related to having 'only' 6 GB of RAM. I think a second flavor of the benchmark, more representative of amateur use with, say, 6 MP images, would be nice. I've never run into a RAM issue on my laptop with PI, so I think this rather arbitrary high barrier to entry does PI a disservice. It runs very well on much lighter systems, and I think that scalability is one of its strong points.

I would also like to be able to produce useful benchmark numbers with "moderate" amounts of RAM. My main system is still a 4GB Win7 laptop, and for most PI operations it is still sufficient. On this system, the benchmark runs forever and produces wildly varying numbers (a factor of 3 or so).
Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline GaryP

  • Member
  • *
  • Posts: 72
    • Astroimaging Log
Re: PixInsight Benchmark
« Reply #36 on: 2014 May 15 07:38:34 »
I agree with the populist sentiments expressed by Georg and Nocturnal. I would also like to understand how PixInsight uses memory. When I attempted to integrate 72 calibrated lights (about 1.5 GB in their original CR2 format, at 21 MB per frame), it brought PixInsight to a halt. I could run the integration to completion by dividing the work in two. Now, 1.5 GB is only a portion of the 4 GB in my iMac, and I had shut down all other programs except the Activity Monitor. Why does PixInsight need more than 4 GB for this task? I suppose the original 1.5 GB became at least 3 GB once expanded to 32- or 64-bit data. But it seems that PI must be making several copies of each frame and holding them all in RAM at the same time. It must then have to page out some large percentage of the needed space. How much space on the hard drive would this require?

In any event, I am going to upgrade to 16 GB. In a couple of years a new generation of laptops and desktops will make this problem go away, unless Juan and the other programmers fiendishly devise irresistible new processes and scripts that require even more memory.

Quote from: Nocturnal on 2014 May 13 12:34:30
I'm getting wildly varying numbers on my laptop, to the point where submitting them makes little sense. I have of course disabled all the software that I can. Perhaps it is related to having 'only' 6 GB of RAM. I think a second flavor of the benchmark, more representative of amateur use with, say, 6 MP images, would be nice. I've never run into a RAM issue on my laptop with PI, so I think this rather arbitrary high barrier to entry does PI a disservice. It runs very well on much lighter systems, and I think that scalability is one of its strong points.

Quote from: georg.viehoever on 2014 May 15 01:49:59
I would also like to be able to produce useful benchmark numbers with "moderate" amounts of RAM. My main system is still a 4GB Win7 laptop, and for most PI operations it is still sufficient. On this system, the benchmark runs forever and produces wildly varying numbers (a factor of 3 or so).
« Last Edit: 2014 May 15 07:59:19 by GaryP »
PI 01.08.01.1092 on 4GB iMac w. Mavericks, Canon T1i DSLR, William Optics 110mm APO FL770, WO focal reducer (at 73.5 mm), CGEM

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: PixInsight Benchmark
« Reply #37 on: 2014 May 15 08:09:03 »
PI has all the files being integrated open at the same time, which means they are mapped into memory; big FITS files then mean a lot of memory. On OS X there's an open-file limit of around 200 files, which probably means that the memory management in the Darwin kernel is not so hot. Linux probably does a heck of a lot better in this regard.
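If anyone wants to check this on their own machine, the per-process limit is visible from a shell; a sketch (the 4096 figure is just an example value, not a recommendation):

```shell
# Show the current per-process open-file limits; on older OS X the
# soft limit defaults to 256, close to the ~200-file ceiling above.
ulimit -n     # soft limit
ulimit -Hn    # hard limit
# Raising the soft limit for the current shell before launching PI
# (silently skipped here if the hard limit is lower):
ulimit -n 4096 2>/dev/null || true
```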

Offline bitli

  • PTeam Member
  • PixInsight Guru
  • ****
  • Posts: 513
Re: PixInsight Benchmark
« Reply #38 on: 2014 May 15 08:35:37 »
Although I understand some users' interest in the performance of lower-end computers, I think Juan's time would be better spent in some other area (assuming it would take some time to manage multiple variants of the benchmark).
When memory is scarce, the result may be highly dependent on the exact configuration and usage. Users may form wrong expectations and require more support to understand why they cannot reproduce the benchmark's performance.
For example, if you read CR2 files, ImageIntegration must read them fully (they must be decompressed). With FITS files, the files can be read partially (this is why all the files are open at the same time; it would not be needed if they were all in memory). This leads to different trade-offs between memory usage, file descriptor usage, performance, and ease of use (configuration of stack size). The trade-offs are difficult to understand, and benchmarking them is not very useful. If you do not have enough memory, your problem is lack of memory; no need to benchmark it.

-- bitli


Offline NGC7789

  • PixInsight Old Hand
  • ****
  • Posts: 391
Re: PixInsight Benchmark
« Reply #39 on: 2014 May 16 19:25:39 »
So after reading some of the posts about RAM disks, I thought I'd give it a try. I have 16 GB of RAM in total, but only about 1.5 GB seemed to be in use as "file cache" according to the Memory Clean app. So I made a 4 GB RAM disk and set it as a swap location alongside my SSD. My swap score went from 3101 to 7084, and my overall score went from 5656 to 6983!

So my questions are: are these results real? Is a RAM disk the way to offset OS X's inferior file caching compared to Linux? Should I add another 16 GB of RAM to devote to a larger RAM disk? Or would a second SSD give similar results?

Offline GaryP

  • Member
  • *
  • Posts: 72
    • Astroimaging Log
Re: PixInsight Benchmark
« Reply #40 on: 2014 May 16 20:32:04 »
Quote from: NGC7789 on 2014 May 16 19:25:39
So I made a 4 GB RAM disk and set it as a swap location alongside my SSD.

NGC7789, could you clarify that? I'm trying to understand how one might best use a solid state drive.
PI 01.08.01.1092 on 4GB iMac w. Mavericks, Canon T1i DSLR, William Optics 110mm APO FL770, WO focal reducer (at 73.5 mm), CGEM

Offline NGC7789

  • PixInsight Old Hand
  • ****
  • Posts: 391
Re: PixInsight Benchmark
« Reply #41 on: 2014 May 17 04:05:12 »
Quote
could you clarify that?

I created a ram disk using the instructions here http://www.tekrevue.com/tip/how-to-create-a-4gbs-ram-disk-in-mac-os-x/

Then I added the ram disk to the swap locations so both my SSD and the ram disk were being used.
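For anyone else trying this, the recipe in that link boils down to one command; hdiutil sizes RAM disks in 512-byte sectors, so a 4 GB disk is 8388608 sectors. A sketch (the volume name is arbitrary, and the actual OS X commands are shown but not executed here):

```shell
# Sector arithmetic behind the ram:// URL used by the linked recipe:
SIZE_GB=4
SECTORS=$((SIZE_GB * 1024 * 1024 * 1024 / 512))
echo "ram://$SECTORS"
# On OS X, the ram disk itself would then be created with:
#   diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://8388608)
```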

As for the best use of an SSD, I'm trying to understand that too. Based on the benchmarks, it would appear that even a small RAM disk alongside an SSD is of great benefit. I don't know what you would get with two SSDs (I only have one, so I can't try it), or for that matter two SSDs plus a RAM disk!

What's also not clear to me are the limitations of the obviously much smaller RAM disk. Even if I got more RAM and could make it 8 or 16 GB, that's still much smaller than the 200+ GB available on the SSD. If I added a second SSD, that space would effectively double. Obviously a small RAM disk is enough for the benchmark, but I'm guessing that in the real world there must be many situations where it's not enough.

And a RAM disk is based on the assumption that the OS and/or the application are not using the RAM effectively. A RAM disk helps swap performance, but you are taking that RAM away from the OS and the app.

-Josh
« Last Edit: 2014 May 17 04:19:52 by NGC7789 »

Offline GaryP

  • Member
  • *
  • Posts: 72
    • Astroimaging Log
Re: PixInsight Benchmark
« Reply #42 on: 2014 May 17 07:47:01 »
Josh, thanks. @pfile said:

Quote from: pfile on 2014 May 15 08:09:03
PI has all the files being integrated open at the same time. this means they are mapped into memory… big fits files then mean a lot of memory. ...

so I found it hard to understand how a RAM disk would improve on keeping the files in memory, but I see that you mention OS X's inferior file caching. Since you got a big performance boost, I will give it a try after I upgrade the RAM.
PI 01.08.01.1092 on 4GB iMac w. Mavericks, Canon T1i DSLR, William Optics 110mm APO FL770, WO focal reducer (at 73.5 mm), CGEM

Offline NGC7789

  • PixInsight Old Hand
  • ****
  • Posts: 391
Re: PixInsight Benchmark
« Reply #43 on: 2014 May 17 07:57:57 »
I see there are several configuration options with varying costs and benefits. I'd love to hear the bigwigs weigh in on the best way to go.

1. Give all RAM to the OS and PI; swap to the SSD.

2. Use some RAM for a swap RAM disk along with the SSD.

3. If we are upgrading RAM, should we give it all to the OS and PI, or devote some to a RAM disk?

4. If we are upgrading, should we add a second SSD, or more RAM (for a RAM disk, or for the OS/PI)?

Offline slang

  • Member
  • *
  • Posts: 60
Re: PixInsight Benchmark
« Reply #44 on: 2014 May 18 04:49:45 »
Hiya.

This is indeed a fascinating topic, and very timely for me. I recently acquired (as a hand-me-down) a new PC: quad-core 3.07 GHz with 24 GB of RAM and a stack of WD Raptor drives. Very lucky and very grateful am I. I'm (obviously, as there is no greater need) dedicating it to PI image processing.

So, I'm following these threads and coming to understand the CPU, memory, and disk trade-offs. It seems that a really good idea (if you have a bucketload of RAM) is to keep the files in RAM, or at least the temp/scratch files. I use Linux (Ubuntu 12.04), and Linux has some 'knobs' to tweak in this regard.

I haven't looked at setting up a RAM disk - that seems a bit coarse, though entirely doable; I wanted something a bit smarter than a RAM disk.

What I have done is investigate some kernel tuning parameters in /etc/sysctl.conf. There is a lot of information on this around the net, but http://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/ is a good description.

WARNING: This can be dangerous - do your own research, your mileage may vary, and no-one else will be accountable for any negative results.

I have set these as follows:
vm.dirty_background_ratio = 80
vm.dirty_ratio = 80
vm.dirty_writeback_centisecs = 1500

Descriptions:
vm.dirty_background_ratio is the percentage of system memory that can be filled with "dirty" pages (memory pages that still need to be written to disk) before the kernel starts writing them out in the background
vm.dirty_ratio is the absolute maximum percentage of system memory that can be filled with dirty pages before everything must get committed to disk
vm.dirty_writeback_centisecs is how often the kernel's flusher threads wake up to write cached data to disk, in hundredths of a second

So, what this actually means is that the kernel will allow up to 80% of RAM (in my case ~19 GB) to be used as dirty disk cache. When the cache gets full, it WILL be written to disk, and dirty data can reside in memory for up to about 15 seconds before it WILL be written to disk.
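For reference, a sketch of how one might apply and persist these values (the drop-in file name is just an example; writing to /etc/sysctl.d and setting sysctls needs root, so those commands are shown as comments only):

```shell
# Applying the three values from this post for the current boot:
#   sudo sysctl -w vm.dirty_background_ratio=80
#   sudo sysctl -w vm.dirty_ratio=80
#   sudo sysctl -w vm.dirty_writeback_centisecs=1500
# Persisting them across reboots via a drop-in file (example name):
#   printf 'vm.dirty_background_ratio = 80\nvm.dirty_ratio = 80\nvm.dirty_writeback_centisecs = 1500\n' \
#     | sudo tee /etc/sysctl.d/99-pi-cache.conf
# Sanity check: 1500 centiseconds is 15 seconds.
FLUSH_SECONDS=$((1500 / 100))
echo "$FLUSH_SECONDS"
```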

Obviously, there is a huge risk with this - a power outage could cause a large loss of files. This isn't an issue (generally) for PI temp files, but these settings are system-wide, so if your system is doing other stuff... Caveat emptor, as my economics teacher used to say.

Anyway. I don't profess to be an expert at all, it has been 15 years since I've attempted to tune a kernel to this degree, but the results are quite stunning in my case, and the PI benchmark is exceptionally useful for understanding benefits of system tuning.

Now I have 3 x 146 GB WD Raptor drives (10,000 rpm), each dedicated to temp storage, with an ext4 filesystem with journalling, atime, diratime and something else turned off, so they're pretty quick by themselves. The benchmark results are:

Standard system:
Execution Times
Total time ............. 01:46.11
CPU time ............... 01:24.50
Swap time .............. 00:21.57
Swap transfer rate ..... 768.381 MiB/s

Performance Indices
Total performance ...... 4433
CPU performance ........ 4480
Swap performance ....... 4256


With kernel tuning (as above):

Execution Times
Total time ............. 01:27.97
CPU time ............... 01:23.02
Swap time .............. 00:04.90
Swap transfer rate ..... 3385.359 MiB/s

Performance Indices
Total performance ......  5347
CPU performance ........  4559
Swap performance ....... 18750


So, that's a massive improvement in performance from letting the system use much of the RAM as disk cache. (From memory, if the system requires the RAM for applications, that will take precedence over the disk caching...)

Some notes:
* Interestingly, despite putting these in /etc/sysctl.conf, they do not persist across a reboot - I need to issue sysctl -p for them to take effect. Must investigate that.
* Whilst this does start to look like tuning to beat a benchmark, I have tested this config in some real-world tests. Running BatchPreProcessing on 25 x .cr2 files (12.2 MP), including integration (but using master bias/dark etc.), showed a massive improvement. Monitoring actual disk I/O at the time showed that there was NO disk I/O during the process. This is a typical set of inputs for me, so I call it close to 'real world'.
* When the cache does get full, or the system does decide to actually flush the stuff, you may be in for a wait - that's the trade-off/risk here.
* This config seems to beat using SSDs in my case, although not many desktops would support 24 GB of RAM, so maybe this is academic/theoretical?
* ext4 without journalling and mounted with defaults,data=writeback,noatime,nodiratime also seems to make a pretty decent difference
* The results can be a little inconsistent - sometimes the benchmark results in ZERO actual disk I/O, other times there is a little bit here and there
* Oh, and I commissioned an old UPS I had lying around - it should give between 30 seconds and 5 minutes of protection ;-)
* Also acquired some caching SATA/SAS controllers which should help with speed when something actually needs to hit a disk platter, but haven't bothered yet - why would I, given these results?
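In case it helps anyone reproduce the ext4 setup mentioned in the bullets above, a sketch (the device name and mount point are placeholders; the mkfs/mount commands are destructive, so they are shown as comments only):

```shell
# Placeholder device; adjust to your own drive before trying this.
DEV=/dev/sdb1
OPTS="defaults,data=writeback,noatime,nodiratime"
#   sudo mkfs.ext4 -O ^has_journal "$DEV"        # ext4 without a journal
#   sudo mkdir -p /mnt/raptor1                   # hypothetical mount point
#   sudo mount -o "$OPTS" "$DEV" /mnt/raptor1
echo "$OPTS"
```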

Is anyone else doing anything similar? How dangerous is this? Am I mad? (OK, don't answer the last one...)

Cheers -
--
Mounts: Orion Atlas 10 eq-g, Explore Scientific G11-PMC8
Scopes: GSO RC8, Astrophysics CCDT67, ES FCD100-80, TSFLAT2
Guiding: ST80/QHY OAG/QHY5L-II-M
Cameras: Canon EOS 450D (IR Mod), QHY8L, QHY163m/QHYFW2-US/Astronomik LRGBHaSiiOii