Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - mcgillca

Pages: [1] 2 3 ... 6
General / Re: Brushstrokes in HA after batch-processing
« on: 2019 January 21 10:12:33 »
Dear Eric,

Hi - I have no idea what is wrong, but can offer you some thoughts on how to investigate further.

Start by examining your raw Ha files - do they show these features? If so, then look for something that has happened to your equipment - is there, for example, condensation on the filters? (I know the Pro model is supposed to be immune from this, and indeed it's much better, but I've still had the occasional problem.)

If not, look at the calibrated files - if the features appear there, then there is a problem with your master dark, bias or flat frames (most likely the flats, since the darks and bias should be the same as for your OIII image).

If the calibrated files are also OK, then it must be something odd in the integration step.

Good luck - it's frustrating, since otherwise the image does look good.


Dear Simon,

Hi - thanks for the reply. I did try this on my wife's Mac (I don't know how the reported benchmarks were configured). My default is to have four swap folders on the internal SSD - I've tried 4 to 8, and they all give much the same result.

I then went down to 1 folder, and the speed dropped from ~2.4 GB/s to 1.4 GB/s - significantly slower, but still nearly twice the speed of the reported Mac mini PI benchmarks.

I'm still puzzled by the RAM disk though - again, I had four swap locations enabled on the RAM disk, and the speed was 1.5 GB/s, despite the RAM disk being a factor of two faster in Blackmagic.


Hi - I've been wondering about buying a Mac mini 2018 for PixInsight use. However, looking at the benchmarks, I'm a bit surprised by the swap speed.

Users report that Blackmagic measures disk read and write speeds for the 1TB disk at around 2.5 GB/s (write speeds are a bit slower for the smaller disks), but the PI benchmarking data reports only 764 MB/s.

I was a little surprised, since my wife's iMac with a 51GB SSD reports PI benchmark speeds of around 2.4 GB/s, which is in line with the Blackmagic speed test.

I then experimented with a RAM disk on my wife's computer - Blackmagic reported about 5 GB/s, whilst the PI benchmark reported only about 1.5 GB/s.

There is clearly something more complicated going on here since I would have expected better results with the RAM disk.

In each case, I used 4 temp file instances, so that shouldn't account for the difference.

Any thoughts?


General / Re: Inside PixInsight - Second Edition Availability
« on: 2018 December 21 04:17:57 »
Hi - I bought mine on the Springer site. At the moment they have a deal too, so if you buy a second book you get £30 off. I got two books for less than the list price of PI v2.

The site says that this is printed on demand and will take 8-10 days to arrive. Alternatively, you can buy the ebook version now.


General / Re: Inside PixInsight the Book
« on: 2018 December 18 13:41:14 »
Hi - Springer have an offer on at the moment - if you order books totalling more than £35, you'll get a discount of up to £30.

So, I was able to get Warren's new book (printed on demand, according to the website) plus another in the Patrick Moore series for just £18.

I assume this also works in mainland Europe and possibly the US as well?


General / Re: Blink and SubFrame Selector
« on: 2018 December 11 02:35:44 »
Dear Jon,

Hi - I personally run both before I do any pre-processing at all. Most of the things I test for (FWHM, eccentricity, median value) are largely independent of calibration. The exact threshold for too high a median value will change, but since that test is mainly there to reject frames taken too close to dawn or too near the moon, this isn't a big deal.

I will then take a quick look at the remaining subs in Blink to make sure there are no real problems (I often remove subs with satellite trails etc., since I generally have enough data to be choosy, though trails are less of an issue if you use large-scale pixel rejection).

Doing it this way typically removes something like 30% of my subs, leaving less work to do in pre-processing (I image remotely and frequently have 30-plus hours of data on an object).

Hope this helps,


General / Re: Non-linear Dark current
« on: 2018 November 30 10:44:09 »
Thanks, Rick - as I understand it, with the Pro model the image is downloaded into a DDR memory buffer, so the amp glow is significantly reduced (the amplifier seems to be turned on only while the image is read out, not during the exposure itself). That was one reason ASI introduced the memory - people were reporting significant amp glow over USB 2, but much less over USB 3, since the download was much faster.

I should plot up my data and confirm it actually behaves that way!


General / Re: Non-linear Dark current
« on: 2018 November 29 08:56:11 »
Dear Markus,

Hi - I measured my dark current for this camera recently as well.

What is a little odd is that there appears to be amp glow when reading an image, but not in the bias frames (not sure why).

My dark current (measured as the difference between a dark frame and a bias frame) scaled as a constant plus a term linear in exposure time. The linear term was very close to the value ASI report on their website.
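As a quick illustration of the constant-plus-linear model, a simple least-squares line does the job (the data points below are made up for illustration - they are not my ASI measurements):

```python
# Fit mean dark signal (dark - bias) against exposure time as
# offset + rate * t. Data below is hypothetical, for illustration only.

def linear_fit(ts, ys):
    """Ordinary least-squares fit of y = offset + rate * t."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    rate = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
            / sum((t - mt) ** 2 for t in ts))
    return my - rate * mt, rate  # (offset, rate)

exposure_s = [10.0, 30.0, 60.0, 120.0, 300.0]   # exposure time, seconds
dark_e = [5.25, 5.75, 6.50, 8.00, 12.50]        # mean (dark - bias), electrons

offset, rate = linear_fit(exposure_s, dark_e)
# rate is the dark current in e-/pixel/s; offset is the constant term
```

If the residuals from a fit like this curve upwards systematically, that would be the sign of the exponential behaviour you're describing.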

Not sure why yours is exponential. Could you post your results as a graph?


General / Re: Unmaximising the real time preview window
« on: 2018 November 20 08:28:02 »
Thanks, Steve - those buttons don't seem to be there on my machine, as you can see from the screenshot. They do appear if I maximise the preview deliberately, but if I do it by accident (some movement of the window that Windows 10 interprets as a request to maximise it - I still haven't worked out what triggers this), they are not there.

I will keep looking for them and try to work out what the movement is so I can avoid it!


General / Re: Unmaximising the real time preview window
« on: 2018 November 15 04:39:33 »
Image of maximised RTP - as you can see, no controls (that I can see anyway!).


General / Re: Unmaximising the real time preview window
« on: 2018 November 15 04:28:25 »
Thanks - but this is a PixInsight window, not a Windows one!

Try it - maximise the real time preview window and see if you can close it. When maximised, the normal controls (iconise, shade, maximise, close) do not appear. This is actually true for any image window you maximise, but for ordinary image windows those controls appear in the top right of the screen, there are also controls on the bottom left (e.g. zoom to optimal fit), and you can cascade windows to restore the geometry.

None of these methods works for the real time preview window.


General / Unmaximising the real time preview window
« on: 2018 November 15 02:25:45 »
Several times recently, I've found I've unintentionally maximised the real time preview window.

When this happens, I haven't been able to find a way to either close the window or resize it.

The best I've been able to do is to move the red line (dividing image area from icons) to the right to get more room for images behind.

Anyone know how to either close or resize the real time preview window when this happens?

Thank you,


General / Re: Combining Images from 2 Different cameras
« on: 2018 May 16 08:33:32 »
Dear Steve,

Hi - I've done some experimenting with this in trying to combine data from three different scopes, each with their own cameras etc. I found that Image Integration was NOT the best way to combine the data.

Of the three cameras we had, one had a much lower gain (ADU/electron), and hence a lower signal level. ImageIntegration gave those frames a much higher weight than the other two, despite the S/N being much the same.

What seemed to be happening is that, because the signal was lower, the variance in the signal was also lower, which ImageIntegration took to mean that the image was higher quality (more photons), so it increased the weight of those subs. This is just a guess, but even if it is not correct, the effect was the same - we were getting too high a weight from the camera with the lower ADU/electron.

Instead, we developed an alternative approach - the idea is to make sure we equally weight each counted photon (electron). With this approach you don't have to take into account differences in exposure length, or quantum sensitivity since these show up in the counted photons in each sub. However, you do need to take account of the number of subs, the different gains, and the size of the sensors (described by the image scale, arcsec/pixel).

The last one is interesting, and a side-effect of the way PI aligns images with different scales. Imagine one camera has a scale of 2"/pixel, the second 1"/pixel, and you have aligned the 2"/pixel image to the 1"/pixel image (which preserves the detail where it exists). Suppose the 1"/pixel camera captured 10 photons in a pixel. All other things being equal, the 2"/pixel camera, with four times the pixel area, would have captured 40 photons. When aligned to the 1"/pixel image, PI effectively creates 4 sub-pixels, but each with a signal level corresponding to 40 photons (PI uses a form of interpolation). This means you have to divide by the area of the pixel to take out this effect.

So our process was:
1) Everybody stacked their image with their own flats, lights, darks etc. as required, optimised their own stacking and produced a master.
2) The masters were then aligned to the highest resolution image so as not to lose the fine scale data where we had it.
3) We then combined the images in PixelMath, with a weight equal to: Nsubs/gain/PixelArea

Nsubs - the master contains the "average" number of ADU per sub - multiplying by Nsubs gives you the total ADU detected.
gain - in ADU/electron - dividing the signal by the gain converts it to electrons, and hence detected photons (if your gain is in electrons/ADU, multiply by it instead).
PixelArea - as discussed above. Don't forget to take account of binning (if not 1x1) in determining the area.
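A minimal Python sketch of this weighting (the camera numbers are hypothetical examples; in practice the combination itself was a PixelMath expression over the aligned masters):

```python
# Photon-weighted combination of master frames, as described above.
# All camera parameters below are hypothetical.

def master_weight(n_subs, gain_adu_per_e, scale_arcsec, binning=1):
    """Weight = Nsubs / gain / PixelArea.

    gain is in ADU/electron (if yours is in electrons/ADU, multiply
    instead of dividing); PixelArea is the on-sky area of one binned
    pixel in arcsec^2.
    """
    pixel_area = (scale_arcsec * binning) ** 2
    return n_subs / gain_adu_per_e / pixel_area

def combine(masters, weights):
    """Per-pixel weighted mean of aligned masters (flat pixel lists)."""
    total = sum(weights)
    return [sum(w * p for w, p in zip(weights, pixels)) / total
            for pixels in zip(*masters)]

# Hypothetical cameras: (Nsubs, gain in ADU/e-, scale in "/pixel)
cameras = [(30, 0.5, 1.0), (20, 2.0, 1.0), (40, 1.0, 2.0)]
weights = [master_weight(n, g, s) for n, g, s in cameras]
```

Note how the third camera, despite having the most subs, gets the smallest weight because each of its interpolated sub-pixels carries the signal of a 4x larger physical pixel.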

This definitely produced better images for us, though since we all processed the images separately, it was obvious that the skill of the processor mattered more than the marginal gain we got from optimising the stacking process (and sadly, our best processor used Photoshop and couldn't really tell us how to improve our processing in PI!).

Hope this helps,


Release Information / Re: INDIClient 1.0.15: New INDIMount Tool
« on: 2018 April 11 07:41:40 »
Vicent has drawn my attention to the UDOO x86 project:

It seems that this device will be able to run PixInsight on Linux without any problem. As soon as we can get one, I'll start making tests. With one of these, you can run an INDI server and PixInsight as a client on the same machine, then connect with a laptop through a wireless network when necessary, just to point the telescope and start an acquisition sequence. Very promising...

Hi - I got one of these last week. I haven't tried it under the stars yet (the weather has been poor), but I have used it to run TheSkyX with a simulated mount, camera etc. - it works very well and draws about 5W, a little more than the 3-4W for the Pi and Odroid XU4, but not significantly so.

PixInsight seems to run fine - my board is the Advanced Plus with 4GB RAM, and I have an M.2 SSD. Timing a few things, it takes about 5 times as long as my desktop PC, which has a PixInsight benchmark score of about 6000.

I don't plan to use the INDI driver with this (I wrote a couple of X2 drivers so that I could use my equipment with a Pi running TSX), but I do think it would work well with INDI and PixInsight.


Bug Reports / Re: Flat correction in One Shot colour work flow?
« on: 2017 December 31 06:07:59 »
Well, that looks a lot better than mine, even with the median corrected flats. I'll take a look at your settings and see what I did wrong. As I said, superior skill  :) :)

Thank you very much, Rob!

