Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - MortenBalling

Pages: [1]
General / Integration using FWHM weights?
« on: 2014 December 22 18:07:19 »
Hi All

Merry Xmas!

I've been experimenting with something I call "Crowd Imaging". The idea is to take collaborations to the max. I try to do so, by integrating several hundreds of Creative Commons licensed astro photos into one stack with very high SNR. All that works fine.

Lately I've been experimenting with the SubFrameSelector script, sorting out half of the images with the lowest FWHM, and then stacking those separately. That also works perfectly.

However, I've seen the possibility to add fits keywords using SFS. Instead of sorting, I'd like to add FWHM to each file using SFS, and then integrate all files with FWHM weight.

Is that possible, and how do I do that?

I know how to select a fits keyword for weights in ImageIntegration, but by using FWHM, won't PI weigh the files with the highest FWHM higher than the ones with low FWHM?
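A minimal sketch of the inverse-FWHM idea (pure Python; the function name and the 0-1 normalization are illustrative conventions, not PI's actual weighting code): derive a weight from each frame's FWHM so that sharper frames (smaller FWHM) count more, and write that derived value as the keyword instead of the raw FWHM.

```python
# Sketch: turn per-frame FWHM measurements into integration weights.
# Weight ~ 1/FWHM, so sharper frames (smaller FWHM) count more.
# Normalizing so the best frame gets 1.0 is one common convention.

def fwhm_weights(fwhms):
    """Map FWHM values (pixels) to relative weights in (0, 1]."""
    inv = [1.0 / f for f in fwhms]
    top = max(inv)
    return [w / top for w in inv]

weights = fwhm_weights([2.1, 3.5, 5.0])
# The sharpest frame (FWHM 2.1) gets weight 1.0; the softest gets the least.
```

If I remember right, the SubframeSelector script can evaluate a weighting expression per frame and write the result to a FITS keyword, which could then be selected in ImageIntegration.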

Thanks in advance.

Best regards


PS. If you want to see some examples of the Crowd Imaging method, you can see more here:

Tutorials and Processing Examples / Cheap trick / Starmasks
« on: 2014 December 19 06:13:09 »
I've always found it hard to make a proper starmask, including the larger stars (scale>7), but now I've found a solution:

Clone the image you're working with.

Resample it to 50% scale.

Make a starmask (a lot easier).

Resample the starmask to 200% scale.
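The idea behind the trick can be sketched outside PI; in this toy version, nearest-neighbour resampling and a plain threshold stand in for PI's Resample and StarMask processes, and all names are illustrative:

```python
# Illustrative sketch: large structures become small at 50% scale, are
# easier to capture with a simple threshold "star mask", and the mask
# is then scaled back up to full size.

def downscale2(img):
    """Halve each dimension by taking every second sample."""
    return [row[::2] for row in img[::2]]

def upscale2(img):
    """Double each dimension by repeating samples."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def threshold_mask(img, t):
    return [[1.0 if v > t else 0.0 for v in row] for row in img]

img = [[0.0, 0.0, 0.9, 0.9],
       [0.0, 0.0, 0.9, 0.9],
       [0.1, 0.0, 0.0, 0.0],
       [0.0, 0.1, 0.0, 0.0]]
mask = upscale2(threshold_mask(downscale2(img), 0.5))
# mask is full size again, with 1s where the bright "star" was.
```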



Morten :)

General / What is MAD?
« on: 2014 November 06 05:26:58 »
In the Statistics process, there is a value called MAD. Is that the median absolute deviation?
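In the usual statistical sense, MAD is the median of the absolute deviations from the median, a robust spread estimate (for normally distributed data, sigma ≈ 1.4826 × MAD):

```python
# MAD (median absolute deviation): the median of |x - median(x)|.
from statistics import median

def mad(values):
    m = median(values)
    return median(abs(v - m) for v in values)

print(mad([1, 2, 3, 4, 100]))  # 1 -- the outlier barely moves it
```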

Thanks in advance


General / Colour channel hot keys?
« on: 2014 September 28 05:11:32 »
A quick question:

Is there a hot key to quickly switch/blink between e.g. the Red and Blue channels, to look for possible QSOs (no blue signal)?

(You can use the dropdown menu, but that's not very quick.)

Thanks in advance


General / Alpha Channels
« on: 2014 September 19 05:26:37 »
Hi all

I'm currently working with combining large amounts of astro images (jpg) into one image. You can read more about it here:

Using StarAlignment I enable Generate Masks and get a separate FITS file with the mask for each image, but is there a way to embed the masks into the files?

Is there a way to add alpha channels using a script? I've tried CreateAlphaChannels, which does the job, but with several hundred files it's pretty time consuming.

Finally: Is there an easy way to remove alpha channels? Right now, I split the image into R, G and B and then combine those with LRGBCombination.

I've been thinking about converting the jpg files to tif or png and adding an alpha channel in other software, since alpha channels work fine in PI once they are embedded, but I'd like to do all the processing in PI.
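The embed/strip round trip described above can be sketched in plain Python, with nested lists standing in for image planes (purely illustrative, not PI's data model):

```python
# Sketch of embedding/removing an alpha plane, treating an image as
# nested lists of [R, G, B] samples.

def add_alpha(rgb, mask):
    """Append the mask value as a 4th channel -> RGBA."""
    return [[px + [a] for px, a in zip(row, mrow)]
            for row, mrow in zip(rgb, mask)]

def strip_alpha(rgba):
    """Drop the 4th channel -> RGB (cheaper than splitting into R, G, B
    and recombining)."""
    return [[px[:3] for px in row] for row in rgba]

rgb = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]]
mask = [[1.0, 0.0]]
rgba = add_alpha(rgb, mask)
assert strip_alpha(rgba) == rgb  # round trip is lossless
```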

Thanks in advance


Edit: ExtractAlphaChannels has an option that deletes the alpha channel. Sorry.


Hello everybody

Some time ago I suggested a collaboration on The Danish Astronomical Society's forum. We ended up being 8 amateur astronomers using 7 very different telescopes. The result was the planetary nebula Jones-Emberson 1. I collected all the data and made a composite using PI. The image was selected as Image Of The Day at Astrobin.

The collaboration gave me an idea: What if I could get every amateur astronomer in the world to capture photons from the same object, just one night, and then combine all the data into one single image. The idea haunted me for a while, and I decided to make some tests, using whatever images I could find on the internet. The early tests looked promising, and I chose to try to gather as many pictures of the Andromeda Galaxy as I could, and combine them all. I ended up with 556 images, and the result blew me away.

When you have that much data, the combined image has extremely high SNR, due to several thousand hours of exposure. On the JNER1 project we ended up with 125+ hours, but thousands of hours really make a difference. Furthermore, I noticed some interesting side effects. I did all the work at 100 MPixel, so I had to make substacks of 50 images to be able to handle all the data, even on a dual Xeon Mac Pro with 16 GB memory and an SSD RAID. When I compared the substacks they looked very similar:

We have all seen images of the object we are working with, so we have an idea of what we would like it to look like. However, there are smaller or bigger differences in the final results. Some images are too green, others too magenta. Sometimes the guiding drifts a little, and perhaps the image hasn't been flattened correctly. All those small differences are averaged out with this method. The red version is compensated by the green one, stars elongated in one direction are compensated by another image with stars elongated in another direction, and so on. Of course our personal preferences will show through the averaged stacks, but people tend towards a neutral white balance in RGB and LRGB images, and PI has tools for that.

Most of all, noise is minimized to the extreme. Normally when we work on an image, there is a certain amount of exposure which is optimal. The SNR of a stack of images increases with the square root of the increase in exposure: twice the exposure gives 1.4 times better SNR, and 10 times the exposure gives 3.2 times better SNR. Furthermore, most of us struggle to find enough clear-sky nights, so normally we end up in the 1-20 hour range.
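The square-root law quoted above, as a quick check:

```python
import math

# SNR of an averaged stack grows with the square root of total exposure,
# so each doubling buys less and less.

def snr_gain(exposure_ratio):
    return math.sqrt(exposure_ratio)

print(round(snr_gain(2), 2))     # 1.41  (twice the exposure)
print(round(snr_gain(10), 2))    # 3.16  (ten times)
print(round(snr_gain(1000), 1))  # 31.6  (why thousands of hours matter)
```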

The following is a rough description of the method I’ve developed during tens of projects so far.

First I gather images. I now only use Creative Commons licensed images. Astrobin, Flickr, Google Images, Wikimedia Commons etc. are excellent sources. Be careful not to use copyrighted images, and also make sure that you are allowed to “modify, adapt, or build upon” the images you use. I've found this webpage especially useful:

However, the internet is a goldmine once you start searching. Remember that you must credit the people whose images you use, so write them down while collecting images. I personally use a spreadsheet. Also remember to share your final CI image with a CC license.

The first step I do is to roughly crop all the images. Remove frames, text and noisy edges etc. Then I rotate (and/or flip) each image to get roughly the same side up. Next I choose a master image, and rescale it to twice the resolution I want to end up with. I save that image as a “TransformMaster”.

Then I register all images in PixInsight using StarAlignment. I use distortion correction, to compensate for the different optics, and enable “Generate Masks”. More on that later. Normally most images will register, but you might get errors; in that case StarAlignment skips to the next image. In most cases around 80-90% of the images work fine. The ones that didn't register can be cropped further to see if that helps, and it often does.

After registering I make an ImageIntegration of all the registered images. Use equal weight (1:1) and disable normalization. The monochrome registered images have to be converted to RGB first. Because not all images cover the full field of the TransformMaster, you will get vignetting towards the edge of the image. This is where the masks come in handy. First integrate all of the masks, again using 1:1 weight and no normalization. Then you use the stacked masks as a “flat frame”: in PixelMath you make a simple formula, RGB_Integration/Masks_Integration, and create a new image from that. The result is a perfectly flat field.
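A tiny numeric check of why the mask stack works as a flat: at a pixel covered by only k of the N frames, the 1:1 average is dimmed by a factor k/N, and the averaged masks equal exactly k/N there, so the division cancels the vignetting (toy one-row "images", pure Python):

```python
# Why integrating the masks works as a "flat" for the RGB integration.

def average(stacks):
    n = len(stacks)
    return [sum(col) / n for col in zip(*stacks)]

# Two frames; the second doesn't cover the last pixel (signal 0, mask 0).
frames = [[1.0, 1.0, 1.0],
          [1.0, 1.0, 0.0]]
masks  = [[1.0, 1.0, 1.0],
          [1.0, 1.0, 0.0]]

rgb_integration  = average(frames)  # [1.0, 1.0, 0.5] -- edge vignetting
mask_integration = average(masks)   # [1.0, 1.0, 0.5] -- coverage fraction
flat = [s / m for s, m in zip(rgb_integration, mask_integration)]
print(flat)  # [1.0, 1.0, 1.0] -- flat field restored
```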

A few examples. First the averaged RGB stack:

Next the masks combined:

And finally a PixelMath of RGB/masks:

This is the basic method and will get you a long way. Next some more detailed processes I’ve found useful:

Luminance is a major part of the visual experience, so I use all sorts of images for making a luminance layer. Some RGB, others narrowband and even monochrome like Ha are all combined, and luminance is extracted from that. Furthermore, I make two stacks: one with the sharpest half of the images, and one with the ones with the best signal. I then combine those using masks.

The FlatRGB stack has very good color fidelity. If you’ve blended RGB and Narrowband, you can make separate stacks of RGB and HST palette if you want. But basically you don’t need to do much to the color layer, and you definitely shouldn’t change the color balance of the RGB stack.

With the LuminanceStack, on the other hand, you can go crazy. A Deconvolution with a generated PSF is a good start. Because the stack has such high SNR, you can use most filtering and processing a lot more effectively than on a normal stack, without having problems with noise. That is very, very motivating.

Once you’ve used all your tricks on the Luminance layer, try to start all over. Do that several times, and combine your different versions into one (perhaps even using masks). Even here you often get a better result by stacking your efforts.

Finally use LRGB combination to mix your final Luminance with the RGB stack.

There is a lot of room for improvements in the method mentioned above, but it gives a rough idea of the principle. You can make a stack of HSO narrowband, and extract each channel for luminance use. You can use PixelMath to subtract one channel from another to pick out certain details etc. It’s very inspiring to be able to experiment with the stacks, so knock yourselves out :)
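The "subtract one channel from another" idea above, as a hedged sketch (for example Ha minus a scaled broadband channel to isolate emission detail; the clipping to [0, 1] mimics PixelMath truncation, and all values here are made up):

```python
# PixelMath-style channel subtraction with truncation to [0, 1].

def subtract_channels(a, b, scale=1.0):
    return [[max(0.0, min(1.0, x - scale * y)) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

ha   = [[0.8, 0.3, 0.1]]   # toy one-row "images"
oiii = [[0.2, 0.3, 0.4]]
detail = subtract_channels(ha, oiii)
# Only the first pixel keeps signal (~0.6); the others clip to 0.
```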

I've used the technique on more than 20 objects so far, and the biggest problem is finding enough CC licensed images to use, especially for the rarer objects. So if you want your name in future credit lists, share your images. Let's face it: we are probably never going to get rich from our hobby anyway. Thanks in advance.

Clear skies

Morten :)

PS. Here are a few links to images I've made using the method, including M31 in 36 MPix resolution with 300+ images:

And a large version of the Trunk:

Image credits for IC1396:

Adam Evans, Alexis Tibaldi, Álvaro Pérez Alonso, Andolfato, Arturo Fiamma, AstroGG, ASTROIDF, Chris Madson, Claustonberry, dave halliday, dyonis, Eric Pheterson, Frank Zoltowski, Fred Locklear, Gaby, Giuliano Pinazzi, Jorge A. Loffler, Juan Lozano, Jürgen Kemmerer, Jussi Kantola, Konstantinos Stavropoulos, Lonnie, Luca Argalia, Luigi Fontana, Michele Palma, Mike Markiw, milosz, Miquel, Morten Balling, NicolasP, Pat Gaines, Paul M. Hutchinson, PaulHutchinson, Peter Williamson, Phil Hosey, Phillip Seeber, Ralph Wagter, Richie Jarvis, RIKY, s58y, Salvatore Iovene, Salvopa, Stephane_B, Steve Yan, stevebryson, Thomas Westerhoff & Werner Mehl

General / Pedestal
« on: 2013 November 04 07:37:58 »
In the beginning of my PI experience, I used the BPP script for calibration (and sometimes stacking). Recently, I've been trying to improve my calibration, and have done things manually. I've noticed an error I made, that might come in handy for other users:

I currently use an Orion Starshoot Pro v1 OSC camera. When I measure Median (using Statistics) I get pretty odd values on my bias/dark stacks. A recent bias stack measures K=0.015 and the dark stack measures K=0.014. With an automatic STF, they look rather different though: the bias stack is way smoother than the darks (100 bias vs 60 darks@300s). The frames were shot outside, but since the camera has unregulated cooling, the ambient temperature might have varied up to 5 deg Celsius.

The normal procedure for calibrating the dark stack is to subtract the bias stack:

Calibrated Darks = Darks - Bias

However, the subtraction will give negative values for the calibrated darks, and for some bizarre reason PI truncates those values to zero. That means that you end up with a calibrated dark stack consisting only of hot pixels, with all the dark noise set to zero! Using such a calibrated dark stack to calibrate your lights won't remove any dark noise at all...

Sometimes the measured K values of the stacks are very close, meaning that some dark noise will be positive values after calibration, but all the negative values are still truncated, rendering the calibrated dark stack pretty useless.

The solution is to enable a pedestal when using ImageCalibration in PI. This adds a small value during calibration, so you get rid of the negative values and the truncation. At least on my images, the difference was amazing. I've previously seen K values that were very close on DSLRs as well, so be aware if you calibrate your darks (good for scaling them).
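The effect can be sketched numerically (the pedestal value and pixel values below are illustrative, not recommended settings):

```python
# Darks - Bias clipped at zero destroys the noise floor; adding a
# pedestal before clipping keeps the negative deviations alive.

PEDESTAL = 0.010  # illustrative offset in normalized [0, 1] units

def calibrate(darks, bias, pedestal=0.0):
    return [max(0.0, d - b + pedestal) for d, b in zip(darks, bias)]

darks = [0.014, 0.016, 0.013, 0.015]
bias  = [0.015, 0.015, 0.015, 0.015]

clipped = calibrate(darks, bias)           # mostly zeros: structure lost
padded  = calibrate(darks, bias, PEDESTAL) # deviations survive around 0.010
```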

Morten :)

General / Combining stacks with different SNRs
« on: 2013 May 25 09:32:39 »
Hi all,

I'm currently working on a project, with data gathered by different people/telescopes/cameras.

I have three different luminance stacks. First I did a LinearFit. Next I measured Median and StdDev on a small preview of the background. Two of the stacks have an SNR of 400, and the third has an SNR of 1300.

I then tried to combine the stacks in PixelMath using the formula:


However, when I measure SNR of the resulting image, I get a value of 1100, which is lower than Stack3.

What am I doing wrong? Is my method wrong?
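For reference, assuming the stacks carry the same signal level after LinearFit and have independent noise, inverse-variance weights (proportional to SNR²) give the best achievable combined SNR, which is √(ΣSNR²); equal weights land well below the best stack when the SNRs differ a lot. A sketch with the numbers above:

```python
import math

# Combined SNR of a weighted average of stacks with unit signal and
# independent noise: signal = sum(w), noise = sqrt(sum((w/SNR)^2)).

def combined_snr(snrs, weights):
    signal = sum(weights)
    noise = math.sqrt(sum((w / s) ** 2 for w, s in zip(weights, snrs)))
    return signal / noise

snrs = [400.0, 400.0, 1300.0]
print(round(combined_snr(snrs, [1, 1, 1])))              # 829: equal weights
print(round(combined_snr(snrs, [s * s for s in snrs])))  # 1418: optimal
```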

Thanks in advance



General / Integrating / combining only two images
« on: 2013 May 08 02:55:32 »
Hello all,

Tried to search the forum for this, but couldn't find anything. I'm new to PI, so this is probably easy.

I'm currently working on a project, involving data from six different telescopes, all capturing the same object. I've successfully combined seven different Ha stacks, from different members of the team, using PI's ImageIntegration.

However, I only have two luminance stacks that I need to combine. ImageIntegration doesn't seem to work with fewer than three images.
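With only two frames, pixel rejection has nothing to work with anyway, so a plain PixelMath-style (a + b)/2 average does the same job as an unweighted integration (toy one-row "images", purely illustrative):

```python
# Average two frames pixel by pixel, PixelMath-style: (a + b) / 2.

def average2(a, b):
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

lum1 = [[0.25, 0.75]]
lum2 = [[0.75, 0.25]]
print(average2(lum1, lum2))  # [[0.5, 0.5]]
```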

What to do?

Thanks in advance

Morten  :smiley:

General / Beginners questions / Masterframes
« on: 2012 September 20 15:26:46 »
Hello everybody.

My name is Morten, and this is my first post.

I have worked as a VFX compositor and colorist for more than 20 years in the movie industry. A little over a year ago, I started doing astrophotography. At first, I used a DSLR on a cheap tripod, but I quickly ended up with several telescopes, mounts, cameras, filters, adapters and a truckload of cables. I'm currently using an Orion SSP (v1) OSC camera, and Maxim for capturing raw fits.

Up until now, I've done all the image processing using homemade scripts in a VFX software called NukeX. (It's similar to Adobe After Effects.) Image calibration and integration have been done with simple algebra, like addition, subtraction and division. I now feel I'm ready to start using more dedicated astro software, and I'm currently evaluating PixInsight. Excellent software, but with a rather steep learning curve! ;) I've watched all of Harry's video tutorials. Thanks Harry! Keep 'em coming! (I also watched the ones read by Stephen Hawking ;D, but I like the no-nonsense ones by Harry better)

I have few questions:

1. When creating master calibration frames like MasterBias and MasterFlat, I've found a thread on this forum, saying that one should use average integration, with SigmaClip. Should I use normalization? If so, which one?

Btw. there is a rather substantial difference between using SigmaClip and not using it when comparing the MasterBias frames with the same screen stretch. Apart from cosmic rays, what is the reason for using SigmaClipping here? Shouldn't at least "Clip low range" be disabled, considering dead/lazy pixels?

2. In general terms, what's the difference between using Median and Average+SigmaClip?
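A toy comparison of the two, on a single pixel stack (a single-pass clip with an illustrative k; real SigmaClip rejection iterates and clips the low and high ranges separately):

```python
from statistics import mean, median, pstdev

# Median vs average+sigma-clip: both reject the cosmic-ray outlier, but
# the clipped average keeps more of the remaining samples, which is why
# it is usually less noisy than a plain median.

def sigma_clip_mean(values, k=2.5):
    m, s = mean(values), pstdev(values)
    kept = [v for v in values if abs(v - m) <= k * s]
    return mean(kept) if kept else m

stack = [0.09, 0.10, 0.11, 0.10, 0.09,
         0.11, 0.10, 0.10, 0.10, 0.95]   # last sample: cosmic ray hit

print(median(stack))                      # 0.1
print(round(sigma_clip_mean(stack), 2))   # 0.1 -- outlier clipped out
```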

Best regards and clear skies

Morten :)
