Hello everybody
Some time ago I suggested a collaboration on The Danish Astronomical Society's forum. We ended up being 8 amateur astronomers using 7 very different telescopes. The result was an image of the planetary nebula Jones-Emberson 1. I collected all the data and made a composite using PixInsight (PI). The image was selected as Image of the Day at Astrobin.
http://www.astrobin.com/55058/

The collaboration gave me an idea: what if I could get every amateur astronomer in the world to capture photons from the same object, for just one night, and then combine all the data into one single image? The idea haunted me for a while, and I decided to run some tests using whatever images I could find on the internet. The early tests looked promising, so I tried to gather as many pictures of the Andromeda Galaxy as I could and combine them all. I ended up with 556 images, and the result blew me away.
When you have that much data, the combined image has an extremely high SNR, because it represents several thousand hours of exposure. On the Jones-Emberson 1 project we ended up with 125+ hours, but thousands of hours really make a difference. Furthermore, I noticed some interesting side effects. I did all the work at 100 MPixel, so I had to make substacks of 50 images to be able to handle all the data, even on a dual-Xeon Mac Pro with 16 GB of memory and an SSD RAID. When I compared the substacks, they looked very similar.
We have all seen images of the objects we work on, so we have an idea of what we would like them to look like. However, there are smaller or bigger differences in the final results. Some images are too green, others too magenta. Sometimes the guiding drifts a little, and perhaps the image hasn't been flattened correctly. All those small differences are averaged out by this method. The red-leaning version is compensated by the green-leaning one, stars elongated in one direction are compensated by another image with stars elongated in another direction, and so on. Of course our personal preferences will show through the averaged stacks, but people tend towards a neutral white balance in RGB and LRGB images, and PI has tools for that.
Most of all, noise is minimized to the extreme. Normally, when we work on an image, there is a practical limit to how much exposure we can gather. The SNR of a stack of images increases with the square root of the total exposure: twice as long an exposure gives 1.4 times better SNR, and 10 times longer exposure gives 3.2 times better SNR. Furthermore, most of us struggle to find enough clear-sky nights, so normally we end up in the 1-20 hour range.
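To make the scaling concrete, here is a minimal Python sketch of the square-root law (the numbers just reproduce the figures above):

    import math

    def snr_gain(exposure_ratio):
        # SNR improves with the square root of the increase in total exposure
        return math.sqrt(exposure_ratio)

    print(snr_gain(2))        # ~1.4  (twice the exposure)
    print(snr_gain(10))       # ~3.2  (ten times the exposure)
    print(snr_gain(2000/20))  # ~10   (from 20 hours to 2000 hours)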
The following is a rough description of the method I've developed over dozens of projects so far.
First I gather images. I now only use Creative Commons licensed images. Astrobin, Flickr, Google Images, Wikimedia Commons, etc. are excellent sources. Be careful not to use copyrighted images, and also make sure that you are allowed to "modify, adapt, or build upon" the images you use. I've found this webpage especially useful:
http://search.creativecommons.org

However, the internet is a goldmine once you start searching. Remember that you must credit the people whose images you use, so write them down while collecting images. I personally use a spreadsheet. Also remember to share your final CI image with a CC license.
The first step is to roughly crop all the images: remove frames, text, noisy edges, etc. Then I rotate (and/or flip) each image to get roughly the same side up. Next I choose a master image and rescale it to twice the resolution I want to end up with. I save that image as a "TransformMaster".
Then I register all images in PixInsight using StarAlignment. I use distortion correction to compensate for the different optics, and I enable "Generate Masks". More on that later. Normally most images will register, but you might get errors; in that case the failing image is simply skipped. In most cases around 80-90% of the images work fine. The ones that didn't register can be cropped further to see if that helps, and it often does.
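For those who want to experiment outside PI, here is a minimal Python sketch of the same registration idea, using the astroalign library as a stand-in for StarAlignment (not what I actually use; the error handling mirrors the way failing images are simply skipped):

    import astroalign as aa

    def register_all(images, master):
        # Align each image to the master by matching star patterns;
        # images that cannot be registered are skipped, as in PI.
        registered = []
        for img in images:
            try:
                aligned, footprint = aa.register(img, master)
                # footprint is True where the aligned frame has no data,
                # so ~footprint can serve as a coverage mask
                registered.append((aligned, ~footprint))
            except aa.MaxIterError:
                continue
        return registered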
After registering, I make an ImageIntegration of all the registered images. Use equal weights (1:1) and disable normalization. Monochrome registered images have to be converted to RGB first. Because not all images cover the full field of the TransformMaster, you will get vignetting towards the edges of the image. This is where the masks come in handy. First integrate all the masks the same way, using 1:1 weights and no normalization. Then use the stacked masks as a "flat frame": in PixelMath you make a simple formula, RGB_Integration/Masks_Integration, and create a new image using that. The result is a perfectly flat field.
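In numpy terms, the integration and the mask-based flattening boil down to something like this (array names are illustrative; in PI this is ImageIntegration plus the PixelMath expression above):

    import numpy as np

    def integrate_and_flatten(images, masks, eps=1e-6):
        # images: registered HxWx3 arrays, zero outside their coverage
        # masks: HxW coverage masks, 1 inside each frame and 0 outside
        rgb_integration = np.mean(images, axis=0)    # equal 1:1 weights
        masks_integration = np.mean(masks, axis=0)   # fraction of frames covering each pixel
        # PixelMath equivalent: RGB_Integration / Masks_Integration
        return rgb_integration / np.maximum(masks_integration, eps)[..., None]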
A few examples. First the averaged RGB stack:
Next the masks combined:
And finally a PixelMath of RGB/masks:
This is the basic method, and it will get you a long way. Next are some more detailed processes I've found useful:
Luminance is a major part of the visual experience, so I use all sorts of images for making a luminance layer. RGB, narrowband, and even monochrome images like Ha are all combined, and luminance is extracted from that. Furthermore, I make two stacks: one with the sharpest half of the images, and one with the ones with the best signal. I then combine those using masks.
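A minimal sketch of that combination step, assuming the two stacks are already aligned floating-point arrays (the luminance coefficients are one common choice, not necessarily what PI uses):

    import numpy as np

    def extract_luminance(rgb):
        # Rec. 709 luma weights as one common way to extract luminance
        return rgb @ np.array([0.2126, 0.7152, 0.0722])

    def blend_with_mask(sharp_stack, deep_stack, mask):
        # mask in [0, 1]: 1 keeps the sharp stack, 0 keeps the deep stack
        return mask * sharp_stack + (1.0 - mask) * deep_stack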
The FlatRGB stack has very good color fidelity. If you've blended RGB and narrowband, you can make separate stacks of RGB and HST palette if you want. But basically you don't need to do much to the color layer, and you definitely shouldn't change the color balance of the RGB stack.
With the luminance stack, on the other hand, you can go crazy. A Deconvolution with a generated PSF is a good start. Because the stack has such high SNR, you can apply most filtering and processing a lot more aggressively than on a normal stack without running into noise problems. That is very, very motivating.
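As an illustration, here is a sketch of PSF deconvolution using Richardson-Lucy from scikit-image; it is a stand-in for PI's Deconvolution process, and the Gaussian PSF is an assumption (PI can generate a PSF from the stars in the image):

    import numpy as np
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=15, sigma=2.0):
        # Simple Gaussian PSF as a placeholder for a measured one
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    def deconvolve_luminance(lum, iterations=30):
        # lum: HxW float image scaled to [0, 1]
        return richardson_lucy(lum, gaussian_psf(), num_iter=iterations)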
Once you've used all your tricks on the luminance layer, try starting all over. Do that several times, and combine your different versions into one (perhaps even using masks). Even here you often get a better result by stacking your efforts.
Finally, use LRGBCombination to mix your final luminance with the RGB stack.
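Outside PI, the gist of LRGB combination can be sketched by swapping the lightness channel in CIELAB space (PI's LRGBCombination differs in the details; this just shows the idea):

    import numpy as np
    from skimage.color import rgb2lab, lab2rgb

    def lrgb_combine(rgb, luminance):
        # rgb: HxWx3 floats in [0, 1]; luminance: HxW floats in [0, 1]
        lab = rgb2lab(rgb)
        lab[..., 0] = luminance * 100.0  # the L* channel spans 0..100
        return np.clip(lab2rgb(lab), 0, 1)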
There is a lot of room for improvement in the method described above, but it gives a rough idea of the principle. You can make a stack of HSO narrowband images and extract each channel for luminance use. You can use PixelMath to subtract one channel from another to pick out certain details, etc. It's very inspiring to be able to experiment with the stacks, so knock yourselves out.
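For completeness, the channel-subtraction trick is essentially a one-liner; the scale factor below is purely illustrative:

    import numpy as np

    def subtract_channel(a, b, scale=0.8):
        # PixelMath-style: max(a - scale*b, 0), e.g. Ha minus scaled red
        return np.clip(a - scale * b, 0, None)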
I've used the technique on more than 20 objects so far, and the biggest problem is finding enough CC-licensed images to use, especially for the rarer objects. So if you want your name in future credit lists, share your images. Let's face it: we are probably never going to get rich from our hobby anyway. Thanks in advance.
Clear skies
Morten
PS. Here are a few links to images I've made using the method, including M31 at 36 MPix resolution from 300+ images:
http://www.astrobin.com/119854/
http://www.astrobin.com/120204/

And a large version of the Trunk:
http://cdn.astrobin.com/images/thumbs/486ca58dfac069513c39aa23d13bad33.16536x16536_q100_watermark.jpg

Image credits for IC1396:
Adam Evans, Alexis Tibaldi, Álvaro Pérez Alonso, Andolfato, Arturo Fiamma, AstroGG, ASTROIDF, Chris Madson, Claustonberry, dave halliday, dyonis, Eric Pheterson, Frank Zoltowski, Fred Locklear, Gaby, Giuliano Pinazzi, Jorge A. Loffler, Juan Lozano, Jürgen Kemmerer, Jussi Kantola, Konstantinos Stavropoulos, Lonnie, Luca Argalia, Luigi Fontana, Michele Palma, Mike Markiw, milosz, Miquel, Morten Balling, NicolasP, Pat Gaines, Paul M. Hutchinson, PaulHutchinson, Peter Williamson, Phil Hosey, Phillip Seeber, Ralph Wagter, Richie Jarvis, RIKY, s58y, Salvatore Iovene, Salvopa, Stephane_B, Steve Yan, stevebryson, Thomas Westerhoff & Werner Mehl