Author Topic: Crowd Sourced Astro Images (Crowd Imaging or CI)  (Read 11935 times)

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Crowd Sourced Astro Images (Crowd Imaging or CI)
« on: 2014 September 12 08:55:40 »


Hello everybody

Some time ago I suggested a collaboration on The Danish Astronomical Society's forum. We ended up being 8 amateur astronomers using 7 very different telescopes. The result was the planetary nebula Jones-Emberson 1. I collected all the data, and made a composite using PI. The image was selected as Image Of The Day at Astrobin.

http://www.astrobin.com/55058/

The collaboration gave me an idea: What if I could get every amateur astronomer in the world to capture photons from the same object, just one night, and then combine all the data into one single image. The idea haunted me for a while, and I decided to make some tests, using whatever images I could find on the internet. The early tests looked promising, and I chose to try to gather as many pictures of the Andromeda Galaxy as I could, and combine them all. I ended up with 556 images, and the result blew me away.

When you have that much data, the combined image has extremely high SNR, due to several thousand hours of exposure. On the JNER1 project we ended up with 125+ hours, but thousands of hours really make a difference. Furthermore I noticed some interesting side effects. I did all the work at 100 MPixel, so I had to make substacks of 50 images to be able to handle all the data, even on a dual Xeon Mac Pro with 16 GB of memory and an SSD RAID. When I compared the substacks they looked very similar:



We all have seen images of the object we are working with, so we have an idea of what we would like it to look like. However, there are smaller or bigger differences in the final results. Some images are too green, others too magenta. Sometimes the guiding drifts a little, and perhaps the image hasn't been flattened correctly. All those small differences are averaged out by this method. The red version is compensated by the green one, the stars elongated in one direction are compensated by another image with stars elongated in a different direction, and so on. Of course our personal preferences will show through the averaged stacks, but people tend towards a neutral white balance in RGB and LRGB images, and PI has tools for that.

Most of all, noise is minimized to the extreme. Normally when we work on an image, there is a certain amount of exposure which is optimal. The SNR of a stack of images increases with the square root of the increase in exposure: twice the exposure gives 1.4 times better SNR, and 10 times the exposure gives 3.2 times better SNR. Furthermore, most of us struggle to find enough clear-sky nights, so normally we end up in the 1-20 hour range.
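The square-root scaling above is worth making concrete; a tiny Python sketch (purely illustrative, not part of any PixInsight workflow) shows how quickly the gains flatten out:

```python
import math

# SNR grows with the square root of total exposure time,
# so improvements flatten out quickly as you add hours.
def snr_gain(exposure_ratio: float) -> float:
    """Relative SNR improvement for a given increase in total exposure."""
    return math.sqrt(exposure_ratio)

print(round(snr_gain(2), 2))    # 1.41 (twice the exposure)
print(round(snr_gain(10), 2))   # 3.16 (10x the exposure)
print(round(snr_gain(100), 2))  # 10.0 (100x, e.g. crowd-sourced hours)
```

This is why a crowd stack worth thousands of hours can reach SNR levels that no single imager's 10-20 hours can approach.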

The following is a rough description of the method I’ve developed during tens of projects so far.

First I gather images. I now use only Creative Commons licensed images. Astrobin, Flickr, Google Images, Wikimedia Commons etc. are excellent sources. Be careful not to use copyrighted images, and also make sure that you are allowed to “modify, adapt, or build upon” the images you use. I’ve found this webpage especially useful:

http://search.creativecommons.org

However, the internet is a goldmine once you start searching. Remember that you must credit the people whose images you use, so write them down while collecting images. I personally use a spreadsheet. Also remember to share your final CI image with a CC license.

The first step is to roughly crop all the images: remove frames, text, noisy edges, etc. Then I rotate (and/or flip) each image to get roughly the same side up. Next I choose a master image, and rescale it to twice the resolution I want to end up with. I save that image as a “TransformMaster”.

Then I register all images in PixInsight using StarAlign. I use distortion correction, to compensate for the different optics, and enable “Generate Masks”. More on that later. Normally most images will register, but you might get errors; images that fail simply get skipped at the ImageIntegration stage. In most cases around 80-90% of the images work fine. The ones that didn't register can be cropped further to see if that helps, and it often does.

After registering, I make an ImageIntegration of all the registered images. Use equal weight (1:1) and disable normalization. The monochrome registered images have to be converted to RGB first. Because not all images cover the full field of the TransformMaster, you will get vignetting towards the edge of the image. This is where the masks come in handy: integrate all of them as well, using 1:1 weight and no normalization, and then use the stacked masks as a “flat frame”. In PixelMath you make a simple formula, RGB_Integration/Masks_Integration, and create a new image from that. The result is a perfectly flat field.
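Outside PixInsight, the mask-flattening step can be sketched in a few lines of numpy. This is a toy model with simulated partial-coverage frames (the sizes, counts and noise levels are made up), not the actual PixelMath, but the arithmetic is the same division of the data stack by the coverage stack:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, w = 8, 64, 64  # toy numbers: 8 frames of 64x64 pixels

# Simulated registered frames: each covers only a vertical band of the
# master field, so pixels outside its footprint stay zero.
frames = np.zeros((n, h, w))
masks = np.zeros((n, h, w))  # 1 where a frame actually has data
for i in range(n):
    x0 = int(rng.integers(0, w // 2))
    x1 = int(rng.integers(w // 2 + 1, w + 1))
    masks[i, :, x0:x1] = 1.0
    frames[i, :, x0:x1] = 0.5 + 0.05 * rng.standard_normal((h, x1 - x0))

# Equal-weight averages (ImageIntegration with 1:1 weights, no normalization).
rgb_stack = frames.mean(axis=0)
mask_stack = masks.mean(axis=0)  # per-pixel coverage fraction

# PixelMath equivalent: RGB_Stack / Masks_Stack. Dividing by the coverage
# fraction removes the fake vignetting caused by partial overlap.
flat = np.divide(rgb_stack, mask_stack,
                 out=np.zeros_like(rgb_stack), where=mask_stack > 0)

print(float(flat[:, w // 2].mean()))  # close to 0.5, the injected signal level
```

The raw `rgb_stack` dims toward the edges because fewer frames cover them; after the division, every covered pixel sits back at the true signal level.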

A few examples. First the averaged RGB stack:



Next the masks combined:



And finally a PixelMath of RGB/masks:



This is the basic method and will get you a long way. Next some more detailed processes I’ve found useful:

Luminance is a major part of the visual experience, so I use all sorts of images for making a luminance layer. Some RGB, others narrowband, and even monochrome data like Ha are all combined, and luminance is extracted from that. Furthermore, I make two stacks: one with the sharpest half of the images, and one with the best signal. I then combine those using masks.

The FlatRGB stack has very good color fidelity. If you’ve blended RGB and Narrowband, you can make separate stacks of RGB and HST palette if you want. But basically you don’t need to do much to the color layer, and you definitely shouldn’t change the color balance of the RGB stack.

With the LuminanceStack, on the other hand, you can go crazy. A Deconvolution with a generated PSF is a good start. Because the stack has such high SNR, you can apply most filtering and processing a lot more aggressively than on a normal stack, without running into noise problems. That is very motivating.

Once you’ve used all your tricks on the Luminance layer, try to start all over. Do that several times, and combine your different versions into one (perhaps even using masks). Even here you often get a better result by stacking your efforts.

Finally use LRGB combination to mix your final Luminance with the RGB stack.

There is a lot of room for improvements in the method mentioned above, but it gives a rough idea of the principle. You can make a stack of HSO narrowband, and extract each channel for luminance use. You can use PixelMath to subtract one channel from another to pick out certain details etc. It’s very inspiring to be able to experiment with the stacks, so knock yourselves out :)

I've used the technique on more than 20 objects so far, and the biggest problem is finding enough CC-licensed images to use, especially for the rarer objects. So if you want your name in future credit lists, share your images. Let's face it: we are probably never going to get rich from our hobby anyway. Thanks in advance.

Clear skies

Morten :)

PS. Here are a few links to images I’ve made using the method, including M31 at 36 MPix resolution from 300+ images:

http://www.astrobin.com/119854/

http://www.astrobin.com/120204/

And a large version of the Trunk: http://cdn.astrobin.com/images/thumbs/486ca58dfac069513c39aa23d13bad33.16536x16536_q100_watermark.jpg

Image credits for IC1396:

Adam Evans, Alexis Tibaldi, Álvaro Pérez Alonso, Andolfato, Arturo Fiamma, AstroGG, ASTROIDF, Chris Madson, Claustonberry, dave halliday, dyonis, Eric Pheterson, Frank Zoltowski, Fred Locklear, Gaby, Giuliano Pinazzi, Jorge A. Loffler, Juan Lozano, Jürgen Kemmerer, Jussi Kantola, Konstantinos Stavropoulos, Lonnie, Luca Argalia, Luigi Fontana, Michele Palma, Mike Markiw, milosz, Miquel, Morten Balling, NicolasP, Pat Gaines, Paul M. Hutchinson, PaulHutchinson, Peter Williamson, Phil Hosey, Phillip Seeber, Ralph Wagter, Richie Jarvis, RIKY, s58y, Salvatore Iovene, Salvopa, Stephane_B, Steve Yan, stevebryson, Thomas Westerhoff & Werner Mehl
« Last Edit: 2014 September 12 09:40:48 by MortenBalling »

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4638
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #1 on: 2014 September 12 08:59:23 »
this is really interesting - i sort of thought the fact that all the images are stretched, and stretched differently, would negatively affect the outcome. but it seems not.

very cool.

rob

Offline chris.bailey

  • PixInsight Addict
  • ***
  • Posts: 235
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #2 on: 2014 September 12 09:08:31 »
Very cool indeed. I have done combinations of four or five people's images, but only in calibrated FITS format; this is in an altogether different league. The masks trick is a great one!

Chris

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #3 on: 2014 September 12 09:32:27 »
Thanks :)

@Rob & Chris

In a perfect world, having all the data as linear FITS would be awesome, but just keeping track of the images used here is quite a task. I first thought about emailing as many people as I could, but I also learned from the JNER1 project that it's a lot of work just to keep track of what you receive. Sometimes members sent me new stacks that included older data, and I tried to avoid using the same photons twice, as that just introduces more noise. The more data the merrier. I've made tests that I sadly can't show publicly, because they included copyrighted material, but the Draco Dwarf Galaxy and M78 look pretty cool when one is able to stretch the field as much as this technique allows. I know the field on the M31 is pretty deep, but I haven't found a way to measure it on nonlinear data.

If you have full coverage in all the datasets then this method isn't really optimal; normal stacking with weights based on SNR etc. is better. However, the mask flattening is useful whenever the coverage is not 100%. The idea behind CI is that you use a truckload of data, and therefore you can accept losing a little information by stacking with a simple average. I'm finding new techniques all the time as I go along, and if I have many images, I make separate stacks with sharp images, low noise images, low exposure (for example for the Trapezium), and so on. I then combine those into a luminance master for further processing.
« Last Edit: 2014 September 12 09:54:10 by MortenBalling »

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4638
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #4 on: 2014 September 12 12:08:52 »
so how do you deal with differing FWHM in the different images? or if you limit yourself to similar FOVs are you seeing comparable FHWM across images?

rob

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #5 on: 2014 September 12 12:30:00 »
@Rob

I don't :)

Everything is evened out by the stacking, even FWHM. I normally start out with one of my own images, resample it to something like 400%, and use that as the TransformMaster. I haven't tried the new Drizzle features in PI yet, but working heavily oversampled has been my trick so far. After aligning I look at the MasksStack and choose a framing that will give me good coverage, typically at least 50% at the edges of the final image. That evens out the FWHM, though of course you get the best FWHM in the center of the image. Not that different from stacking my own data, especially early on, when I used a large chip on a C8, which has a small corrected field. Bad combo!

After integration the image/stars are rather soft all over the field, so the first thing I do is make a PSF and a Deconvolution to see what that does. Many times it's easier to use manual Deconvolution (in steps), using masks made with ATWT. Oh, and by the way: Van Cittert is very effective on some images, because the noise is so low. I make several star masks for small and bigger stars, and use a lot of subtle MorphologicalTransform (with amount set to 0.1-0.7). I've also started using two stacks: one with the sharpest images, and one with every image (and thus the best SNR). Then I use a combination of the MasksStack, LuminanceMattes and wavelets to blend the sharp stack onto the one with the best SNR. A simple LumMatte will get you far.

But all in all, it's statistics (and distortion correction) that does most of the job.

Morten

Edit. The combination of the sharpest detail in the brightest areas, and the softer areas in the darker parts of the field, gives a depth-of-field effect that I like. I've also always been a sucker when it comes to spikes (to the point of using piano wire on my refractor :P), but this method gives some beautiful spikes (without cheating). I'm currently working on M45, and wow does that have some nice spikes.

2nd edit. A non-scientific experience: I've been working with visual effects in the movie industry for almost 25 years. Back when computers were slow, I once tried to downsample 2K images (2048x1536 pixels) to SD PAL (720x576). That removes 8/9 of the image information. Then I upscaled the SD PAL material back to 2K, using a Lanczos algorithm, which uses ringing to introduce sharpness while upsampling. The difference when you blinked between the original and the resampled images was amazingly small. In the cinema, most people won't notice the difference. I think it's due to two things. First, our visual system (eye/brain) fills in the missing details, just the way it compensates for the eye's blind spot, and secondly, you can store a lot of information in a pixel. When you upscale and sharpen CI images, you get the information smeared out, and then sharpened. Hope this babble makes sense ;)
« Last Edit: 2014 September 12 13:20:34 by MortenBalling »

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #6 on: 2014 September 13 10:58:50 »
Do you know this paper http://arxiv.org/abs/1406.1528 ? They use rank-based statistics to estimate the pixel values, even if the images are non-linear.
Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #7 on: 2014 September 13 11:31:17 »
Do you know this paper http://arxiv.org/abs/1406.1528 ? They use rank-based statistics to estimate the pixel values, even if the images are non-linear.
Georg

Hi Georg.

No, I didn't know that one. I'll give it a try :) I've been thinking about making an image of a dark area of the sky, like behind M51, using this method, to see how deep you can go with a lot of exposure.

Right now, I'm working on M45, which is my favorite part of the (Northern) night sky. I'm going to try to be extra thorough with that one, trying to keep the nebulosity soft but detailed, and the stars as small as possible, so that hopefully IC396 will be clearly visible. I'm using 252 images, and am aiming for 100 MPix.

Cs (not here  ;))

Morten

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #8 on: 2014 September 13 21:02:08 »
@Georg

Thanks for the link. Very interesting article! I've only read it quickly so far but there seems to be several good ideas that I'll have to try. My goal has primarily been visual, but we share the same problems, and the way they enhance is especially interesting.

Morten  :)

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4638
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #9 on: 2014 September 13 21:13:03 »
i have been messing around with this with small #s of images (maybe 20 or so) and i'm finding that the mask integration does not completely flatten the image. i have not really spent any time thinking about what's wrong though.

also it seems like if you have "clip low pixels" turned on in ImageIntegration then you'll get an image which is equivalent to the integration of all the mask images in the low clipping output image. again not totally sure if it's 100% the same.

rob

Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #10 on: 2014 September 14 05:08:31 »
@Rob

I think 20 images is too few. From what my own (visual and non-scientific) experiments show, 50 is a good minimum. 50 images also seems to be the minimum for getting similar colors on different stacks (like the Andromeda substacks shown above).

Using smaller numbers of images can be done, but then you will have to preprocess them separately before stacking. Just a simple flattening of the field (ABE, or DBE depending on your time), LinearFit and ColorCalibration will help, but that is more time consuming than gathering 30 more images. Also you lose the interesting averaging of the colors.
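The color-averaging effect can be illustrated with a toy simulation (assuming zero-mean random color casts with a made-up spread; real casts from real imagers won't be this well behaved):

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_cast(n_images: int) -> float:
    """Worst remaining RGB channel bias after averaging n_images frames,
    each carrying an independent random color cast (sigma = 0.1)."""
    casts = rng.normal(0.0, 0.1, size=(n_images, 3))
    return float(np.abs(casts.mean(axis=0)).max())

# With ~50 images the residual cast is typically several times smaller
# than with 5, matching the "50 is a good minimum" rule of thumb.
print(residual_cast(5))
print(residual_cast(50))
```

Under this model the residual bias shrinks like 1/sqrt(N), which is why below a few dozen images the individual casts still show through and per-image calibration becomes necessary.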

Great if the concept is catching on! :) That was sort of my idea: the sky is very big, and I can't make Crowd Images of it all. I'm looking forward to seeing what you get out of it. Also, sharing experiences is great for learning. I've been giving lectures about the subject (and smaller collaborations) to Danish amateur astronomy societies, and among others, a guy like Johannes Jensen (he's on Astrobin) quickly caught on.

@Georg

Hope you haven't been trying to find IC396 in the Pleiades. When I wrote it, my head was spinning full of numbers. Of course I meant IC349:



Cs

Morten
« Last Edit: 2014 September 14 05:41:21 by MortenBalling »

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4638
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #11 on: 2014 September 14 16:41:17 »
whoops - the low rejection map has to be inverted in order to equal the integration of the masks.

i have to say the hardest part of this process is scouring the web for appropriate images. obviously dustin already solved this problem and i guess the actual reason why astrometry.net was his PhD was to get the community to "do his work for him" for the real projects like comet orbit extraction, etc. but a quick trip to astrometry.net yields a whole lot of duplicate images, with most of the unique ones being unsuited for this purpose (at least on M51, more popular targets may fare better). interestingly when you land on one of the image pages, there's an "Enhance!" button which i suppose runs the algorithm in the paper that Georg pointed to. the results aren't that hot. but i suspect it does not do too well unless you really guide it with good input images.

rob



Offline MortenBalling

  • Member
  • *
  • Posts: 74
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #12 on: 2014 September 14 17:40:26 »
@Rob

The hardest work, as you mention, is gathering the data and writing down all the credits (I use a spreadsheet). The most time consuming, processing-wise, is the alignment. Start with a popular object if you want to publish it. I've also made some CIs using whatever best (also copyrighted) pictures I could find (including some of yours :-[), but those were for my own viewing pleasure, and interestingly enough, they weren't much better than what I've been able to pull out of Creative Commons images, as long as I use a lot of them.

From back when I started working with digital images, I've always hated noise. Every time you try to work with a noisy image it's uphill. Also, I noticed that Drizzle can be a strong tool together with sharpening enhancements. I have a very visual approach to this, and a personal philosophy that all the information is in the smear; I just need to get it out. LocalHistogramEqualization seems to be a pretty powerful tool, and because the SNR is very high and I oversampled heavily while working with the images, you can push some parameters much further than you normally would.

To sum up:

Find as many images as possible.

Crop them to remove edges.

Select one image as a TransformMaster and resample it to 200-400%.

StarAlign using Distortion Correction (remember to enable 2-D Surface Splines!), enable Generate Masks, and leave everything else at defaults for starters.

Integrate the xxxxx_r.fit files, using Average, No Weights, No Normalization. Disable all pixel rejection! Also disable Evaluate Noise and select "Average absolute deviation..." as the Scale Estimator (it speeds things up). Rename the result to RGB_Stack.

Next integrate the xxxxx_m.fit files, again using the above parameters, but this time choose "Iterative K-sigma..." as the Scale Estimator (the only one that works). Rename the result to Masks_Stack.

Finally use PixelMath to generate a new image using the simple formula RGB_Stack/Masks_Stack.

Bingo! :)

I've not had time to thoroughly read the article yet, but my next idea is to start out with a very wide-field image of Cygnus, and find as many images as I can covering parts of that area. As I wrote earlier, 50 images seems to give a good result. You can check your coverage by simply measuring the K value on the mask stack. It will be 1.0 in the center, and should be at least 0.5 at the edges if you use 100 images in total. With the Cygnus mosaic, I presume I'll have to go down to lower coverage in some areas.
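If the averaged mask stack is available as an array, the coverage check just described (K = 1.0 at the center, at least ~0.5 at the edges) could be sketched like this; `coverage_ok` and the demo array are made-up names for illustration, not part of PixInsight:

```python
import numpy as np

def coverage_ok(mask_stack: np.ndarray, min_edge: float = 0.5) -> bool:
    """True if every border pixel of the averaged mask stack keeps at
    least `min_edge` coverage (e.g. 0.5 = 50 out of 100 frames)."""
    border = np.concatenate([
        mask_stack[0, :], mask_stack[-1, :],   # top and bottom rows
        mask_stack[:, 0], mask_stack[:, -1],   # left and right columns
    ])
    return float(border.min()) >= min_edge

demo = np.ones((32, 32))   # full coverage everywhere...
demo[:, :4] = 0.6          # ...except 60% along the left edge

print(coverage_ok(demo))                 # True  (0.6 >= 0.5)
print(coverage_ok(demo, min_edge=0.7))   # False
```

Cropping the framing until this check passes is equivalent to choosing a field where enough contributors overlap at the borders.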

Morten

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4638
    • View Profile
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #13 on: 2014 September 14 21:00:10 »
yes i just scraped about 75 images of m42 off of flickr without regard to license so yeah, i won't be posting the result anywhere...

no problem using my images, especially for a project like this! i pretty much never set the license type "properly" on any service, and that's mainly an oversight/laziness.

anyway the result is quite remarkable; as you say the SNR is very high and the data lends itself well to all kinds of sharpening techniques. the most interesting thing to me is that the color apparently converges to the "correct" colors, which came as a surprise. i am starting with images that look "right" to my eye though.

i think i hit a couple of PI bugs along the way. at one point i used the Divide process to "flatten" the image and PI hung; had to attach to gdb to unwedge it. and then i tried creating the drizzle files and it hung during ImageIntegration... need to narrow these down and report them.

rob

Offline alvinjamur

  • Newcomer
  • Posts: 17
    • View Profile
    • Al Vinjamur Photography
Re: Crowd Sourced Astro Images (Crowd Imaging or CI)
« Reply #14 on: 2014 September 15 15:38:57 »
This idea is fantastic, and the processing even more so!

Folks here who are playing with the idea: could you kindly "crowd" write a workflow for this?

Technically, you would likely need about 32 images with a decent enough FWHM... but the noise would go lower the more you add. I'm wondering how the slope of the S/N curve would tail off as a function of FWHM... hmmm...
« Last Edit: 2014 September 15 15:56:38 by alvinjamur »
----

(2c) || (!(2c)) = !?