Author Topic: Tutorials Demo

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1458
    • http://www.harrysastroshed.com
Tutorials Demo
« on: 2008 May 20 05:22:12 »
Hello leaders

Most of us do not have the Hubble Space Telescope in our back yard, nor are we sitting on some mountain to obtain image data! So personally I would like to see some examples of processing with data like a lot of us have to work with, i.e. not enough exposure time, light pollution, bad seeing, poor tracking! I know that in a perfect world better data means better images, but I live in the UK and you take what you can get.

What do you think, gents?

Regards Harry Page

Offline Simon Hicks

  • PixInsight Old Hand
  • Posts: 333
Tutorials Demo
« Reply #1 on: 2008 May 20 10:45:26 »
I would just like to second Harry's suggestion! I too have trouble with some of the issues he raises, and on the West Coast of Scotland I've had about 10 hours of imaging this year... so I have to process to death any data I can get my hands on. :lol:

I am loath to suggest you drop the product development or the documentation work... both of which are really big and extremely important tasks... but maybe the next time a tutorial is created it could start with some humble data... not from NASA! :lol: Noisy, with bad star shapes, at a minimum.

By the way, I DO love to see the wonderful APODs as well... so please don't stop doing those either.

Cheers
             Simon

esraguin

  • Guest
Image Data
« Reply #2 on: 2008 May 20 11:50:16 »
Just read the comments about types of data, and I agree. It would be great if a tutorial was put together from the likes of my Meade FITS3p, so we can see how much we can expect from data collected at ground level. I'm from Oban on the West Coast of Scotland and again limited in getting long-exposure data.

Love the site and the images that I see; this inspires me to press on with what I can get from my setup.

Alex Maclean Wilson

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
Tutorials Demo
« Reply #3 on: 2008 May 20 12:24:37 »
OK folks, I get your point :)

I have a proposal for you. If you upload raw images, we can try to assemble a basic tutorial.

The raw images should already be calibrated and packed as a single zip file. If you need a site for uploading, please email me and I'll give you the details of an FTP account on our server.

Looking forward to your data ;)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1458
    • http://www.harrysastroshed.com
Tutorials Demo
« Reply #4 on: 2008 May 20 13:21:05 »
Hi Juan

Uploaded is an M3 file for your consideration!
As this is a globular, perhaps other offers could be a galaxy and a nebula.

Regards Harry Page

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
Tutorials Demo
« Reply #5 on: 2008 May 21 00:11:00 »
Thanks Harry, I'm going to download the file and will get back when I have something.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
Tutorials Demo
« Reply #6 on: 2008 May 23 02:53:33 »
Ok, here I am. Sorry for the late post, but I've been very busy setting up a new Linux box that will allow us to create the new 64-bit and 32-bit versions of PixInsight.

Harry Page has kindly uploaded a raw image of M3 to our anonymous ftp server. I have used this image to create a basic processing example, which I'll describe below. I stress the word *basic* here: don't expect a spectacular result or a sophisticated procedure, but just something basic and simple that you can use as a starting point to process your images. I think this is what was intended with this thread.

Here is the final processed image at 1024 pixels wide:

http://pixinsight.com/tmp/m3-hpage/totalstack_pi.1024.jpg

======================================================================

1. Open raw image and apply a Screen Transfer Function (STF)

http://pixinsight.com/tmp/m3-hpage/01.jpg

As you probably know already, an STF allows us to apply a nonlinear stretch to the screen rendition of an image in PixInsight. An STF doesn't modify the target image in any way, just its representation on the screen. STFs are extremely useful because they allow us to inspect and evaluate raw linear images. As you know, a deep-sky linear image acquired with a CCD or CMOS sensor usually concentrates most of the data in a narrow peak near the left end of the histogram (the shadows). For this reason, these images are very dark, almost black, when they are represented on nonlinear display media adapted to human vision, such as a computer monitor.

The ScreenTransferFunction interface is available on the Process Explorer window, under the Transfer Curves category. It is also included in the default Favorites category.

Indeed, the image has strong light pollution gradients and also shows some vignetting. This can be readily seen in the screenshot above.
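
For those who prefer to see the idea in code, here is a minimal sketch of a midtones-based screen stretch in plain Python/NumPy. This is not PixInsight code and the sample data are made up; the function shown is the standard rational midtones transfer function, which maps the midtones balance m to 0.5 while leaving 0 and 1 fixed:

```python
import numpy as np

def mtf(x, m):
    # Midtones transfer function: mtf(0) = 0, mtf(1) = 1, mtf(m) = 0.5.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Stand-in for a calibrated linear frame normalized to [0, 1].
linear = np.clip(np.random.gamma(2.0, 0.01, size=(512, 512)), 0.0, 1.0)

# Stretch only the copy sent to the screen; the image data stay linear.
screen = mtf(linear, 0.01)
print(linear.mean(), screen.mean())
```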

2. Initial Crop

http://pixinsight.com/tmp/m3-hpage/02.jpg

I decided to crop the image for two reasons. First, the main object is not centered in the frame, and as far as I know, there are no other interesting objects in it (please correct me otherwise). Second, I prefer to crop out the corners where the vignetting is severe.

The DynamicCrop process is under the Geometry category.

3. First try with DynamicBackgroundExtraction (DBE)

http://pixinsight.com/tmp/m3-hpage/03.jpg

The DBE process allows us to generate synthetic background models. Such models are accurate representations of the sky background as it appears in the image, and can have many applications. In our case, we are going to use a background model to correct for sky gradients and vignetting.

As you see on the screenshot, I used nearly default DBE parameters. I just had to decrease the  Minimum sample weight value to obtain automatically generated samples over the darkest corners in the image. Then I generated 16 samples per row by clicking the Generate button.
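
To make the modeling idea concrete, here is a minimal sketch of what any background extractor does conceptually: fit a smooth two-dimensional surface through a sparse set of background samples. DBE's actual engine is far more sophisticated; the least-squares polynomial below, with synthetic sample data, is only a stand-in for the idea:

```python
import numpy as np

def fit_background(xs, ys, values, shape, degree=2):
    # Least-squares 2-D polynomial surface through (x, y, value) samples.
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([xs**i * ys**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))

# Synthetic samples: medians measured over star-free patches of a fake gradient.
xs = np.random.uniform(0, 1023, 256)
ys = np.random.uniform(0, 1023, 256)
vals = 0.05 + 1e-5 * xs + 2e-5 * ys
model = fit_background(xs, ys, vals, (1024, 1024))
```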

4. Inspecting DBE samples

http://pixinsight.com/tmp/m3-hpage/04.jpg

The new DBE interface in PixInsight Standard is much better than the old version included in PixInsight LE. One of the most important improvements is its ability to reject small bright objects in DBE samples. You can verify how this works in the screenshot. As you see, I selected a sample that falls over some stars. The square graphic on the DBE interface represents the image pixels in the currently selected DBE sample. Black pixels in the graphic correspond to image pixels that are being completely rejected, and hence not considered as part of the background. White pixels are pure background pixels, and gray pixels are weighted according to their statistical likelihood of belonging to the background.

5. Evaluating the generated background model

http://pixinsight.com/tmp/m3-hpage/05.jpg

This screenshot shows the generated background model. It isn't a good model, for an obvious reason: it has a bright central region, which reveals that the model has included too many pixels from cluster stars. If we used this model to correct for uneven illumination, we would overcorrect the central regions.

So let's restart from scratch :)

6. Controlling the Tolerance DBE parameter

http://pixinsight.com/tmp/m3-hpage/06.jpg

The Tolerance DBE parameter controls how restrictive the pixel rejection algorithm is. A lower tolerance means that more pixels will be rejected and considered as not belonging to the background. As you can see in the screenshot, I lowered Tolerance to 0.25. Other than that, I just generated 16 samples per row automatically, as before.
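
Here is a minimal sketch of sigma-based rejection inside a single sample, with the tolerance expressed in sigma units; this illustrates the concept rather than DBE's actual algorithm, and the data are synthetic:

```python
import numpy as np

def background_pixels(sample, tolerance=1.0):
    # Accept pixels within tolerance * sigma of the sample's central value;
    # a lower tolerance rejects more pixels as "not background".
    center = np.median(sample)
    sigma = 1.4826 * np.median(np.abs(sample - center))  # robust MAD estimate
    return np.abs(sample - center) <= tolerance * sigma

sample = np.random.normal(0.05, 0.002, size=(15, 15))  # pure sky noise
sample[7, 7] += 0.2                                    # a star inside the sample
mask = background_pixels(sample, tolerance=1.0)
print(int(mask.sum()), "of", sample.size, "pixels accepted as background")
```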

7. DBE, Second Round

http://pixinsight.com/tmp/m3-hpage/07.jpg

Now we have a much better background model. Note that it doesn't show a central prominence as in step 5. It could probably be improved further by decreasing the tolerance a bit more.

8. Subtracting the background model

http://pixinsight.com/tmp/m3-hpage/08.jpg

The PixelMath interface can be used to operate with images in many different and sophisticated ways. In our example, we'll use it just to subtract the background model from the original image. As you can see, this is very easy: just write the corresponding expression (formula) using the identifiers of the involved images.

In the expression, $target means "the image to which I'll apply PixelMath". It could be replaced with "totalstack" and the result would be the same in this case.

Why subtract? Because sky gradients due to light pollution are additive effects, i.e. they are caused by light that is added to the image during the exposure. Vignetting, on the other hand, is a multiplicative effect: it attenuates the light in a constant way (for a given pixel) before it reaches the sensor during the whole exposure. So to correct for vignetting we should divide the image by the background model, just as in a standard flat-fielding operation. We actually have a mixture of additive and multiplicative uneven illumination effects in this example. In such cases it is customary to subtract the background, which usually gives sufficiently accurate results (mainly because sky gradients are usually much stronger than vignetting).
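
The distinction is easy to demonstrate numerically. Below is a minimal sketch with synthetic data: the gradient comes out by subtraction, while the vignetting comes out by division, exactly as a flat field would; all numbers are invented for illustration:

```python
import numpy as np

yy, xx = np.mgrid[0:512, 0:512].astype(float)
sky = np.full((512, 512), 0.02)                          # true, flat background

gradient = 1e-4 * xx                                     # additive light pollution
vignette = 1.0 - 2e-7 * ((xx - 256)**2 + (yy - 256)**2)  # multiplicative falloff

observed = sky * vignette + gradient                     # what the sensor records

fixed_add = observed - gradient                          # subtraction, as in PixelMath
fixed_mul = fixed_add / vignette                         # division, like flat fielding
print(np.allclose(fixed_mul, sky))                       # True: both effects removed
```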

9. Background subtracted

http://pixinsight.com/tmp/m3-hpage/09.jpg

This is what we get after subtracting the background model from the original image. The result is not satisfactory. The reason is that there are some uneven illumination artifacts, seen as dark blotches, distributed over the whole image. This is likely due to a less-than-perfect image calibration; specifically, to bad flat field frames. In addition, the upper right corner still shows some vignetting. So we haven't finished our work with DBE: another background modeling/correction step is necessary.

10. Second DBE iteration

http://pixinsight.com/tmp/m3-hpage/10.jpg

Here is the second DBE parameter set that I used. This time the Tolerance value is much more restrictive: 0.1. This is because after the first DBE iteration we have a much more uniform background, so we only have to correct for relatively slight irregularities.

I generated many more samples this time: 24 per row. This is necessary to model the relatively small dark artifacts that are to be fixed.

Note also that I manually edited a few samples (deleted some, moved others) near the cluster. This is to prevent contributions from stars to the new background model.

11. Second DBE background model

http://pixinsight.com/tmp/m3-hpage/11.jpg

This is the second background model. Quite good, I'd say 8)

12. Background subtracted, second iteration

http://pixinsight.com/tmp/m3-hpage/12.jpg

Now we have a very uniform illumination over the whole image. Mission accomplished!

13. Conversion to 32-bit floating point format

http://pixinsight.com/tmp/m3-hpage/13.jpg

This step is not strictly required in this example; since we are doing quite simple things, 16 bits would suffice. In general, however, working with a 32-bit format is highly recommended.

The 32-bit floating point data format will allow us to apply some aggressive processing transformations without problems due to accumulated roundoff errors. For more complex processing work, PixInsight fully supports the 32-bit integer format, which provides a huge working space of 2^32 discrete sample values. For even more critical processing tasks (mostly high dynamic range transformations), the 64-bit floating point format (10^15 discrete values) is also available.
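
A quick numerical illustration of the roundoff argument (nothing PixInsight-specific, just synthetic arithmetic): repeated operations on 16-bit integer samples accumulate truncation error that a 32-bit floating point representation avoids:

```python
import numpy as np

x16 = np.uint16(1000)                  # a pixel stored as a 16-bit integer
x32 = np.float32(1000.0 / 65535.0)     # the same pixel as a normalized float

for _ in range(20):                    # twenty small boosts...
    x16 = np.uint16(x16 * 1.01)
    x32 = np.float32(x32 * 1.01)
for _ in range(20):                    # ...then exactly undone
    x16 = np.uint16(x16 / 1.01)
    x32 = np.float32(x32 / 1.01)

print(x16)               # drifts away from 1000: each cast truncates
print(x32 * 65535.0)     # recovers ~1000 to float32 precision
```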

Note that the image, at this point, is still linear. Remember that so far we have been working with an active Screen Transfer Function (STF). In the screenshot above, the STF has been reset, and for this reason the image appears very dark on the screen.

14. Nonlinear stretch with HistogramTransform

http://pixinsight.com/tmp/m3-hpage/14.jpg

The screenshot speaks for itself. As always, this step is crucial and must be done very carefully; what the final image becomes depends largely on how this initial stretch is performed. Remember these two rules (a small code sketch applying them follows below):

- Do not clip a single pixel at the highlights. Never. No excuses.
- The sky is not black. Don't pretend that.
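
As promised, here is a minimal sketch, on synthetic data, of a stretch that honors both rules: the shadows point clips nothing, the clipped-pixel count is checked explicitly, and the background mean stays visibly above zero. Again, this is illustrative Python, not PixInsight's implementation:

```python
import numpy as np

def histogram_stretch(img, shadows, midtones):
    # Shadows clipping followed by the midtones transfer function.
    t = np.clip((img - shadows) / (1.0 - shadows), 0.0, 1.0)
    return ((midtones - 1.0) * t) / ((2.0 * midtones - 1.0) * t - midtones)

img = np.clip(np.random.gamma(2.0, 0.01, size=(1024, 1024)), 0.0, 1.0)

shadows = img.min()                    # rule 1: clip nothing
out = histogram_stretch(img, shadows, midtones=0.01)
print(int((out >= 1.0).sum()), "pixels clipped at the highlights")  # 0
print(round(float(out.mean()), 3), "mean sky level (rule 2: keep it above 0)")
```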

15. HDR Wavelet Transform

http://pixinsight.com/tmp/m3-hpage/15.jpg

The HDRWaveletTransform process (Wavelets category) is an ideal tool for globular clusters. This is because the core of a globular cluster, like the bright cores of most galaxies, poses a high dynamic range problem.

Definitely, forget other primitive approaches (yes, I refer to DDP) and take advantage of cutting-edge algorithms and implementations in PixInsight. The screenshot above shows what I'm talking about.

The HDR wavelet transform algorithm doesn't work well with linear data. This is why we have to apply it after the initial nonlinear stretch.
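
For readers who want to experiment outside PixInsight, here is a minimal sketch of the general principle behind dynamic range compression using a base/detail decomposition. This is not the HDRWaveletTransform algorithm, which works on wavelet layers; the Gaussian stand-in below only illustrates why attenuating large scales reveals the core:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamic_range(img, sigma=32.0, base_gain=0.5):
    base = gaussian_filter(img, sigma)   # large-scale brightness (core plateau)
    detail = img - base                  # small-scale structure (stars)
    # Attenuate the large scales, keep the small scales, restore the mean level.
    out = base_gain * base + detail + (1.0 - base_gain) * base.mean()
    return np.clip(out, 0.0, 1.0)

# Usage on a stretched (nonlinear) image: hdr = compress_dynamic_range(stretched)
```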

16. Building an ACDNR mask

http://pixinsight.com/tmp/m3-hpage/16.jpg

Time to go for some noise reduction. ACDNR has an integrated mask generation feature that works very well with the Real Time Preview interface.

Remember that masks protect where they are black. We want to protect the stars in the cluster's core and its halo, while we want full noise reduction over the background, where the signal-to-noise ratio is very low.
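
Here is a minimal sketch of one way to build such a mask from the image's luminance; it is not ACDNR's built-in generator, just the general recipe of smooth, rescale, and invert, so the result is black (protecting) over the cluster and white (permissive) over the empty sky:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def protection_mask(luminance, sigma=8.0):
    smooth = gaussian_filter(luminance, sigma)   # suppress pixel-scale noise
    rescaled = (smooth - smooth.min()) / (smooth.max() - smooth.min() + 1e-12)
    return 1.0 - rescaled   # black over bright stars/core, white over background
```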

17. Defining ACDNR parameters for the luminance

http://pixinsight.com/tmp/m3-hpage/17.jpg

ACDNR performs separate noise reductions for the luminance and chrominance components of the image. The most efficient method for fine-tuning ACDNR parameters is to disable the chrominance part (select the Chrominance tab and uncheck the Apply option) and work with one or more previews in Luminance display mode.

Edge protection parameters must be accurately adjusted to avoid damaging significant image structures. These are the most critical ACDNR parameters. Experience and practice are required to achieve the best results; noise reduction is always a delicate task and each image is a completely different problem.

18. Use previews

http://pixinsight.com/tmp/m3-hpage/18.jpg

Always use previews defined on image regions of particular interest. They will save you a lot of time and will allow you to achieve the best possible results.

19. Defining ACDNR parameters for the chrominance

http://pixinsight.com/tmp/m3-hpage/19.jpg

Chrominance ACDNR parameters are usually less critical than their luminance counterparts. This is because most of the detail in the image is perceived through the luminance. However, applying overly aggressive noise reduction to the chrominance will destroy small-scale structures with high chrominance content. Small stars are good examples.

To work with chrominance ACDNR parameters, select the Luminance tab and uncheck its Apply option, then check the corresponding option on the Chrominance page. The "CIE a*=R b*=G" display mode is very useful for working with the chrominance. In this mode, the CIE a* component is represented as red on the screen, and the CIE b* component as green. The advantage of this display mode is that you can see the chrominance of the image completely isolated from its luminance. Another option, although less appropriate for noise reduction, is the "L*=0.5" display mode, in which the chrominance is shown in true color.
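
As a rough approximation of that display mode outside PixInsight, here is a minimal sketch using scikit-image: convert to CIE L*a*b* and show a* as red and b* as green. The normalization constants are assumptions, chosen only to bring the channels into [0, 1]:

```python
import numpy as np
from skimage import color

def chrominance_view(rgb):
    # rgb: float image in [0, 1] with shape (height, width, 3).
    lab = color.rgb2lab(rgb)             # L* in [0, 100]; a*, b* roughly [-128, 127]
    a = (lab[..., 1] + 128.0) / 255.0    # a* mapped to the red channel
    b = (lab[..., 2] + 128.0) / 255.0    # b* mapped to the green channel
    return np.dstack([a, b, np.zeros_like(a)])   # blue stays at zero
```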

20. Chrominance noise reduction in "CIE a*=R b*=G" display mode

http://pixinsight.com/tmp/m3-hpage/20.jpg

Another screenshot, mainly for before/after comparison.

21. ACDNR results

http://pixinsight.com/tmp/m3-hpage/21.jpg

Finally, noise reduction must be tested on some previews for both luminance and chrominance (in the normal RGB display mode, of course) before applying the process to the image. I think the result obtained is quite good in this example, especially considering the large amount of noise we had to deal with.

22. Additional stretch after noise reduction

http://pixinsight.com/tmp/m3-hpage/22.jpg

After noise reduction, the histogram of the image is no longer dominated by the noise. This has a very nice consequence: there is an unused part of the histogram at the shadows, which can be clipped to improve the overall contrast.

In the screenshot, note that I clipped just 52 pixels in the image (0.0015%). The number of clipped pixels must always be watched carefully in the HistogramTransform interface.
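
The bookkeeping is easy to reproduce: 52 pixels at 0.0015% implies an image of roughly 52 / 0.000015 ≈ 3.5 million pixels. A minimal sketch of the check, on synthetic data sized to match:

```python
import numpy as np

def shadow_clip_stats(img, shadows):
    # How many pixels a given shadows point would drive to zero.
    clipped = int((img < shadows).sum())
    return clipped, 100.0 * clipped / img.size

img = np.clip(np.random.gamma(2.0, 0.01, size=(1863, 1863)), 0.0, 1.0)  # ~3.47 Mpx
print(shadow_clip_stats(img, np.percentile(img, 0.0015)))  # roughly (52, 0.0015)
```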

23. Large scale noise reduction

http://pixinsight.com/tmp/m3-hpage/23.jpg

If desired (not actually necessary in this example), a second instance of ACDNR can be applied for noise reduction at larger scales. Note the very protective mask and edge protection values, combined with a large standard deviation of the ACDNR filter.

======================================================================

Final considerations

One problem with this image is that it is very weak in the reds. For this reason we don't see red stars in the cluster, which is a pity because they are an important ingredient of globular cluster images. For this reason, too, I haven't applied a color saturation boost, which would normally be a standard step in a processing job like this one: increasing color saturation in the reds would, in this case, generate a large amount of noise without any real benefit. I don't know the cause of this weakness in the reds, but I suspect that the problem originates in the preprocessing.

As you'll see in all the screenshots included in this example, I saved all processes as process icons on PixInsight's workspace. These icons can be saved as a .psm file for later use. Process icons are very important in PixInsight, and are one of its strongest features. Here is the .psm file that I saved for this example:

http://pixinsight.com/tmp/m3-hpage/m3_hpage.psm

To load this file, right-click on the workspace and select Load Process Icons from the context menu.

The image is very nice and, once correctly processed, it is much better than many globular cluster images posted regularly on forums, even many acquired with superior instruments. As you can see, the fact that you live in a light-polluted area must not stop you from imaging the deep sky.

Thanks to Harry Page for letting us work with his raw data.

Time permitting, I'll be glad to give more raw data a try, if you think I can help.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1458
    • http://www.harrysastroshed.com
Tutorials Demo
« Reply #7 on: 2008 May 23 10:47:03 »
Hi Juan

Thanks for doing this; I appreciate the time you have spent. Even though your software is very powerful, it's a bit daunting when you first start, and I am sure this will improve when all the documentation is complete.
With regard to the data I provided, I should say the following:

1) M3 was not centred because I had a problem finding a guide star when using an OAG (I use this as I get differential flexure with a guide scope)

2) Astroart does seem to produce images that are slightly pink, i.e. not red enough! Now you know why I asked about star saturation in another thread.

3) There is in the background the merest hint of some small background galaxies

4) Somebody has better equipment than me!!!

For viewers' info, this was imaged as follows:


1) 14" Newtonian (fork mounted): www.harrysastroshed.co.uk
2) Starlight Xpress SXV-M25 with IDAS filter
3) Guided with an SXV autoguider and OAG
4) Imaged under the full Moon, which is why it lacks depth! (Cannot waste a clear night in the UK.)
5) Taken over two nights with a total exposure of 5 hours (10-minute subs)
6) No darks were used, but flats and bias frames were (still need to work on my flats)

Regards Harry Page

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1458
    • http://www.harrysastroshed.com
Tutorials Demo
« Reply #8 on: 2008 May 23 13:37:11 »
Hello Leaders

Here is my little go, and I put a little more colour in!

http://tinyurl.com/57eh3a

Regards Harry Page

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
Tutorials Demo
« Reply #9 on: 2008 May 26 01:35:50 »
Hello Harry,

This version is very nice! How did you manage to achieve that color saturation? Did you preprocess the image in a different way?

Excellent result, especially taking into account the conditions under which this image was taken, Moon and all. Way to go ;)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1458
    • http://www.harrysastroshed.com
Tutorials Demo
« Reply #10 on: 2008 May 26 05:16:53 »
Hello Juan

To increase colour saturation I followed one of your other tutorials and created a mask with the à trous wavelet transform (had a few tries at this before I found one that worked), and after applying the mask I used the Curves tool to apply a strong saturation stretch.

It would be nice to see some other worked examples. So come on, world: send an image to Juan and let him work some magic!

Regards Harry