Author Topic: A few PI new user questions  (Read 4943 times)

Offline h0ughy

  • PixInsight Addict
  • ***
  • Posts: 226
A few PI new user questions
« on: 2009 August 27 17:03:55 »
How do you process a DSLR raw file in Pixinsight?

This is just one of those loaded questions, I suppose I am after a technical check list from start to finish.

Scenario – let's say a Canon 350D or 20D – you have taken 4 hours' worth of a heavily starred field with nebulosity within it, and taken darks, flats and bias shots...

Can PixInsight stack and blend these files?  Or do you have to use DeepSkyStacker or Images Plus, etc.?

Secondly, what is the process for the stacked and calibrated shot?  Say there is a gradient apparent in the image, but heaps of nebulosity and a lot of different-sized stars, etc.

Do you start with DBE first or levels?  When does the histogram function come into play, and how does the screen transfer function work when you apply it to the image?  (I have done this a few times, getting great results, but then when I save the image and view it outside PixInsight, in another package or an internet viewer, the image has lost everything I thought I did to it.)

A guiding hand would be appreciated

Doghouse Observatory

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: A few PI new user questions
« Reply #1 on: 2009 August 28 16:58:08 »
I'll try to answer your questions.

Quote
Can Pixinsight stack and blend these files?  Or do you have to use Deep sky stacker or Images Plus etc?

PixInsight can align and stack your images. The required tools are, respectively, StarAlignment and ImageIntegration. Honestly, I think both tools are state-of-the-art implementations of the best algorithms and techniques currently available to carry out these tasks.

The only thing PixInsight can't do (for now) is calibrate your images by applying flat frames, bias frames and dark frames. Well, actually, PixInsight can be used for calibration, but through manual procedures that require much more work than automated systems (although they can be implemented to achieve more accurate results). As you probably know, we are working on a calibration tool for PixInsight. Hopefully a first version will be available very soon.
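For reference, the standard calibration arithmetic that such a tool automates can be sketched as follows. This is a simplified model in Python with NumPy, not PixInsight code: it ignores dark-frame scaling, hot-pixel rejection and other refinements, and assumes the master dark already includes the bias signal (typical for DSLR darks taken at the same exposure and temperature as the lights).

```python
import numpy as np

def calibrate(light, dark, flat, bias):
    """Basic frame calibration (simplified sketch).

    All inputs are float arrays of the same shape. The master dark is
    assumed to include the bias signal.
    """
    # Remove the thermal and bias signal from the light frame.
    light_cal = light - dark
    # Bias-subtract the flat, then normalize it to unit mean so that
    # dividing corrects vignetting without changing overall brightness.
    flat_cal = flat - bias
    flat_norm = flat_cal / flat_cal.mean()
    return light_cal / flat_norm
```

With a perfectly uniform flat field the division is a no-op, and the result is simply the dark-subtracted light frame.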

In the meantime I recommend the excellent DeepSkyStacker application for calibration of DSLR images. Of course DSS can also align and stack your images if you prefer to use it for these tasks instead of PixInsight.

Quote
Do you start with DBE first or levels?

You can do it in both ways, but my advice is to apply DBE while the image is linear, that is before any nonlinear stretch with HistogramTransformation (levels? huh? whatsthat?  ;D )

Quote
When will the histogram function come into play and how does the screen transfer function work to apply it to the image?

HistogramTransformation (HT hereafter) is the best tool to apply a nonlinear stretch. A nonlinear stretch is necessary because the raw data cannot be represented on display media. That's mainly because our vision system is nonlinear (and display devices are designed to mimic its response), and also due to the typical distribution of values in a linear deep-sky image, which is inherently underexposed.

You apply a nonlinear stretch by dragging the midtones balance triangle control to the left on HT. In this way HT remaps all pixel values in the image by applying a nonlinear curve (you can see the curve drawn on HT's graphic). The curve enhances the shadows and the midtones but does not saturate the highlights (if properly used, of course). After applying a nonlinear stretch in this way, the data are representable on display media, so you can see the image.
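The curve behind the midtones balance control is the midtones transfer function (MTF). A minimal sketch in Python (the function and parameter names here are illustrative, not PixInsight's API):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: maps a normalized pixel value x in
    [0, 1] given a midtones balance m in (0, 1). Black (0) and white (1)
    stay fixed, and x == m maps to exactly 0.5."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Dragging the midtones triangle to the left corresponds to m < 0.5,
# which brightens the shadows and midtones without clipping either end:
stretched = mtf(0.25, np.array([0.0, 0.25, 1.0]))  # → [0.0, 0.5, 1.0]
```

Note how the endpoints are preserved: only the distribution of values between black and white changes, which is why a properly applied stretch does not saturate the highlights.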

Now the problem is that there are some procedures and tools that work much better when applied to linear data. Deconvolution, for example, doesn't make any sense unless applied to linear data. Color calibration is also impossible with nonlinear data. Background modeling with DBE also works better with linear images, as happens with many wavelet-based processing techniques.

But to apply those procedures, we need to see the image and what happens when we apply them. Here is where STF (Screen Transfer Function) comes into play. STF applies a nonlinear stretch (a complete histogram transformation, actually) to the screen rendition of the image, but it does not modify actual image data in any way. For that reason, STF is extremely useful to work with linear images: with STF you can anticipate what will happen when you actually apply a nonlinear transformation, while you work with your linear image applying color calibration, wavelets, deconvolution, etc. STF is just a visualization helper, not a true processing tool.
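The distinction can be illustrated in a few lines (again illustrative Python, not PixInsight code): the screen stretch is applied to a copy for display only, while the linear data stay untouched.

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function used for the screen stretch."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def screen_render(linear_image, midtones=0.25):
    """Return a stretched COPY for display; the input is never modified."""
    return mtf(midtones, linear_image)

linear = np.array([0.0, 0.01, 0.05, 1.0])   # typical faint, linear values
display = screen_render(linear)             # brightened copy for the screen
# `linear` still holds the original values, so deconvolution, color
# calibration, wavelets, etc. continue to operate on truly linear data.
```

Saving the image at this point saves `linear`, not `display`, which is exactly the surprise described in the original question.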

Hope this will clarify things a little.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline h0ughy

  • PixInsight Addict
  • ***
  • Posts: 226
Re: A few PI new user questions
« Reply #2 on: 2009 August 28 20:17:34 »
Thanks Juan - the guiding hand is much appreciated.  So would be doing the entire processing in PI - an interface for DSLR raw files would be terrific.

I still have to get my head around what is linear or nonlinear, and for that matter what the functions actually do to an image.  I thought I saw a video tutorial that either you or Harry did where the STF was dragged to the desktop then applied to an image after something was done.  Might have been dreaming as well?  It frustrated me so much, as I was able to get the image looking brilliant and thought I saved it, only to realise later that I really didn't.  I am still holding the door open on the brain production line waiting for a reject LOL.

The people around me at the post-processing session after a hard night's photon catching at Queensland Astrofest were very interested in PixInsight.  I also met up with Eddie Trimarchi for a quick personal one-to-one lesson, until a brewed coffee called him back to his tent.


« Last Edit: 2009 August 28 23:38:51 by h0ughy »
Doghouse Observatory

Offline Cheyenne

  • PixInsight Addict
  • ***
  • Posts: 146
    • Link to Picasa gallery of my astronomy photos
Re: A few PI new user questions
« Reply #3 on: 2009 August 28 20:20:49 »
You might want to browse through this thread -> http://pixinsight.com/forum/index.php?topic=1273.0 on image calibration
Cheyenne Wills
Takahashi 130 TOA
Losmandy G11
SBIG STF8300M
Canon 20Da
SBIG ST-i + openPHD for autoguiding

Offline h0ughy

  • PixInsight Addict
  • ***
  • Posts: 226
Re: A few PI new user questions
« Reply #4 on: 2009 August 28 23:04:27 »
Thanks mate - I will have a wade through that.  A lot to take in, but I am sure that practice makes frustrated perfect! ;)
Doghouse Observatory

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1458
    • http://www.harrysastroshed.com
Re: A few PI new user questions
« Reply #5 on: 2009 August 29 09:40:09 »
Hi Houghy

I have made the same mistakes as you, and many others besides :D

To be clear, a linear image is usually an unstretched image, which for most people is what they start working with after flat fielding and subtracting darks,
and it is usually an image in which you can see very little.

After stretching with HistogramTransformation (now permanently altered), the image is nonlinear.

Also, what I have done is forget to disable the STF, giving the false impression that the image has been stretched.  Of course, as the STF is only a temporary alteration on your screen, saving at that point will only save the permanent alterations you have made, not the STF settings.
« Last Edit: 2009 August 29 15:03:55 by Harry page »
Harry Page

Offline h0ughy

  • PixInsight Addict
  • ***
  • Posts: 226
Re: A few PI new user questions
« Reply #6 on: 2009 August 29 15:54:28 »
Thanks for the advice Harry.  Now if only there was a magic way of making it a true image... LOL, I would have been set.  Oh well, still learning.


Doghouse Observatory