Author Topic: Simple but detailed tutorial to process a LRGB of a galaxy step by step  (Read 8802 times)

Offline mschwarz

  • Newcomer
  • Posts: 6
    • Astrophotography - Manfred Schwarz
Hello,

If you are not so experienced with PI, the following tutorial on LRGB processing of a galaxy could be useful for you.

http://www.astrophoto.at/PixInsight/

Best regards,
Manfred

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
I haven't looked yet but thanks very much for putting this together!
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline Andres.Pozo

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 927
I've done a quick read of the PDF. I think it is a good tutorial.
However, I think the color calibration part could be done better using other tools in PixInsight:
First, you should apply DBE before any color calibration, since gradients affect the color of the image unevenly across the frame. Steps 10 and 12 should be done in reverse order.
Second, with BackgroundNeutralization and ColorCalibration you can get much better results than by manually adjusting the histogram to align the peaks.

So instead of
   10) Calibrate RGB channels with Histogram transform
   11) Crop
   12) Remove gradient with DBE

I would use this sequence:
   10) Crop
   11) Remove gradient with DBE
   12) BackgroundNeutralization (although after DBE it is nearly redundant), using a preview covering a background zone as the reference image
   13) ColorCalibration, using a preview covering the whole galaxy as the reference image and the preview from step 12 as the background reference.
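For readers curious what these two tools do conceptually, here is a minimal numpy sketch. This is not PixInsight's actual implementation — the function names and the simple offset/scale model are my own simplification of the idea: equalize the per-channel background level, then white-balance against a structure reference.

```python
import numpy as np

def neutralize_background(rgb, bg_mask):
    """Offset each channel so the background region has equal medians.

    rgb: float array of shape (H, W, 3) in [0, 1]
    bg_mask: boolean array of shape (H, W) marking the background preview
    """
    medians = np.array([np.median(rgb[..., c][bg_mask]) for c in range(3)])
    target = medians.mean()
    # Subtract each channel's excess over the common background level
    return np.clip(rgb - (medians - target), 0.0, 1.0)

def color_calibrate(rgb, ref_mask):
    """Scale each channel so the reference region (e.g. the whole galaxy)
    averages to a neutral, 'white' mean."""
    means = np.array([rgb[..., c][ref_mask].mean() for c in range(3)])
    scale = means.mean() / means
    return np.clip(rgb * scale, 0.0, 1.0)
```

The real processes are more sophisticated (structure detection, per-sample statistics), but the order of operations — neutralize the background first, then calibrate color against a reference — is the same.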


EDIT: Steps 13 and 17 for scaling the brightness can be done much more easily with the MaskedStretch script, which automates the process you describe in the PDF.
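The idea behind a masked stretch can be sketched in a few lines of Python. The real MaskedStretch script iterates and solves for a target background level, so this one-pass version is only a conceptual illustration; the `mtf` formula is PixInsight's midtones transfer function.

```python
import numpy as np

def mtf(m, x):
    # PixInsight midtones transfer function: mtf(m, 0) = 0,
    # mtf(m, m) = 0.5, mtf(m, 1) = 1
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

def masked_stretch(img, midtone=0.25):
    """One pass of a masked stretch: blend the stretched image with the
    original, weighted by the (unstretched) brightness, so bright stars
    and galaxy cores are protected while faint signal is lifted."""
    mask = np.clip(img, 0.0, 1.0)
    return mask * img + (1.0 - mask) * mtf(midtone, img)
```

A faint pixel (mask near 0) receives almost the full stretch, while a nearly saturated pixel (mask near 1) is left almost untouched — which is exactly why stars don't bloat the way they do under a plain histogram stretch.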

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Good comments.

DBE performs BN by default, so unless you uncheck that box it will already be done, and running it again is a NOP. You definitely want to crop before doing anything else: the pixels you're about to crop away will affect processing, so get rid of them. The only time you want to do DBE and then crop is when you need those areas to place samples, and I imagine that's rare.
Best,

    Sander

Offline Andres.Pozo

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 927
> You definitely want to crop before doing anything else.

I am a bit radical about cropping early. I crop before aligning  ;)
I align all the images against a reference image that is already cropped. To make the reference image, I do a quick alignment and integration of a representative subset of the frames, then crop the result of this first integration.

The advantages of this method are:
  • The individual frames are interpolated only once.
  • The final alignment is done using a reference image with better SNR than any of the individual frames.
  • When playing with the rejection parameters, the statistics are calculated only on the useful part of the image.
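The two-pass workflow can be sketched as follows. `register` and `integrate` are placeholders standing in for StarAlignment and ImageIntegration — reduced here to a no-op and a plain average purely for illustration of the control flow:

```python
import numpy as np

def register(frame, reference):
    # Placeholder for star alignment (StarAlignment in PixInsight);
    # a no-op here, since only the workflow structure is being shown.
    return frame

def integrate(frames):
    # Average combine; a real integration would add pixel rejection.
    return np.mean(frames, axis=0)

def two_pass_alignment(frames, subset_idx, crop):
    # Pass 1: quick align + integrate a representative subset, then crop.
    subset = [register(frames[i], frames[subset_idx[0]]) for i in subset_idx]
    reference = integrate(subset)[crop]
    # Pass 2: align every frame against the cropped, higher-SNR reference.
    # Each frame is interpolated exactly once.
    aligned = [register(f, reference) for f in frames]
    return integrate(aligned)
```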

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Why does the SNR of the reference image matter? All you need are accurate star positions, right?
Best,

    Sander

Offline Andres.Pozo

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 927
> Why does the SNR of the reference image matter? All you need are accurate star positions, right?

With better SNR, the positions of the stars in the reference image can be determined more precisely, I think. The algorithm for finding the centroid of the stars works better with a flatter (less noisy) background.
I have not measured the gain, and it is probably quite small, but since I am going to align all the frames against the same reference anyway, why not use the best possible image?
Aligning all the frames against the same reference also simplifies the subsequent steps, and the images are interpolated only once.
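This intuition can be checked with a toy simulation: a synthetic Gaussian star plus noise, measured with an intensity-weighted centroid. The centroid here is a simplified stand-in for whatever centroiding PixInsight actually uses, so take the numbers only as a qualitative check.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (center of mass) of an image patch."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def centroid_error(noise_sigma, trials=300, seed=1):
    """RMS centroid error for a synthetic Gaussian star plus noise."""
    rng = np.random.default_rng(seed)
    ys, xs = np.indices((15, 15))
    # Gaussian star of sigma 1.5 px, true center at (7, 7)
    star = np.exp(-((ys - 7.0)**2 + (xs - 7.0)**2) / (2 * 1.5**2))
    errs = []
    for _ in range(trials):
        noisy = np.clip(star + rng.normal(0, noise_sigma, star.shape), 0, None)
        cy, cx = centroid(noisy)
        errs.append((cy - 7.0)**2 + (cx - 7.0)**2)
    return np.sqrt(np.mean(errs))
```

With this model, the RMS centroid error at a noise level of 0.1 comes out noticeably larger than at 0.01 — consistent with the idea that a higher-SNR reference gives more precise star positions, although the gain for a well-exposed reference may indeed be small.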

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
I think that's a non-issue. Since you're using dozens if not hundreds of alignment stars their relative positions are already averaged. In any case, restacking all images with the same stacked image as reference will never be more accurate than the stack itself. You can't make information.

Clearly you're welcome to do as you please, but I'm afraid in this case you're spending time for no good reason. I'd be interested to hear other opinions on this, or your mathematical/statistical explanation of why this is a good method :)
Best,

    Sander

Offline Andres.Pozo

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 927
> I think that's a non-issue. Since you're using dozens if not hundreds of alignment stars their relative positions are already averaged. In any case, restacking all images with the same stacked image as reference will never be more accurate than the stack itself. You can't make information.

I agree with your first point: after averaging hundreds of alignment stars, the residual error is usually very small. However, you cannot always find hundreds of stars in an image. If you are shooting narrowband at a long focal length, the number of usable stars can be small.

Your second argument (you can't make information) does not seem as clear to me. The alignment is usually done using the best frame as the reference, in independent pairs: each frame against the reference frame. Each aligned frame carries a small alignment error, and when you integrate all the frames, these errors are averaged. If you could somehow decrease the alignment error of each individual frame, the averaged error after stacking would be smaller.
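The averaging step can be illustrated numerically. Assuming each frame carries an independent Gaussian alignment error (an assumption of this toy model, not something measured), the residual error of an N-frame stack shrinks as 1/sqrt(N):

```python
import numpy as np

# Residual alignment error of a stack of n_frames frames, each frame
# carrying an independent Gaussian error of standard deviation sigma.
rng = np.random.default_rng(2)
sigma, n_frames, trials = 0.5, 25, 2000
per_frame = rng.normal(0.0, sigma, size=(trials, n_frames))
stack_err = per_frame.mean(axis=1)   # residual error of each simulated stack
print(stack_err.std())               # ~ sigma / sqrt(25) = 0.1
```

So anything that reduces the per-frame error — if a higher-SNR reference really does — carries straight through to a proportionally smaller stacked error.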

The question is: does using an averaged image as the reference frame reduce the alignment error? I am not sure of this, and I don't know how to prove it mathematically. However, I ran a couple of tests this afternoon, and at least in these tests the alignment error is smaller when using the averaged reference frame. The tests consisted of aligning the same frame using as reference both an averaged image and one of the frames cropped to the same area (to the nearest pixel) as the averaged image. Attached to this message is a comparison of the results of aligning two frames against the two different references. The RMS error using the averaged reference is clearly lower.

Does somebody (with fresher mathematical knowledge than mine) know how to address this problem mathematically, in order to determine whether my intuition is right or wrong?

Offline mschwarz

  • Newcomer
  • Posts: 6
    • Astrophotography - Manfred Schwarz
Hello Andres,

thank you for your comments!
As you can see, I'm a PI novice. I will change the sequence.
I didn't know about the MaskedStretch script before; thanks for the tip.

regards, Manfred