Author Topic: Work Flow  (Read 11082 times)

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
Work Flow
« on: 2007 December 02 08:40:04 »
To carry this discussion one step further, I would like to list my newly proposed workflow for linear processing and see if Juan or others at PI can hopefully keep me straight.

Step 1  Preparation of Master Frames:  This is done in CCDStack, with subframes calibrated, registered, normalized and subjected to a noise reduction algorithm before being combined by the Mean function into a Master frame (Red, Green, Blue, Luminance and sometimes Ha).

Step 2  Image Processing in PixInsight:
1.  Open Master frames in PI and use STF to stretch, or use a linear Histogram stretch
2.  Register Master frames with the Dynamic Alignment tool
3.  Noise reduction of Lum and Chrominance using ACDNR
4.  Use of wavelets, HDR or deconvolution to refine the data
5.  Combine Master frames by:
         a.  Use of LRGB Combine
         b.  Or combine RGB with LRGB Combine and then use Pixel Math to add Lum or Ha to Red, etc.
6.  Now stretch the image using gamma and/or curves (a small sketch of a gamma stretch follows below)
7.  Saturation if needed
8.  Clone tool and cropping

Please feel free to critique and correct.  Thanks
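
The "gamma" stretch in step 6 is a simple power-law remapping of normalized pixel values. As a rough illustration only (a NumPy sketch with an assumed gamma value, not PixInsight code):

    import numpy as np

    # Placeholder for a combined image, normalized to [0, 1].
    img = np.random.rand(512, 512).astype(np.float32)

    # A gamma stretch raises the midtones while leaving 0 and 1 fixed;
    # gamma > 1 brightens a linear deep-sky image.
    gamma = 2.2  # assumed value, purely for illustration
    stretched = np.power(img, 1.0 / gamma)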

   
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Work Flow
« Reply #1 on: 2007 December 02 12:48:05 »
Hi Jack,


My experience tells me that reducing the noise in the luminance before extracting all the information from it is a bad idea. You will usually delete hidden information with the noise reduction step, because these processing techniques are designed to show the image without noise in its current state.

However, chrominance noise is another matter... Once you raise the color saturation, you will degrade color quality with (usually) no possible fix. That's because, when you have noise in the chrominance, raising the color saturation will push the hues of individual pixels apart. So it's better to reduce the noise in the chrominance first and do the color processing afterwards.
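
To illustrate the order Vicent recommends, here is a conceptual NumPy sketch (not the actual PixInsight processing; the luma/chroma transform, blur radius and saturation factor are assumptions): the chrominance is smoothed first, and only then is the color saturation raised by scaling the chroma.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Synthetic noisy RGB image in [0, 1]; in practice, your RGB composite.
    rgb = np.clip(np.random.rand(256, 256, 3) * 0.2 + 0.4, 0.0, 1.0)

    # Simple luma/chroma split (BT.601-style weights), used here only to
    # separate "color" from "brightness".
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = b - y
    cr = r - y

    # 1) Reduce chrominance noise (luminance left untouched).
    cb_s = gaussian_filter(cb, sigma=1.5)
    cr_s = gaussian_filter(cr, sigma=1.5)

    # 2) Only then boost color saturation by scaling the chroma.
    boost = 1.6
    cb_s *= boost
    cr_s *= boost

    # Rebuild RGB from the smoothed, boosted chroma.
    r2 = y + cr_s
    b2 = y + cb_s
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    out = np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)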

My 0.02!
Vicent.



PS: In our last Holmes image, the color signal was *very* weak in the nucleus. Trust me, the chrominance noise was horrible in the central area! It would have been impossible to show the brown and blue hues by raising the color saturation first.

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
work flow
« Reply #2 on: 2007 December 02 14:14:41 »
Vicent, thanks for taking the time to reply. It makes a lot of sense and I will incorporate that into my process.
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Work Flow
« Reply #3 on: 2007 December 02 14:34:58 »
This is my proposed workflow for a LRGB image:

1. Open Master frames in PI and use STF to stretch (or use a linear Histogram stretch [1])
2. Register Master frames with Dynamic Alignment tool
3. RGB combination (LRGBCombination or PixelMath or ChannelCombination)
4. Define a suitable RGBWorkingSpace (e.g., 1:1:1 luminance ratios) [2]
5. LRGBCombination [3]
6. Use of wavelets or deconvolution to refine data - only for the luminance.
7. Now stretch the image using HistogramTransform
8. Apply HDRWaveletTransform if required/desired/appropriate
9. Saturation if needed [4]
10. Noise reduction of Lum and Chrominance using ACDNR and/or GREYCstoration
11. Final stretch with HistogramTransform and/or CurvesTransform
12. Clone tool and cropping

Notes:

[1] In general, don't do that :) Anyway, if you stretch the histogram, never clip the highlights.

[2] Consider also more refined luminance ratios, such as giving more weight to red and less to green. This depends on the objects. In this step, you should define a linear RGB working space (with gamma = 1). Otherwise you'll be breaking the linearity of the data. (See the sketch after these notes.)

[3] Consider applying a saturation boost and chrominance noise reduction with LRGBCombination.

[4] If you did [3] well, you probably don't need too much saturation at this stage.
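
As a rough illustration of note [2] (a sketch only, not the RGBWorkingSpace implementation): with gamma = 1 the luminance is just a normalized weighted sum of the linear channels, so the chosen ratios directly control how much each channel contributes. The 0.5:0.3:0.2 weights below are an arbitrary example.

    import numpy as np

    # Linear RGB composite in [0, 1] (placeholder data).
    rgb = np.random.rand(512, 512, 3).astype(np.float32)

    # Luminance weights for R, G, B; e.g. more weight on red for Ha-rich fields.
    w = np.array([0.5, 0.3, 0.2], dtype=np.float32)
    w /= w.sum()  # normalize so only the ratios matter

    # With gamma = 1 there is no nonlinear transfer curve, so this weighted
    # sum keeps the luminance linear with respect to the original data.
    lum = rgb @ w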

Note that instead of applying wavelets/deconvolution before LRGB, I personally prefer to do that after LRGB combination, immediately before the histogram transform. This has an important advantage: you can apply wavelets/deconvolution to the luminance and see the true effect on the RGB color image.

Having said all of that, of course each image is a completely different problem and may pose completely different challenges. This recipe is a simplification and must not be taken as the "true" path to follow.
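
To make steps 3-5 concrete: the idea is to build the RGB composite first and then replace its noisier implied luminance with the dedicated L master. LRGBCombination does this through CIE color space transformations; the sketch below uses a much cruder per-pixel rescaling purely to show the principle, and all names and data in it are placeholders.

    import numpy as np

    # Registered, linear masters in [0, 1] (placeholder data).
    rgb = np.random.rand(512, 512, 3).astype(np.float32)  # RGB composite (step 3)
    L   = np.random.rand(512, 512).astype(np.float32)     # luminance master

    # Luminance implied by the current RGB composite (1:1:1 working space).
    y_rgb = rgb.mean(axis=-1)

    # Crude luminance replacement: rescale each pixel's RGB so that its
    # implied luminance matches the L master, preserving the color ratios.
    eps = 1e-6
    scale = L / np.maximum(y_rgb, eps)
    lrgb = np.clip(rgb * scale[..., None], 0.0, 1.0)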

Another 0.02 ;)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
work flow response
« Reply #4 on: 2007 December 02 15:00:50 »
I certainly agree that each image is unique and requires some special steps or rearrangement of the workflow. It's just nice to have a general plan of attack in mind. Thanks. Let's see, we have 0.02 and 0.02 for a total of 0.04 cents - others?
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline David Serrano

  • PTeam Member
  • PixInsight Guru
  • ****
  • Posts: 503
Work Flow
« Reply #5 on: 2007 December 03 00:37:20 »
Quote from: "Juan Conejero"
3. RGB combination (LRGBCombination or PixelMath or ChannelCombination)
4. Define a suitable RGBWorkingSpace (e.g., 1:1:1 luminance ratios) [2]
5. LRGBCombination [3]


I understand that after step 3, all available channels (red, green and so on) have been combined into a single image, so I can't grasp the purpose of step 5. I guess it may have something to do with luminance, due to the existence of step 4 in between.


Quote from: "Juan Conejero"
12. Clone tool and cropping


Why not do the cropping at the beginning, thus saving processing time on portions of the image that won't be present in the final result? I think it would be best after step 3, so you perform the cropping only once and, while you're at it, get rid of the useless information at the borders (channels not fitting perfectly, etc.).

Just my 2 euro cents ;).
--
 David Serrano

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Work Flow
« Reply #6 on: 2007 December 03 01:22:16 »
Hi David :)

Quote
I understand that after step 3, all available channels (red, green and so on) have been combined into a single image, so I can't grasp the purpose of step 5. I guess it may have something to do with luminance, due to the existence of step 4 in between.


Correct. We first create an RGB composite image with the individual red, green and blue components (which are grayscale images at this point), set a linear RGB working space (to ensure linear CIE color space transformations), and then replace the "bad" luminance of the RGB composite with a "good" luminance which is supposed to improve resolution.

Of course, with narrowband images things are different, mainly because we have no separate luminance (can we have one?) and because we have to apply an arbitrary palette, so the best option is usually PixelMath.
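
For the narrowband case, the "arbitrary palette" usually amounts to a channel assignment (plus optional weights), which is why PixelMath is the natural tool. A hedged NumPy sketch of a Hubble-style SII/Ha/OIII mapping (the palette choice is an assumption, not a recommendation):

    import numpy as np

    # Registered, stretched narrowband masters in [0, 1] (placeholder data).
    sii  = np.random.rand(512, 512).astype(np.float32)
    ha   = np.random.rand(512, 512).astype(np.float32)
    oiii = np.random.rand(512, 512).astype(np.float32)

    # "Hubble palette" style assignment: SII -> R, Ha -> G, OIII -> B.
    # Any other mapping or weighted mix of the three works the same way.
    rgb = np.stack([sii, ha, oiii], axis=-1)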

Quote
Why not do the cropping at the beginning, thus saving processing time on portions of the image that won't be present in the final result? I think it would be best after step 3, so you perform the cropping only once and, while you're at it, get rid of the useless information at the borders (channels not fitting perfectly, etc.).


I agree completely. And in my workflow I forgot to mention something *very* important: cosmetic issues like cosmic rays, bad pixels, etc. should be fixed before detail enhancement, since these nuisances often turn into beasts due to ringing.

Quote
my 2 euro cents ;).


Hey, Jack, now you have 0.7 USD!  :lol:
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline avastro

  • PixInsight Addict
  • ***
  • Posts: 181
    • http://astrosurf.com/avastro/
Work Flow
« Reply #7 on: 2007 December 03 01:58:30 »
Very interesting indeed,
The workflow depends on the nature of the image and data, but the experience accumulated by the PTeam and other PixInsight wizards can keep us from going down the wrong path, even if I always learn something from making mistakes.
The difficulty is understanding where I started to go wrong.

I know that the PTeam is very busy, but it would be worth including a tutorial on the different steps of a logical workflow (it's partly done with this thread); that would greatly help the process of learning PixInsight.

Working with linear data is, in my experience, something unique among the software mainly used for astroimage processing, including PS!
Another 0.02 €.

Antoine
Antoine
Lentin Observatory
http://www.astrosurf.com/avastro/

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
.07 Euros
« Reply #8 on: 2007 December 03 07:10:09 »
This is great; I am glad we are getting so much participation. I did not expect there to be a single "correct answer" but a variety of correct answers, and that is what we are getting. This gives lots of food for thought and choices in the processing flow.

You are correct with the observation that linear processing is foreign to most astroimagers. When I discussed this with our SSRO group (all PS users and very accomplished processors), you would have thought I was a heretic. But with persistence I am at least not going to be burned at the stake <G>.

I hope we get more replies. Come on, Carlos, how about a few Chilean pesos' worth???


Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Work Flow
« Reply #9 on: 2007 December 03 08:13:53 »
Ok ok, here I am.

As a personal philosophy, I am strongly against rigid workflows, because image processing is not a cooking recipe. It is far more important to know what the different tools (processes) do and why we should use them, so that we call on them when we feel it is the right time. An image is like a sculpture: we follow a path looking at the material and bringing the artwork to the surface  :)

Well, having established that, I divide my workflow into several groups:

- Calibration
The main idea behind this is to prepare the data for the nonlinear modifications (histogram stretch). So, the word calibration is taken in a broader sense than just bias-darks-flats.

All the processes applied at this stage are done to the linear data.

Let's mention a few steps that we may follow at this stage:
If we have calibration images (bias, darks and flats), apply them (I'll assume that you know how to do that with PixelMath, or use DeepSkyStacker, for example; a basic arithmetic sketch follows after this list).
Align and stack all the frames.
Crop the result to the intersection of the frames (to avoid strange pixels at the boundaries).
Use the CloneTool to erase hot pixels, cosmic rays, etc.
Deconvolve the data. This is considered a calibration process, because the idea behind deconvolution is to restore data that was blurred by atmospheric turbulence or optical distortions. If we deconvolve color data, a different PSF is likely to be found for each channel, so I think it is best to apply it to each filter's data in separate images.
Color calibration and LRGB combine.
Deal with gradients (either DBE or ABE). Depending on the source, divide by or subtract the model.
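
The calibration mentioned in the first step of the list reduces to well-known frame arithmetic: subtract the additive bias/dark signal and divide by a normalized flat. Below is a minimal NumPy sketch of that arithmetic for one frame (synthetic placeholder data, no dark scaling or overscan handling), not a substitute for PixelMath or DeepSkyStacker.

    import numpy as np

    # Placeholder master calibration frames and one raw light frame.
    light       = np.random.rand(512, 512).astype(np.float32) + 0.1
    master_bias = np.full((512, 512), 0.05, dtype=np.float32)
    master_dark = np.full((512, 512), 0.02, dtype=np.float32)  # bias-subtracted dark
    master_flat = np.random.rand(512, 512).astype(np.float32) * 0.1 + 0.95

    # Classic calibration: remove the additive signal, then correct the
    # multiplicative vignetting/pixel response with the normalized flat.
    flat_norm  = master_flat / np.median(master_flat)
    calibrated = (light - master_bias - master_dark) / flat_norm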

- Main Adjustments

The next stage contains all the color, contrast and brightness adjustments that will be applied to the image, turning it close to the final look.
We start by applying the HistogramTransform or AutoHistogram process to define the dynamic range limits, that is, to set the black and white points. As a general recommendation, I suggest not clipping a considerable amount of data. Since noise is stronger in the shadows, I usually clip up to 0.015% there. Highlights, on the other hand, are almost pure data, so no clipping, or a very low amount, should be used. My "magical number" is 0.005%. It works well with film data :) Then, with either of those processes, raise the midtones balance so the background falls into the "correct" brightness range. You may neutralize the background at this step. (A small numeric sketch of this step follows below.)
Next come all the curves adjustments. Slight R, G or B changes should be applied if a color balance is needed. Brightness and contrast adjustments are performed using the L channel. Color saturation is better handled with the c channel.
After those two fundamental processes, the fun starts ;) There are many ways to further modify the image (ExponentialTransforms and ColorSaturation, for example). Make use of masks if needed.
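
A small numeric sketch of the black/white point and midtones step described above (illustrative only; the clipping code is a simplified stand-in, while the midtones transfer function is the one commonly used by PixInsight-style histogram tools):

    import numpy as np

    img = np.random.rand(1024, 1024).astype(np.float32)  # placeholder, in [0, 1]

    # Black/white points from clipping fractions (0.015% shadows, 0.005% highlights).
    black = np.quantile(img, 0.00015)
    white = np.quantile(img, 1.0 - 0.00005)
    stretched = np.clip((img - black) / (white - black), 0.0, 1.0)

    def mtf(x, m):
        """Midtones transfer function: maps m -> 0.5, keeps 0 and 1 fixed."""
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    # Raise the midtones so the background lands in a sensible brightness range.
    m = 0.25  # assumed midtones balance; values below 0.5 brighten the image
    result = mtf(stretched, m)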

- Noise Reduction

Simply put, make use of PI's arsenal to improve the SNR. Don't be too aggressive, and try to preserve all the structures. We may use different tools, and iterate, to get the desired result. Don't try to speed things up and be done with a single application of one process.

Usually my workflow at this stage is:
GREYCstoration, with an inverted luminance mask (see the sketch after this list).
ACDNR (stronger on the chrominance).
SCNR, to remove any green cast.
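
The inverted luminance mask in the first item of the list can be sketched as follows (conceptual NumPy only, not how GREYCstoration is actually driven): the mask is bright where the image is dark, so the strongest smoothing lands on the noisy background while bright structures are protected. The Gaussian blur merely stands in for the real noise reduction.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    img = np.random.rand(512, 512).astype(np.float32)  # placeholder luminance in [0, 1]

    # Inverted luminance mask: ~1 on the dark background, ~0 on bright objects.
    # A slight blur keeps the mask from imprinting pixel-level noise.
    mask = gaussian_filter(1.0 - img, sigma=2.0)

    # Stand-in for the real noise reduction step (GREYCstoration, ACDNR, ...).
    denoised = gaussian_filter(img, sigma=1.2)

    # Masked blend: full noise reduction where the mask is 1, none where it is 0.
    result = mask * denoised + (1.0 - mask) * img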

A short note about applying SCNR at the end. SCNR is very useful for removing green hot pixels and similar noise, as well as green halos around stars due to chromatic aberration. But if we apply it before the other techniques, it may change the hue of the background. And if we have chromatic aberration, removing the green halo changes the profile of the stars, and their color becomes very hard to fix. So if we leave it to the end, the data will be smoother, star colors will be more symmetric and usually the latter will not be affected significantly.

After the noise reduction processes, it is likely that there is a slight contrast loss and a bit of free space at the ends of the dynamic range, so new histogram adjustments should be applied. Further adjustments are optional.

- Structure Enhancement

Under this I group all the techniques that enhance structures: high-pass filters, new deconvolutions and wavelets.
In my workflow, I separate small-scale features from the medium/large ones. I process each of these "subimages" with wavelets (ATrousWavelets and/or HDRWT) and curves. The separation is done with morphological filters.
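
One simple way to perform a morphology-based scale separation of this kind (a sketch under assumptions, not necessarily the exact routine described above): a grayscale opening removes structures smaller than the structuring element, so the residual isolates the small-scale features, which can then be processed separately and recombined.

    import numpy as np
    from scipy.ndimage import grey_opening

    img = np.random.rand(512, 512).astype(np.float32)  # placeholder luminance

    # Opening with a 5x5 structuring element removes features smaller than ~5 px.
    large_scale = grey_opening(img, size=(5, 5))
    small_scale = img - large_scale  # stars and fine detail (a "top-hat" residual)

    # Process the two "subimages" separately, e.g. boost the fine detail a bit,
    # then recombine. Real processing would use wavelets/curves instead.
    boost = 1.3
    enhanced = np.clip(large_scale + boost * small_scale, 0.0, 1.0)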

Again, after combining the different scales, new adjustments may be needed. At this stage of things, they are the final ones, so we should make sure that we are happy with the result.

If needed, a specific star-shaping routine may be applied at the very end. This is done with a star mask (we may reuse the small-scale subimage we had previously, modifying it with histogram, wavelet and morphological transforms).


So, here are my 10 pesos ;)
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
RGBWS
« Reply #10 on: 2007 December 03 11:19:10 »
The RGB working space tool is not something I have been using.  Are its settings and uses covered in one of the tutorials?
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Work Flow
« Reply #11 on: 2007 December 03 13:58:54 »
Look for older tutorials. I'm almost sure that it has been covered.

Anyway, changing the RGBWS is quite straightforward. Just change the weights of the R, G and B channels for a better luminance calculation. For example, a 1:1:1 ratio may be good for rich, colorful objects. If you have a large H-alpha nebulosity, maybe a .4:.3:.3 ratio is better suited. Anyway, in my humble opinion, even though changing the RGBWS may yield better results, I don't consider it critical in most cases. If you feel ready to explore changing those settings, go ahead.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline C. Sonnenstein

  • PixInsight Addict
  • ***
  • Posts: 262
    • http://astrosurf.com/astro35mm
Work Flow
« Reply #12 on: 2007 December 03 14:02:18 »
Hi Jack:

You can find some information about RGBWorkingSpace process in the PixInsight LE documentation:

http://pixinsight.com/doc/LE/14_color_spaces/color_spaces.html

Remember that if you do this with linear data, you must set the gamma function to 1.0 to avoid losing the data's linearity.
Carlos Sonnenstein

Offline bosch

  • PixInsight Addict
  • ***
  • Posts: 123
Work Flow
« Reply #13 on: 2007 December 03 14:56:41 »
I more or less follow Juan Conejero's scheme; at point number 6 I also apply some DBE.
Also, between steps 4 and 5 (after setting the working space parameters to 1), I do a background homogenization. I used to do it by adjusting the ChannelMatch sliders so that the readings of the three RGB channels were equalized, but now I use the $target-(med(background)-... method proposed by J. Conejero in his recent tutorial.
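
The $target-(med(background)-... approach mentioned above boils down to subtracting, for each channel, the difference between that channel's background median and a common reference level. A hedged NumPy sketch of the arithmetic (the background region and the choice of reference are assumptions; the exact expression is in the tutorial referred to):

    import numpy as np

    rgb = np.random.rand(512, 512, 3).astype(np.float32)  # placeholder RGB composite

    # A preview/region that contains only sky background (coordinates assumed).
    background = rgb[10:80, 10:80, :]

    # Per-channel background medians and a common reference level
    # (here the mean of the three medians).
    med = np.median(background.reshape(-1, 3), axis=0)
    reference = med.mean()

    # Shift each channel so all three backgrounds read the same level.
    equalized = np.clip(rgb - (med - reference), 0.0, 1.0)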

Offline OriolLehmkuhl

  • PixInsight Addict
  • ***
  • Posts: 177
    • http://www.astrosurf.com/brego-sky
Work Flow
« Reply #14 on: 2007 December 04 03:47:38 »
Hello,

Although we do not have too much experience yet with LRGB combination (we only have two images with our new Artemis), we use the following methodology:

1 RGB:
   1.1.  Open Master frames use STF to stretch
   1.2.  Register Master frames with Dynamic Alignment tool
   1.3.  RGB combination (with PixelMath)
   1.4.  Define a suitable RGBWorkingSpace
   1.5.  Background neutralization (channel math)
   1.6.  Gradient subtraction with DBE
   1.7.  Clone tool (cosmetic corrections)
   1.8.  Now stretch image using HistogramTransform
   1.9.  Curves or histogram adjustments to achieve a 'nice' color balance.
   1.10. Noise reduction using ACDNR and/or GREYCstoration
   1.11. Saturation if needed
   
2 L:
   2.1.  Open Master frames use STF to stretch
   2.2.  Register Master (L with RGB) with Dynamic Alignment tool
   2.3.  Clone tool (cosmetic corrections)
   2.4.  Deconvolution to refine data
   2.5.  Now stretch image using HistogramTransform
   2.6.  HDR (usually one or two steps) (with star mask)
   2.7.  Curves to enhance high frequency structures (with star mask)


3 LRGB:
   3.1   LRGB Combination (color saturation and chrominance noise reduction)
   3.2   Star enhancement (morphological filters and wavelets)
   
   
The images that we have tried so far were emission nebulae, so Ha data has been used as luminance. Our next goal will be the Horsehead Nebula complex. The idea is to make an HaL-RGB, so the luminance will be a mixture of Ha and L (IR-block filter) data.
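
For the planned HaL-RGB, the synthetic luminance can be as simple as a weighted mix of the registered Ha and L masters. A minimal sketch with an assumed weight (a per-pixel maximum is another common blend):

    import numpy as np

    # Registered, linear Ha and L masters in [0, 1] (placeholder data).
    ha = np.random.rand(512, 512).astype(np.float32)
    L  = np.random.rand(512, 512).astype(np.float32)

    w = 0.5  # assumed mixing weight: how much Ha contributes to the luminance
    hal = np.clip(w * ha + (1.0 - w) * L, 0.0, 1.0)

    # Alternative: keep the brighter of the two at each pixel, which preserves
    # the Ha nebulosity without dimming the broadband stars.
    hal_max = np.maximum(ha, L)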

Regards,

Oriol & Ivette