Author Topic: PixInsight 1.5.2: Prerelease Information  (Read 23230 times)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
PixInsight 1.5.2: Prerelease Information
« on: 2009 May 26 16:29:32 »

The upcoming version 1.5.2 of PixInsight is mainly a bug fix / optimization release. However, I have decided to add a few new features that were initially scheduled for versions 1.6 and 1.7.

Perhaps the most important new feature is a significant improvement to the Deconvolution process. This improvement consists of a completely redesigned/rewritten deconvolution deringing engine. The new deringing algorithm has been devised by PTeam member Vicent Peris, and is similar to the deringing features implemented in the ATrousWaveletTransform, UnsharpMask and RestorationFilter tools. In this post I'll show you a brief processing example that demonstrates the power of these new deringing algorithms.

The screenshot below shows the original image. It is an excellent RGB CCD image of the Leo triplet region by Jordi Gallego, who has kindly given me permission to use his raw data for demonstration purposes. Image registration and integration have been carried out in PixInsight 1.5.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/Decon01.jpg

As shown on the figure, our first step is to define a linear, uniform RGB working space (RGBWS). A linear RGBWS defines a gamma value of 1.0. A uniform RGBWS has identical luminance weights for the three nominal RGB channels. Linearity is a necessary condition for a meaningful (i.e., physically justified) deconvolution. Uniformity is advisable in this case because we are going to deconvolve the luminance, so we want to gather as much information as possible in the CIE Y component (which is the linear luminance).

Note that the image seems almost grayscale in the screenshot above. This is due to the aggressive screen transfer function (STF) being applied.

The next figure shows our working preview with the original data. Of course the image is linear; we can only see it thanks to the active STF. Note that the quality of the data, in terms of signal-to-noise ratio, is exceptional.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconOriginal.jpg

The next screenshot shows the result of 30 regularized Richardson-Lucy iterations without deringing: the Gibbs phenomenon in all its splendor. Of course, the resulting image is completely useless.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconNoDeringing.jpg
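For anyone who wants to experiment with the idea outside PixInsight, here is a minimal one-dimensional sketch of the plain Richardson-Lucy iteration (a toy of my own, with no regularization and no deringing; it is not the PixInsight implementation):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=30):
    """Textbook Richardson-Lucy in 1-D: multiplicative updates that
    redistribute flux according to the PSF. Toy sketch only."""
    psf = psf / psf.sum()          # PSF must be normalized
    psf_mirror = psf[::-1]         # flipped PSF for the correction step
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a single bright "star" with a small PSF, then try to recover it.
truth = np.zeros(32)
truth[16] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(observed, psf, iterations=30)

# Deconvolution re-concentrates the flux into the central pixel.
print(restored[16] > observed[16])
```

In real, noisy data, iterating an unregularized scheme like this is exactly what amplifies noise and produces the ringing shown above.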

Below is the result of the same deconvolution with the new deringing algorithm in action.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconWithDeringing.jpg

Any deringing mechanism tends to degrade deconvolution, because deringing actually works by locally limiting the edge enhancement effect. With our algorithm, however, efficient deringing can be achieved with minimal degradation of the deconvolution in most cases. In the screenshot above, pay special attention to the star size reduction effect, which is a natural consequence of deconvolution.

In addition to a global deringing algorithm, the new Deconvolution tool provides support-driven, local deringing. Both deringing mechanisms work in tandem. A deringing support is a special grayscale image that defines pixels where additional deringing action must take place. In the case of deep-sky images, deringing supports are usually star masks. Below you can see the star mask that I generated with the StarMask tool.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconStarMask.jpg

The advantage of working with both types of deringing is that we can build a star mask to provide special protection to high-contrast image features, such as bright stars, while global deringing acts elsewhere. The benefit of this divide-and-conquer strategy is twofold. On one hand, the star mask can be very permissive, so it is very easy to generate (we don't have to include all stars in the mask, just the brightest ones). On the other hand, we can generally decrease the global deringing strength, which in turn prevents degradation of the deconvolution process.
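As a toy picture of the general idea (my own sketch; the actual deringing algorithm is not published, and the `apply_support` name and `amount` parameter are hypothetical), a deringing support can be thought of as a per-pixel weight that pulls the deconvolved result back toward the original image where protection is wanted:

```python
# Toy illustration only -- NOT PixInsight's deringing algorithm.
# support = 1 keeps the original pixel, support = 0 keeps the
# deconvolved pixel; intermediate values blend the two.
def apply_support(original, deconvolved, support, amount=1.0):
    """Per-pixel blend of deconvolved data back toward the original.
    All inputs are same-length lists of values in [0, 1]."""
    return [d + amount * s * (o - d)
            for o, d, s in zip(original, deconvolved, support)]

orig  = [0.2, 0.9, 0.2]    # bright star pixel in the middle
decon = [0.1, 1.0, -0.05]  # ringing: negative overshoot next to the star
mask  = [0.0, 0.0, 1.0]    # support protects only the ringing pixel
out = apply_support(orig, decon, mask)
print(out)  # third value pulled back to ~0.2; the others keep deconvolution
```

Unprotected pixels keep the full benefit of deconvolution, which is why a permissive star mask plus weak global deringing degrades the result so little.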

Below you can see the result of regularized Richardson-Lucy deconvolution, 30 iterations, with both types of deringing active. There are no ringing artifacts, and the deconvolution process works practically as if no deringing had been applied.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconWithDeringingSupport.jpg

Finally, below you can see what happens if we turn off regularization, which gives you an idea of the efficiency of our regularized deconvolution implementation.

http://forum-images.pixinsight.com/legacy/1.5.2-preview/DeconNoRegularization.jpg


« Last Edit: 2009 May 27 16:30:15 by Juan Conejero »
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline lucchett

  • PixInsight Old Hand
  • ****
  • Posts: 449
Re: PixInsight 1.5.2: Prerelease Information
« Reply #1 on: 2009 May 27 03:34:30 »
Hi Juan,
this is great news!
When can we expect the new 1.5.2 to be released?

Also, from the post above I understand that your suggestion is to apply deconvolution to the RGB image before any background correction or color bias correction. Am I right?

Thanks,
Andrea

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: PixInsight 1.5.2: Prerelease Information
« Reply #2 on: 2009 May 27 07:04:20 »
Quote from: lucchett
Hi Juan,
this is great news.
When can we expect the new 1.5.2 to be released?

Also, from the post above I understand that your suggestion is to apply deconvolution to the RGB image before any background correction or color bias correction. Am I right?

Thanks,
Andrea

I would think background correction is questionable, or a real no-no.
However, color bias correction and channel weighting (scaling) need to be done in this example.
Basically, you're deconvolving the L product, so I would think this is necessary first.

I assume that DBE is somewhat non-linear. I think background normalization and color calibration are linear.
Someone correct me if I am wrong here.


Hey Juan, can you have this ready in about 10 hours? I was just about to do a decon on my latest image ;D

Max

Offline Philip de Louraille

  • PixInsight Addict
  • ***
  • Posts: 289
Re: PixInsight 1.5.2: Prerelease Information
« Reply #3 on: 2009 May 27 12:13:35 »
This is a great question and, of course, it leads to more!
Being a "casual" astrophotographer, and not having taken any signal processing courses in the last, erh, 25 years, I am taken a bit aback by all these great functions/algorithms.
I am a bit lost as to which ones I should do first, whether some are always must-do and others "nice to do when..."
It would be great to have a flowchart indicating preconditions. For instance, that last question about deconvolution is right on the mark: it looks like deconvolution can be done after image calibration (dark-flat-bias) but not after a non-linear procedure has been applied. And the background correction is kind of a flat correction, is it not? So, yes, DBE is probably not linear, but I think its application is trying to make the image more "linear", so ...
Anyways, you can see my "amateurish" problem.
Philip de Louraille

Offline Fco. Bosch

  • Member
  • *
  • Posts: 66
Re: PixInsight 1.5.2: Prerelease Information
« Reply #4 on: 2009 May 27 14:40:39 »
QUOD ERAT DEMONSTRANDUM ...
Fco. Bosch

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PixInsight 1.5.2: Prerelease Information
« Reply #5 on: 2009 May 27 15:16:41 »
Hi Andrea,

Quote
when can we expect the new 1.5.2 to be released?

I still have to fight a couple of problems with the Windows version. I think it should be released during the weekend, if nothing strange happens.

Quote
I Understand that your suggestion is to apply deconvolution to the RGB image before any background correction or color bias correction, am I right?

Not exactly. As long as only linear transformations are applied, there is no problem in using deconvolution. The important point is that deconvolution only makes sense for linear images: if the image is nonlinear, then no PSF can be valid for all pixels simultaneously.

I prefer to apply gradient correction (with DBE or ABE, as appropriate) if necessary, then BackgroundNeutralization and ColorCalibration. The latter two processes are purely linear transformations. Subtracting a background model generated with DBE or ABE, if the model is correct, is also a linear operation. After these procedures, Deconvolution can be applied to the luminance of an RGB image. As I have explained in the example, it is very important to use a linear RGB working space and check the "linear" option of Deconvolution. In this way the CIE Y component will be processed. CIE Y is given by the following expression:

Y = kR*R^g + kG*G^g + kB*B^g

where R, G and B are the red, green and blue pixel components, kR, kG and kB are the corresponding luminance weights, and g is gamma. If gamma is equal to 1, then CIE Y is a linear function of the RGB components, so we can deconvolve it.
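The linearity requirement can be checked numerically. Below is a small sketch of my own (uniform weights assumed, as in the example): with g = 1, doubling the RGB components exactly doubles Y; with g ≠ 1 it does not, which is why deconvolving a nonlinear luminance is not physically meaningful.

```python
def cie_y(r, g, b, k=(1/3, 1/3, 1/3), gamma=1.0):
    """Y = kR*R^gamma + kG*G^gamma + kB*B^gamma (uniform weights assumed)."""
    return k[0] * r**gamma + k[1] * g**gamma + k[2] * b**gamma

# gamma = 1: Y is a linear function, so scaling the pixel scales Y equally.
print(abs(cie_y(0.4, 0.2, 0.6) * 2 - cie_y(0.8, 0.4, 1.2)) < 1e-12)  # True

# gamma = 2.2: linearity breaks -- scaling the pixel does NOT scale Y
# by the same factor, so no single PSF can describe every pixel at once.
print(abs(cie_y(0.4, 0.2, 0.6, gamma=2.2) * 2
          - cie_y(0.8, 0.4, 1.2, gamma=2.2)) < 1e-12)  # False
```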
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PixInsight 1.5.2: Prerelease Information
« Reply #6 on: 2009 May 27 16:03:06 »
Hi Max,

Quote
Hey Juan, can you have this ready in about 10 hours? I was just about to do a decon on my latest image ;D

Not in 10 hours indeed, but tomorrow morning (I live at UTC+2) I'll be glad to upload a working Deconvolution module, which all of you can use with the current 1.5 version :)

Quote
I assume that DBE is somewhat non linear.

It depends on how it is used. Let's say that we have a raw image that has been accurately flat-fielded. This image is linear and is not affected by multiplicative illumination variations (such as vignetting). Then, if the image has a gradient (due to light pollution, for example), it is an additive variation, so we have:

I' = I + G

where I' is our calibrated linear image, I is the uniformly illuminated image (which we want to obtain), and G is the additive gradient. If we are able to generate an accurate background model G' (by interpolation with DBE for example), then G' will represent G very closely, so we can subtract it:

I ≈ I' - G'

and the resulting image will be linear (within some error bounds), since both I and G have been acquired simultaneously with the same linear sensor. So the key to gradient correction with DBE (or ABE) lies in the accuracy of the background modeling process.
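A toy numerical version of this argument (my own sketch; DBE and ABE build G' by interpolating sampled background pixels, whereas here the model is simply a perfect copy of the injected gradient):

```python
# Additive-gradient model: I' = I + G, recovered by I ≈ I' - G'.
n = 8
flat_sky = [0.10] * n                               # I: uniform signal
gradient = [0.02 * x / (n - 1) for x in range(n)]   # G: light-pollution ramp
observed = [i + g for i, g in zip(flat_sky, gradient)]   # I' = I + G

model = gradient                     # G': a hypothetically perfect model
corrected = [ip - gp for ip, gp in zip(observed, model)]  # I ≈ I' - G'

# Subtracting an accurate model recovers the linear image.
print(max(abs(c - f) for c, f in zip(corrected, flat_sky)))  # ~0.0
```

With an inaccurate model the residual G - G' stays in the image, which is why the whole method stands or falls with the background modeling step.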
« Last Edit: 2009 May 27 16:21:13 by Juan Conejero »
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: PixInsight 1.5.2: Prerelease Information
« Reply #7 on: 2009 May 27 21:42:43 »
Quote from: Philip de Louraille
This is a great question and, of course, it leads to more!
Being a "casual" astrophotographer, and not having taken any signal processing courses in the last, erh, 25 years, I am taken a bit aback by all these great functions/algorithms.
I am a bit lost as to which ones I should do first, whether some are always must-do and others "nice to do when..."
It would be great to have a flowchart indicating preconditions. For instance, that last question about deconvolution is right on the mark: it looks like it can be done after image calibration (dark-flat-bias) but not after a non-linear procedure has been applied. And the background correction is kind of a flat correction, is it not? So, yes DBE is probably not linear but I think its application is trying to make the image more "linear" so ...
Anyways, you can see my "amateurish" problem.

Good questions, Philip.
First, you have to create calibrated sub-exposures.
Next comes stacking: integration of the sub-exposures into a single exposure. Usually the images are registered, normalized, and data rejection is applied to create your exposure.

At this point you have a linear image.

I generally will combine my color channels together at this point.
If there are gradients, do a DBE (background normalization is an automatic result of this process).
If you don't have gradients, then you would just do background normalization.
I generally would do color adjustment/calibration at this point.
If you know your filter weights you might do this when you first combine the channels and just tweak here.

These processes are linear, since you're basically adding, subtracting, or multiplying the whole image. This is in contrast to non-linear functions, which are often referred to as stretching. These include curves, DDP, gamma, etc. Most of the time the shape of the histogram changes with a non-linear function.

We are almost always going to apply non-linear functions to deep-sky images. This is where much of the cool stuff lives in the image processing world.

We always start linear and end up non-linear. Otherwise, stuff looks dark and flat.
There are a few operations that are necessary (or at least recommended) to apply to a linear image. These include deconvolution and wavelets.
You can apply these functions to non-linear (stretched) images. Some people do this and occasionally get nice results.
However, there is a much higher risk of artifacts or increased noise in the final product.
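The distinction can be sketched in a few lines (a toy illustration, not PixInsight code): linear operations preserve the ratios between pixel values that deconvolution and wavelets rely on; a stretch does not.

```python
pixels = [0.01, 0.04, 0.16, 0.64]

scaled    = [2.0 * p for p in pixels]   # linear: multiply the whole image
stretched = [p ** 0.5 for p in pixels]  # nonlinear: a square-root "stretch"

print(pixels[1] / pixels[0])        # 4.0 -> original ratio
print(scaled[1] / scaled[0])        # 4.0 -> ratio preserved by scaling
print(stretched[1] / stretched[0])  # ~2.0 -> ratio compressed by the stretch
```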

Remember, there are many people out there producing excellent images who do some of these steps in a slightly different order. There is more than one way to do well, so naturally you will see some variety in recommendations. It starts making sense in time.

I hope this helps

Max

Offline Philip de Louraille

  • PixInsight Addict
  • ***
  • Posts: 289
Re: PixInsight 1.5.2: Prerelease Information
« Reply #8 on: 2009 May 27 22:13:07 »
Thanks Max!
Philip de Louraille

Offline Fco. Bosch

  • Member
  • *
  • Posts: 66
Re: PixInsight 1.5.2: Prerelease Information
« Reply #9 on: 2009 May 28 01:05:43 »
Juan wrote:

Quote
As shown on the figure, our first step is to define a linear, uniform RGB working space (RGBWS). A linear RGBWS defines a gamma value of 1.0

Does that also apply to a DSLR image, or only to CCD? I can't modify my luminance coefficients with RGBWorkingSpace.

Fco Bosch
Fco. Bosch

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: PixInsight 1.5.2: Prerelease Information
« Reply #10 on: 2009 May 28 10:57:45 »
Hi all,

A little hint on this discussion.

Regarding when to apply each technique: we must keep things simple. I would divide the processing of an image into three sections.

- In the first section, we must end with an image where all the signal is representative of the subject we are photographing. To me (or, at least, in my work), the act of photographing any kind of subject is a matter of photon counting. Surely most photographers will think this is a silly statement, but it is my personal point of view, and it tells me how to use my camera and how to process an image. This is extremely important to keep in mind during the first section. Here, we isolate the signal of the subject from other signal and noise sources.

This section includes HDR combination and blooming suppression, image calibration, background gradient suppression, and removal of other kinds of artifacts (cosmic rays, etc.). These processes must be done entirely with linear data, as we want the pure light of the subject.

- A second section of linear transformations to process the signal of the subject, through algorithms that only work with linear data. This includes techniques such as deconvolution or color calibration, and wavelets in some cases.

- A third section of non-linear transformations, such as dynamic range compression or my multi-way wavelet techniques. It's extremely important to keep in mind that, once we apply non-linear transformations, it will be very difficult to go back. This is why it is so important to divide the processing steps into linear vs. non-linear.


Well... hope this helps!
Regards,
Vicent.
« Last Edit: 2009 May 28 12:33:48 by vicent_peris »

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
Re: PixInsight 1.5.2: Prerelease Information
« Reply #11 on: 2009 May 28 11:19:26 »
Hi Vicent,

Quote
the act of photographing any kind of subject is a matter of photon counting

Assuming the above is applied to astrophotography, I couldn't have put it better myself. Your explanation of the three stages is the best I've seen. It really captures and clarifies everything. I may well print it out and put it above my monitor for constant reference.

Thank you!

Cheers
         Simon

Offline Jordi Gallego

  • PixInsight Addict
  • ***
  • Posts: 279
Re: PixInsight 1.5.2: Prerelease Information
« Reply #12 on: 2009 May 28 12:00:05 »
Quote from: Juan Conejero
Not in 10 hours indeed, but tomorrow morning (I live at UTC+2) I'll be glad to upload a working Deconvolution module, which all of you can use with the current 1.5 version :)

Great news Juan  :D :D :D :D :D!!

Regards
Jordi
Jordi Gallego
www.astrophoto.es

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PixInsight 1.5.2: Prerelease Information
« Reply #13 on: 2009 May 28 13:15:53 »
A bit later than promised but it's here:

Linux 64-bit:
http://pixinsight.com/export/Deconvolution-x11-x86_64-20090528.tar.gz

Windows 32-bit
http://pixinsight.com/export/Deconvolution-win-x86-20090528.zip

Windows 64-bit:
http://pixinsight.com/export/Deconvolution-win-x86_64-20090528.zip

Sorry, Mac OS X and Linux32 coming soon (tomorrow).

To use these modules, just decompress and copy them to your bin installation folder (Windows: C:\PCL\bin or C:\PCL64\bin by default). Of course, you must replace the previous deconvolution module with the new one.

Enjoy! ;)
« Last Edit: 2009 May 28 13:17:49 by Juan Conejero »
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PixInsight 1.5.2: Prerelease Information
« Reply #14 on: 2009 May 29 00:00:24 »
Hi Francisco,

Sorry, I didn't see your post yesterday.

Quote from: Fco. Bosch
Quote
As shown on the figure, our first step is to define a linear, uniform RGB working space (RGBWS). A linear RGBWS defines a gamma value of 1.0

That too applies to a DSLR image, or only to CCD? I can't modify my luminance coefficients with RGBWorkingSpace.

Yes, of course. Everything applies to DSLR images too, since CMOS sensors are also linear.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/