Author Topic: Deconv deringing wrecks images ?  (Read 28603 times)

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • View Profile
    • http://www.carpephoton.com
Deconv deringing wrecks images ?
« on: 2011 January 09 12:31:59 »

Hi,

I've never been able to get good deconvolution results so I figured I'd give it another try. The results are atrocious again. I'm working on a linear image that had DBE applied. My goal is to sharpen things up a bit before stretching. At least I seem to remember deconv should be applied to linear images. The help file doesn't say :) I know people use CCDStack to deconv before proceeding to PS but maybe they do DDP before, I'm not sure.

In any case, with just two iterations of R-L I get a very slight sharpening effect but the stars get ringed. Now I know I could build a star mask but this is what deringing is for, right? Enabling deringing really does a number on the image. Clearly I'm not understanding how this should be done. If someone could lead the way that would be great.

I can't upload screenshots of sufficient size to the forum so take a look in this picasa album:

http://picasaweb.google.com/sander.pool/PIDeconvDeringing#
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline Philip de Louraille

  • PixInsight Addict
  • ***
  • Posts: 289
    • View Profile
Re: Deconv deringing wrecks images ?
« Reply #1 on: 2011 January 09 13:10:07 »
I have to admit I get the same type of results whenever I try to deconvolve starry fields. The process works a lot better on the Moon and planets. "Deringing" seems to do the exact opposite of what its name implies. Noise and defects seem to be magnified by selecting deringing, just like the original quantum fluctuations prior to Inflation!
Philip de Louraille

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 987
    • View Profile
    • http://www.astrofoto.es/
Re: Deconv deringing wrecks images ?
« Reply #2 on: 2011 January 09 15:05:47 »
Hi,

That happens because the amount of deringing is too high. In linear images the amount usually must be very low. Expand the deringing section and start with the lowest value you can, then raise it gradually until the ringing starts to disappear.

Bear in mind that you will need a star mask for the brighter stars in any case.


Regards,
Vicent.

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • View Profile
    • http://www.carpephoton.com
Re: Deconv deringing wrecks images ?
« Reply #3 on: 2011 January 09 15:09:37 »
I played with the deringing settings and the effect seemed pretty terrible in all cases. I will try to reduce it and see what happens.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 6869
    • View Profile
    • http://pixinsight.com/
Re: Deconv deringing wrecks images ?
« Reply #4 on: 2011 January 11 02:59:11 »
Hi Sander,

If you can upload the image or a significant portion of it, I'll be glad to give it a try. It's all a matter of tuning parameters; the deringing routines work very well but require some practice.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2173
  • Join the dark side... we have cookies
    • View Profile
    • http://www.astrophoto.cl
Re: Deconv deringing wrecks images ?
« Reply #5 on: 2011 January 11 12:28:58 »
Yeah, the deringing parameters are tricky. Also, I've found that they may vary between smaller previews and the real image, so I just work with full-size previews.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • View Profile
    • http://www.carpephoton.com
Re: Deconv deringing wrecks images ?
« Reply #6 on: 2011 January 13 18:29:31 »
Hi,

here's a crop of my image in 32b TIFF:

http://dl.dropbox.com/u/18664037/NGC2264-HS-dbe-crop.tif

It had DBE applied but is still linear. I suppose I should have done a CC on it first, but that shouldn't affect the deconvolution process.

Thanks for any additional pointers you can provide.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 507
    • View Profile
    • DeepSkyColors
Re: Deconv deringing wrecks images ?
« Reply #7 on: 2011 January 14 01:56:01 »
Quote
I know people use CCDStack to deconv before proceeding to PS but maybe they do DDP before, I'm not sure.

Most folks who do deconvolution with CCDStack do so *after* some DDP. Go tell them that's the wrong thing to do...

Quote
In any case, with just two iterations of R-L I get a very slight sharpening effect but the stars get ringed. Now I know I could build a star mask but this is what deringing is for, right? Enabling deringing really does a number on the image. Clearly I'm not understanding how this should be done. If someone could lead the way that would be great.

With or without deringing, I believe using a mask is still warranted; just not a star mask, unless you don't want your stars deconvolved as well. Deconvolution needs to be applied more strongly in areas with high SNR, and lightly (or not at all) in areas with low SNR. This is similar, though not identical, to the inverse problem of noise reduction: strong in low-SNR areas, light or absent in high-SNR areas. So, just as with a noise reduction tool, I build a mask that does exactly that, only inverted: it protects the areas with low SNR. Then I apply the deconvolution through that mask, and the mask takes care of applying a "gradual deconvolution", stronger as the SNR improves (IMHO a much more efficient, fast and accurate solution than the "multi-strength decon" technique used by other folks).
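For readers who want to experiment with this idea outside PixInsight, here is a minimal numpy sketch of the mask-weighted blend described above. The mask construction (a blurred, normalized luminance) and all parameter values are my own illustrative assumptions, not anything prescribed in this thread:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def snr_mask(lum, blur_sigma=8.0):
    """A smooth, normalized luminance mask: ~1 over bright
    (high-SNR) regions, ~0 over the faint background."""
    m = gaussian_filter(lum, blur_sigma)
    m -= m.min()
    peak = m.max()
    return m / peak if peak > 0 else m

def masked_blend(original, deconvolved, mask):
    """Gradual deconvolution: full strength where mask ~ 1,
    untouched where mask ~ 0."""
    return mask * deconvolved + (1.0 - mask) * original

# Toy demo: a bright blob on a faint background.
y, x = np.mgrid[0:64, 0:64]
img = 0.05 + 0.9 * np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
sharpened = np.clip(img * 1.2, 0.0, 1.0)   # stand-in for a deconvolved image
mask = snr_mask(img)
out = masked_blend(img, sharpened, mask)
```

This mirrors what applying a process through a mask does in PixInsight: the process runs everywhere, but it only reaches full strength where the mask is bright.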


Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 6869
    • View Profile
    • http://pixinsight.com/
Re: Deconv deringing wrecks images ?
« Reply #8 on: 2011 January 14 03:49:36 »
Hi Sander,

I've made a quick deconvolution try with your cropped image, with good results.

The first step is defining a linear RGB working space. This is mandatory to deconvolve the luminance of a linear RGB image. The next screenshot shows the relevant parameters in the RGBWorkingSpace tool.


As you can see, I have also defined a uniform RGB working space (the same luminance weights for red, green and blue). This avoids giving any specific color more relevance in the luminance computation. As the image contains a variety of emission and reflection nebulae, this choice seems reasonable.

The second step is building a star mask, which we'll use later as a deringing support to protect the brightest stars during deconvolution. Note that we want to protect only the brightest stars; Deconvolution's global deringing is very efficient at protecting the rest of the small-scale, high-contrast objects.

In this case, I've just applied the StarMask tool with default parameters, as shown in the next screenshot.


Now let's start deconvolving the image. As always, the ScreenTransferFunction tool must be used to make the linear image visible without altering it. I've just applied the Auto Stretch feature. This is a deconvolved preview shown at 3:1 magnification:


Your image does not have a very high SNR, so the benefits of deconvolution cannot be spectacular. However, the achieved star reduction and the nice sharpening effect on the nebulae, especially on dark structures, are well worth the effort in my opinion.

The standard deviation of the Gaussian PSF is 1.1 pixels. I derived this value by measuring the smallest stars in your image, and after some experimentation. I think it is very close to the true PSF of your image (I haven't measured it rigorously; this is a quick test).
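As an aside, the Gaussian PSF model Juan mentions is easy to write down numerically. This is only an illustration of the assumed blur model (σ = 1.1 px), not PixInsight code:

```python
import numpy as np

def gaussian_psf(sigma=1.1, radius=4):
    """Normalized 2-D Gaussian PSF: the blur model that
    deconvolution tries to invert."""
    ax = np.arange(-radius, radius + 1, dtype=float)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

psf = gaussian_psf()   # sigma = 1.1 px, as estimated above
```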

Now let's see what happens if we disable some features of the deconvolution algorithm. This is the best way to understand how they work. First we'll disable wavelet noise regularization:


Pretty ugly, isn't it? Regularization works by applying a wavelet-based noise analysis and reduction routine in tandem with deconvolution, at each iteration. It is very efficient, as you can see by comparing the previous two screenshots. In fact, our implementation of wavelet-regularized deconvolution easily outperforms other implementations, including the classical Richardson-Lucy and Van Cittert algorithms, as well as Maximum Entropy. The original regularization algorithms were created by Jean-Luc Starck and Fionn Murtagh (http://www.multiresolution.com/cupbook.html); I have slightly modified and adapted them to our working tools. Our implementation has the advantage that you can fine-tune the regularization parameters very easily.
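For the curious, the classical Richardson-Lucy update that these regularized variants build on fits in a few lines. The sketch below is the textbook algorithm, not PixInsight's implementation, and the optional Gaussian smoothing between iterations is only a crude stand-in for the Starck-Murtagh wavelet regularization described above:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def richardson_lucy(image, psf, iterations=10, reg_sigma=0.0):
    """Textbook Richardson-Lucy deconvolution. A positive reg_sigma
    applies light Gaussian smoothing after each iteration, as a
    crude stand-in for wavelet regularization."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
        if reg_sigma > 0.0:
            estimate = gaussian_filter(estimate, reg_sigma)
    return estimate

# Demo: blur a point source with a Gaussian PSF, then deconvolve.
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 1.5 ** 2))
psf /= psf.sum()
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=20)
```

With reg_sigma=0 you get plain R-L, which amplifies noise in the way the screenshot above illustrates; a small positive value tames it somewhat, at the cost of resolution.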

Let's continue with our 'disabling things tour'. Now we'll re-enable regularization but disable deringing:


No less ugly than before :) This is the Gibbs phenomenon in all its splendor!

We have two deringing algorithms implemented in our Deconvolution tool: local and global deringing. Global deringing, as its name suggests, is applied equally to the whole image. This is an incredibly efficient algorithm created by Vicent Peris; all of us should be grateful to him for having devised such a nice thing.

In many cases global deringing is sufficient. However, when there are bright stars and other large, very bright features, we often need an increased level of protection. This happens because large stars cannot be deconvolved in the same way as the rest of the image. We address this problem with the local deringing algorithm. Local deringing is driven by a special image (usually a star mask) that is used to change the way deconvolution works at each iteration: the local deringing support. Local deringing support images are very easy to build: a relatively rough star mask covering the brightest objects usually suffices.
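To make the division of labor concrete, here is a toy numpy sketch of the two roles. Neither function is PixInsight's actual algorithm (those are not published in this thread); they only illustrate the idea of a global undershoot limit versus a mask-driven local protection:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def limit_undershoot(deconvolved, original, amount=0.05, sigma=2.0):
    """Illustration only, NOT PixInsight's global deringing: clamp
    dark ringing so the result never dips more than `amount` below
    a smoothed version of the original."""
    floor = gaussian_filter(original, sigma) - amount
    return np.maximum(deconvolved, floor)

def apply_support(deconvolved, original, support):
    """Illustration of the local deringing idea: inside the support
    (a star mask with values in [0, 1]), pull the result back
    toward the original."""
    return support * original + (1.0 - support) * deconvolved

# Toy demo: a flat field with one dark ringing artifact.
orig = np.full((16, 16), 0.5)
dec = orig.copy()
dec[8, 8] = 0.30                               # ringing undershoot
clamped = limit_undershoot(dec, orig)          # global: dip is limited
support = np.zeros_like(orig)
support[8, 8] = 1.0                            # "star mask" over the artifact
protected = apply_support(dec, orig, support)  # local: fully restored
```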

With the combination of local and global deringing, we can manage virtually any ringing problem in deconvolution.

To better illustrate how both deringing algorithms work, we'll see what happens if we disable local deringing in this case:


Note the ringing problems that persist around the brightest stars.

Finally, let's re-enable everything. This is the result of Deconvolution with regularization and local+global deringing for the same preview:


And for further comparison, this is the same preview from the original image, before deconvolution:


I think that even if your image doesn't have a very high SNR, deconvolution with the appropriate control features provides a very nice result.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Ginge

  • PixInsight Addict
  • ***
  • Posts: 215
    • View Profile
Re: Deconv deringing wrecks images ?
« Reply #9 on: 2011 January 14 04:13:32 »
Hi again!
I tried to ask this question in another thread but never got an answer, so since this thread has touched on the same topic I'll try again:
Why exactly is the use of deconvolution on non-linear material not recommended?

Ginge

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
    • View Profile
Re: Deconv deringing wrecks images ?
« Reply #10 on: 2011 January 14 04:20:22 »
I really don't understand what an RGB working space is and why it's needed... Is this different from a normal RGB image? Can someone explain?

If I load in an Autosave.tif 32bit colour image from DSS is that already in an RGB working space, or do I need to go through the above process to put it into one?

And if it's "mandatory" for the deconvolution process to work in an RGB working space, why doesn't the Deconvolution process just put the image into this space, do its stuff, and then convert back behind the scenes?

Or have I got the wrong end of the stick?

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 987
    • View Profile
    • http://www.astrofoto.es/
Re: Deconv deringing wrecks images ?
« Reply #11 on: 2011 January 14 04:28:47 »
Quote
Hi again!
I tried to ask this question in another thread but never got an answer, so since this thread has touched on the same topic I'll try again:
Why exactly is the use of deconvolution on non-linear material not recommended?

Ginge

It is not recommended because deconvolution uses a model of the PSF of your image to convert it into a perfect, unblurred dot. In a linear image, the PSF doesn't vary with object brightness (roughly... in the real world, the dimmer stars are tighter because the outer areas of the Gaussian bell are lost in the noise).

But in a stretched image, the PSF varies enormously. Think of two stars: the first very bright and the second very dim. You cannot give the algorithm a single correct PSF to deconvolve with, because the PSF changes dramatically depending on the brightness of the object.

In practice, although it is much less rigorous, you can apply deconvolution to a stretched image, but you will get better results with a linear image.
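Vicent's point is easy to verify numerically. In the sketch below, two Gaussian stars share the same σ, so their half-maximum widths are identical in the linear image; after a generic nonlinear stretch (an arcsinh stretch here, chosen only as an example) the bright star's profile becomes much wider than the faint one's, so no single PSF can describe both:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
profile = np.exp(-x ** 2 / (2.0 * 1.5 ** 2))   # one PSF shape for both stars
bright = 0.9 * profile                          # a very bright star
faint = 0.01 * profile                          # a very dim star

def stretch(v, beta=0.01):
    """A generic nonlinear (arcsinh) stretch mapping [0, 1] -> [0, 1];
    chosen only as an example of a strong stretch."""
    return np.arcsinh(v / beta) / np.arcsinh(1.0 / beta)

def fwhm(p):
    """Full width at half maximum, measured on the sample grid."""
    above = x[p >= p.max() / 2.0]
    return above[-1] - above[0]

w_lin_bright, w_lin_faint = fwhm(bright), fwhm(faint)            # identical
w_str_bright, w_str_faint = fwhm(stretch(bright)), fwhm(stretch(faint))
```

On this grid both stars measure the same FWHM before the stretch, while afterwards the bright star's profile is roughly twice as wide as the faint one's: exactly the situation where no single PSF model can be correct.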

Regards,
Vicent

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • View Profile
    • http://www.carpephoton.com
Re: Deconv deringing wrecks images ?
« Reply #12 on: 2011 January 14 06:45:38 »
Thanks Juan, I will follow the steps for myself and see if I can recreate the effect.

As far as the SNR goes, this image is pretty typical for what amateurs can achieve with a budget of only a few thousand dollars. In fact it's probably lower noise than most 'low effort' images. This is only a few hours of exposure but with a C11 at F/2 so it brings in quite a bit of light. We can't all use professional instruments and we can't all spend several nights on an image.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 6869
    • View Profile
    • http://pixinsight.com/
Re: Deconv deringing wrecks images ?
« Reply #13 on: 2011 January 14 07:31:58 »
Hi Simon,

Quote
And if its "mandatory" for the deconvolution process to work in an RGB working space, why doesn't the Deconvolution process just put the image into this space, do its stuff, and then convert back behind the scenes?

Because the process cannot know which RGB space your image is referred to.

An RGB working space (RGBWS) is not a process; it is just a declaration that informs the whole platform about the true meaning of pixel values in the image, in the context of luminance/chrominance separations. It is the entire responsibility of the user to specify the correct RGBWS for each image.

To better understand how an RGBWS works, we should review the whole concept first. An RGBWS is composed of the following elements:

- A vector of luminance coefficients: Y = {YR, YG, YB}. The components of Y, or luminance coefficients, work as weights to tell PixInsight how much of each color must be taken from a color pixel to compute its luminance.

- Two vectors of chromaticity coordinates: x = {xR, xG, xB} and y = {yR, yG, yB}. These are the coordinates of the red, green and blue primaries on the CIE chromaticity diagram. In simple words, these coordinates define the colorants of the color space.

- A reference white. The RGB working spaces are always relative to the standard D50 illuminant in PixInsight (when a color space is not natively referred to D50, as happens with sRGB, its components are transformed with Bradford's chromatic adaptation algorithm).

- Gamma. This is an exponent to which each individual RGB component must be raised in order to linearize it. In other words, the gamma of an RGBWS allows PixInsight to compute linear RGB components (in theory) for all images whose pixels are referred to the RGBWS in question. Obviously, gamma=1 for a linear image.

For most practical image processing purposes, the colorants of the RGBWS are not relevant (for example, they don't affect luminance/chrominance separations). So in practice we have only two relevant items: luminance coefficients and gamma.

Luminance coefficients can be varied with the purpose of maximizing information representation in the luminance. We usually set all coefficients equal to signify that no color has more relevance in terms of information contents. For some images, we can confer more relevance to red and/or blue than to green. For example, an image that is strongly dominated by emission nebulosity can benefit from a higher red luminance coefficient.

The gamma must be set to characterize the non-linearity of the RGB components. When an image is linear, we must set gamma=1, or otherwise we'd be cheating and all luminance computations would be incorrect. For nonlinear images, the default is the sRGB gamma function (a piecewise function approximately equivalent to gamma=2.2). In theory, the luminance of a nonlinear image could be deconvolved as linear, if we could characterize its non-linearity as a simple gamma function, provided that no saturation has happened. However, this is not usually the case in the real world; for example, how could we characterize the nonlinearity of an image after several curves, HDRWT and color saturation transformations?
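Reduced to code, the two practically relevant RGBWS items amount to very little. This sketch (my own illustration, not PixInsight's implementation) linearizes each channel with the gamma exponent as defined above, then takes the normalized weighted sum with the luminance coefficients:

```python
import numpy as np

def luminance(rgb, coeffs=(1.0, 1.0, 1.0), gamma=1.0):
    """Luminance under a simple RGB working space: linearize each
    channel (v ** gamma, following the definition above), then take
    the normalized weighted sum of R, G and B."""
    w = np.asarray(coeffs, dtype=float)
    w = w / w.sum()
    lin = np.asarray(rgb, dtype=float) ** gamma
    return lin @ w

pixel = np.array([0.25, 0.50, 0.75])
y_linear = luminance(pixel)             # gamma = 1: the linear-image case
y_nonlin = luminance(pixel, gamma=2.2)  # simple gamma-2.2 approximation
```

Note that the true sRGB curve is piecewise (a linear toe plus a 2.4 exponent), so gamma=2.2 is only the customary approximation mentioned above; a real implementation would also handle the chromaticity and white-point items, which, as noted earlier, don't affect luminance/chrominance separations.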

Quote
If I load in an Autosave.tif 32bit colour image from DSS is that already in an RGB working space, or do I need to go through the above process to put it into one?

The Autosave.tif image is a linear RGB color image. So the first thing you have to do, if you want to process the image while it is still linear, is to tell PixInsight that the image is linear. You do so by setting gamma=1 in RGBWorkingSpace and applying the process to the image.

Later, after applying the initial nonlinear stretch (with HistogramTransformation for example), you should return to a nonlinear RGBWS (gamma > 1, normally). In theory you should use the value of gamma that best represents the nonlinear transformations you have applied. In practice, however, unless you must follow strict colorimetric criteria, the exact value of gamma is not really critical, so you can continue working in the default sRGB RGBWS for example. This is because when the image is nonlinear, we work with luminosity instead of luminance, and luminosity (or the CIE L* component) has a perceptual meaning. Remember that RGBWS are not related to color management in PixInsight; color spaces for color management and RGBWS are separate entities and pursue different goals.

Anyway, the whole concept of the RGBWS in PixInsight is currently under revision. There may be some changes and/or new related tools in the coming months, aimed at making things easier for the user and the whole platform more robust and rigorous at the same time.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
    • View Profile
Re: Deconv deringing wrecks images ?
« Reply #14 on: 2011 January 14 07:59:27 »
Wow...Thanks Juan...that's a lot of information.  8)

Quote
The Autosave.tif image is a linear RGB color image. So the first thing you have to do, if you want to process the image while it is still linear, is to tell PixInsight that the image is linear. You do so by setting gamma=1 in RGBWorkingSpace and applying the process to the image.

Later, after applying the initial nonlinear stretch (with HistogramTransformation for example), you should return to a nonlinear RGBWS (gamma > 1, normally).

I must admit that I have never done this (am I the only one???). I will try it tonight to see what effect it has. I assume I will see no immediate difference in the image but that it will make processes applied during the linear stage (mainly deconvolution and colour balancing?) work better?