Author Topic: PI Workflow question  (Read 13181 times)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PI Workflow question
« Reply #15 on: 2009 June 16 08:04:59 »
Hi Simon,

ACDNR does not work with linear images. The same is true for GREYCstoration.

The new noise reduction algorithm that I implemented in ATrousWaveletTransform does work very well on linear data.

As a general methodology, I don't like the concept of an image processing workflow too much. Each image poses different problems. What is important is to know what the different tools and techniques do, why we apply them, and what state the data must be in for us to apply them. The answers to these questions are usually quite complex and depend too much on critical characteristics of the data to fit well within the space of fixed workflows.

Having said that, IMO there are some general guidelines:

- Image calibration must be very accurate. In particular, flat fielding is very important. If flat fielding is poor and the image also has additive illumination variations such as sky gradients (almost all images have them today), then there is no way to correct the remaining multiplicative variations.

- Always work with at least 32-bit floating point data.

- When necessary, background correction (DBE, ABE) must be applied first. This is because if the illumination isn't constant (flat) throughout the whole image, no (useful) statistical model can describe the image. For example, in presence of light pollution gradients, an image cannot be stretched because its histogram doesn't describe the image in terms of background/objects.

- In the case of color images, I'd neutralize the background (BackgroundNeutralization), even after background correction. If the background isn't neutral, no color calibration strategy will work easily. The reason is that in order to calibrate color following some solid criteria, the contributions of the background to each color channel should be the same (neutral). This is necessary because the limits between the background (=noise) and the objects (=signal) are uncertain by definition. If your image has no free background areas (or almost none), then you're in trouble. :) (See the sketch after this list.)

- While the image is still linear, I'd decide whether to apply deconvolution or wavelet-based edge enhancement, or maybe none of these. Deconvolution should only be applied when the signal to noise ratio is very high; otherwise it is a tremendous waste of time and a quick path to destroy the data. Wavelet noise reduction can be applied here, perhaps at the same time as we enhance small-scale structures. Be prepared to build masks in this phase, especially star masks to protect stars and other bright objects.

- A nonlinear stretch should come here. This must be implemented with HistogramTransformation. HT is a high-precision tool that allows you to implement an aggressive stretch in a single operation. Please try to avoid a cascade of histogram transformations ad infinitum, which may be necessary with other applications that offer poor implementations, but not with PixInsight.

- Once the image is nonlinear, consider HDRWaveletTransform. This is the hardcore part.

- Noise reduction comes here. ACDNR is a purely isotropic noise reduction algorithm. Isotropy is a very important property of all algorithms applied to process astronomical images, especially deep-sky images.

- Color saturation, along with other curves if necessary, should be applied after noise reduction. Otherwise you'll boost chrominance noise to a "beyond repair" point. Masks are your best friends here.

- Additional noise reduction may be necessary here. You may consider GREYCstoration as a final "polisher", but be careful because it is an anisotropic algorithm. GREYCstoration is IMO the best general-purpose noise reduction algorithm, as good as (and in most cases far better than) its commercial --and pricey-- competitors. I often laugh when I see people using some of these inferior solutions as plugins whose cost is comparable to a PixInsight license :)
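
To illustrate the BackgroundNeutralization point above, here is a minimal Python sketch of the underlying concept: measure each channel's median over a star-free background patch and rescale the channels so those medians agree. This is only a conceptual sketch, not PixInsight's actual implementation; the function name, the patch coordinates and the multiplicative strategy are assumptions for the example.

```python
import numpy as np

def neutralize_background(rgb, region):
    """Equalize per-channel background medians (multiplicative sketch).

    rgb    : float array of shape (H, W, 3), linear data scaled to [0, 1]
    region : (y0, y1, x0, x1) bounding box of a star-free background patch
    """
    y0, y1, x0, x1 = region
    # Median background level of each channel inside the reference patch
    medians = np.median(rgb[y0:y1, x0:x1, :], axis=(0, 1))
    target = medians.mean()
    # Scale each channel so its background median matches the common target
    out = rgb * (target / medians)
    return np.clip(out, 0.0, 1.0)

# Usage: a synthetic linear image with a green-tinted background
rng = np.random.default_rng(0)
img = rng.normal([0.010, 0.014, 0.009], 0.002, size=(256, 256, 3)).clip(0, 1)
balanced = neutralize_background(img, (0, 64, 0, 64))
print(np.median(balanced[:64, :64, :], axis=(0, 1)))  # ~equal medians
```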

This is a simplified sequence, and reflects only my opinions, which may be wrong / partial / inaccurate / inappropriate / ... / for any particular case.

Finally, this is a linear processing sequence. There are much more sophisticated strategies based on multiscale processing, which basically consists of separating image structures at different dimensional scales, with the help of wavelets, and processing them separately. This is Vicent Peris' specialty :) See an old but excellent example here.

Hope this adds more help than confusion.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
Re: PI Workflow question
« Reply #16 on: 2009 June 16 08:30:46 »
Hi Juan,

Wow, that's a very complete answer and a few revelations / order changes for me. One thing I don't understand...why is it OK to do wavelet based smoothing whilst linear, but not ACDNR?

Cheers
         Simon

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
Re: PI Workflow question
« Reply #17 on: 2009 June 16 09:17:06 »
I may be in the minority, but I do not like to do any noise reduction until the very end.  I think it is bad practice to throw out data (even noise<G>) too early with the exception of hot and cold pixels???
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline lucchett

  • PixInsight Old Hand
  • ****
  • Posts: 449
Re: PI Workflow question
« Reply #18 on: 2009 June 16 09:34:47 »
Quote
I may be in the minority, but I do not like to do any noise reduction until the very end.  I think it is bad practice to throw out data (even noise<G>) too early with the exception of hot and cold pixels???

I am in the same league on this.
The degree of noise I want in the final image really depends on the results I achieve at the end of the process.

For the same reason I prefer not to set the shadow clipping point to its final value (though very close) until the end of the "stretching phase".

With my mix of limited skills, time availability, and setup, this usually gives me better results.

Andrea

Offline Stephane Murphy

  • Member
  • *
  • Posts: 63
Re: PI Workflow question
« Reply #19 on: 2009 June 16 10:27:37 »
Hi Juan, first, thanks for sharing your thoughts on "imaging workflow".

I want to understand one of your statements: "Always work with at least 32-bit floating point data".

I am still using CCDStack for the initial calibration and master creation. I found that saving the master in 32-bit format causes a lot of problems due to the way CCDStack compresses the information: even if you save as 32 bits, the data get compressed into a 16-bit range.

So, when the files are opened by PI we are prompted to rescale the data between 0 and 1. So in CCDStack I decided to save as unsigned 16 bits.

This should be fine as long as the calculation below holds:

For example, for an ST-10:
range = 50 (frames) * 77k (full well) = 3.85 million e-
noise = sqrt(50 * 9^2) = 63.6 e-

dynamic range = 3.85M / 63.6 ≈ 60.5k

That range fits within 16 bits.
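
A quick numeric check of that arithmetic (a sketch; the 50 frames, 77k e- full well and 9 e- read noise figures are the ones quoted above):

```python
import math

frames = 50
full_well = 77_000   # e- per frame (ST-10 figure quoted above)
read_noise = 9.0     # e- RMS per frame

signal_range = frames * full_well                # 3,850,000 e-
stack_noise = math.sqrt(frames * read_noise**2)  # ~63.6 e-
dynamic_range = signal_range / stack_noise       # ~60,500 levels

print(f"{dynamic_range:.0f} levels; fits in 16 bits: {dynamic_range < 2**16}")
```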


Can you confirm the statement above?

Thank you
Stephane
Stephane Murphy
CDK 12.5 Planewave Instrument
Paramount ME
SBIG STL11000M
SBIG ST-402 Guider
Astrodon MOAG
Astrodon Filters

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: PI Workflow question
« Reply #20 on: 2009 June 16 11:25:31 »
Quote
Hi Juan, first, thanks for sharing your thoughts on "imaging workflow".

I want to understand one of your statements: "Always work with at least 32-bit floating point data".

I am still using CCDStack for the initial calibration and master creation. I found that saving the master in 32-bit format causes a lot of problems due to the way CCDStack compresses the information: even if you save as 32 bits, the data get compressed into a 16-bit range.

So, when the files are opened by PI we are prompted to rescale the data between 0 and 1. So in CCDStack I decided to save as unsigned 16 bits.

This should be fine as long as the calculation below holds:

For example, for an ST-10:
range = 50 (frames) * 77k (full well) = 3.85 million e-
noise = sqrt(50 * 9^2) = 63.6 e-

dynamic range = 3.85M / 63.6 ≈ 60.5k

That range fits within 16 bits.

Can you confirm the statement above?

Thank you
Stephane


Hi,

I will reply in more detail tonight... for the moment, Stephane, consider making a midtones adjustment of 0.25 to your data. This adjustment will compress your pixel values from 0.25 to 1 into the range from 0.5 to 1. Even with this minimal adjustment you need more than 65k discrete values of numerical precision; exactly 91k pixel values.

Imagine now that you have to make a midtones adjustment of 0.001.
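
A short script makes this concrete. Using the standard midtones transfer function MTF(m, x) = (m - 1)x / ((2m - 1)x - m), we can push every 16-bit code through the stretch, re-quantize to a 16-bit grid, and count how many distinct levels survive (a sketch; it measures level collisions, a slightly different accounting than the 91k figure above):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

codes = np.arange(65536, dtype=np.float64) / 65535.0  # every 16-bit level

for m in (0.25, 0.001):
    stretched = mtf(m, codes)
    # Re-quantize to a 16-bit integer grid and count surviving levels
    surviving = np.unique(np.round(stretched * 65535.0)).size
    print(f"m = {m}: {surviving} of 65536 levels remain distinct")
```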


Regards,
Vicent.

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PI Workflow question
« Reply #21 on: 2009 June 16 11:31:50 »
Hi Stephane,

I refer to post-processing. You've given an example where 16 bits are just enough to store the data acquired in 50 frames with your ST10 camera. That's perfect for storing the integrated raw data, but as soon as you process them, there will be rounding errors that will cause degradation if you continue working with 16 bits.

The severity of rounding errors is difficult to predict. It depends on the effective dynamic range of the data (your signal/noise calculations) and on the number and complexity of the applied processes. The more dynamic range you have, the more data will be lost (in relative terms) by rounding.

The IEEE 32-bit floating point format provides room and numerical accuracy for 10^6 to 10^7 discrete values, which is about 20 times the range of 16-bit integers; more than sufficient in most cases. The 32-bit unsigned integer format, also available in PixInsight, provides 2^32 discrete values, but has the disadvantage that most processes have to work with 64-bit floating point working images all the time, in order to guarantee that the full 32-bit range is always preserved. This is done automatically (and transparently) by all processing modules and the scripting runtime in PixInsight, but doubles or quadruples memory consumption. Finally, PixInsight fully supports 64-bit floating point images, where the available range is about 10^15 to 10^16 discrete values. 64-bit images are only necessary to work with huge high dynamic range compositions (something like the Sun and the interior of a closed box in the same frame), and are also necessary internally as working images, as I've explained.
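
The effect of accumulated rounding is easy to demonstrate. The sketch below repeats an exactly invertible brighten/darken pair, re-quantizing the intermediate result either to a 16-bit grid or to 32-bit floats, and measures the drift from the float64 reference (an assumed toy workflow; the gamma round trip merely stands in for any chain of processing steps):

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.random(1_000_000)  # float64 "ground truth" pixels

def round_trip(img, store):
    """Apply gamma and inverse gamma 20 times, re-quantizing after each step."""
    if store is np.uint16:
        quantize = lambda x: np.round(x * 65535.0) / 65535.0  # 16-bit grid
    else:
        quantize = lambda x: x.astype(store).astype(np.float64)
    for _ in range(20):
        img = quantize(img ** 0.5)  # brighten
        img = quantize(img ** 2.0)  # darken (exact inverse)
    return img

for store in (np.uint16, np.float32):
    drift = np.abs(round_trip(reference.copy(), store) - reference).max()
    print(store.__name__, "max drift:", drift)
# The 16-bit path typically drifts orders of magnitude more than float32
```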

If you integrate your images with PixInsight, ImageIntegration generates 32-bit floating point pixel data (64-bit as an option) and uses 64-bit floating point for all intermediate calculations, so you'll have no problems.

As for how other applications store images in different formats, I'll restrict my comments to the problems we find when those images are imported into PixInsight. Regarding CCDStack, I recommend writing calibrated images as 16-bit unsigned FITS files. Other formats cause problems and usually require rescaling.

While I'm writing this, I see Vicent has also answered your question. He has given a good practical example.

Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Stephane Murphy

  • Member
  • *
  • Posts: 63
Re: PI Workflow question
« Reply #22 on: 2009 June 16 12:18:07 »
Thanks Vicent, I forgot about the post-processing impact :) Of course it makes sense.
Thanks
Stephane
Stephane Murphy
CDK 12.5 Planewave Instrument
Paramount ME
SBIG STL11000M
SBIG ST-402 Guider
Astrodon MOAG
Astrodon Filters

Offline Stephane Murphy

  • Member
  • *
  • Posts: 63
Re: PI Workflow question
« Reply #23 on: 2009 June 16 12:20:56 »
As usual, great explanation, thanks.

So if I save as unsigned 16 bits in CCDStack, open the file in PI and re-save it as 32 bits, that will work for further processing without losing information, correct?

My goal is to move to PI for my calibration process; I just haven't had the time to learn that part yet.


Thanks
Stephane Murphy
CDK 12.5 Planewave Instrument
Paramount ME
SBIG STL11000M
SBIG ST-402 Guider
Astrodon MOAG
Astrodon Filters

Offline Simon Hicks

  • PixInsight Old Hand
  • ****
  • Posts: 333
Re: PI Workflow question
« Reply #24 on: 2009 June 19 07:46:15 »
Hi all,

I'm going to stick my neck out here because I seem to be in a minority....and you guys produce much better pictures than me! But there's something I don't understand and maybe I can learn.

The issue is related to my earlier question about when to use smoothing (ACDNR) in the process. I assumed it was best applied after DBE and colour correction, but still while the image is linear.

But lots of people came back and replied that they leave it till much later in the process, i.e. after Histogram Stretch.

My 'reasoning' for doing it during the linear stage is for the following;

1. Some of the other processes you do whilst linear (like wavelets, deconvolution, etc.) seem to work best in high S/N images. If you apply them to areas with lower S/N then they start to create artifacts....sharpening up the noise is not good :-( .  However, applying a bit of subtle smoothing to the low S/N parts (employing masks and the like) whilst maintaining good edge definition (which ACDNR is good at doing) then allows you to apply deconvolution and/or wavelets to the linear images in a better way.

2. Leaving the smoothing till after the HistogramStretch just means that you are amplifying the noise relative to the signal (the HistogramStretch amplifies the low intensity parts relative to the high intensity parts of the image). So you then need to apply much more aggressive smoothing to the image.

3. I've noticed that my sky background looks much more natural if the neutralization and colour correction is done after it has had some smoothing applied. I think this is because the chromatic noise is quite large in the background.

What's wrong with the above reasoning?

Cheers
         Simon

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PI Workflow question
« Reply #25 on: 2009 June 20 08:31:03 »
Hi Simon,

Quote
applying a bit of subtle smoothing to the low S/N parts (employing masks and the like) whilst maintaining good edge definition (which ACDNR is good at doing) then allows you to apply deconvolution and/or wavelets to the linear images in a better way.

You have much better resources to deal with this problem in PixInsight:

- Regularized deconvolution algorithms have been designed to restrict image restoration to significant structures. These algorithms apply wavelet-based noise reduction techniques in tandem with deconvolution, and are extremely efficient, although it's true that fine-tuning regularization parameters requires practice and experience (and I'll try to make a video tutorial on this topic as soon as I can).

- The latest versions of ATrousWaveletTransform, available from PixInsight 1.5, also implement efficient noise reduction algorithms, which you can use along with edge enhancement (positive biases) applied at specific dimensional scales.

Both solutions, used along with the new deringing algorithms (also from PI 1.5), offer IMHO the best tools currently available to deal with the SNR problem in image restoration. Of course, the limits between high-SNR and low-SNR data are uncertain by definition (by the definition of noise), so masks also play a fundamental role in modulating restoration and edge-enhancement processes, as sketched below.
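
As an illustration of that last point, mask-modulated restoration can be sketched in a few lines: run a (here deliberately unregularized) Richardson-Lucy deconvolution, then blend it with the original through a mask so that only high-SNR regions receive the full restoration. This is a generic sketch, not PixInsight's regularized implementation; the Gaussian PSF and the luminance-based mask are assumptions for the example.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Plain (unregularized) Richardson-Lucy deconvolution."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def gaussian_psf(size=15, sigma=2.0):
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r / sigma) ** 2)
    psf = np.outer(g, g)
    return psf / psf.sum()

# image: any 2-D linear luminance array scaled to [0, 1]
image = np.random.default_rng(2).random((128, 128)) * 0.01
restored = richardson_lucy(image, gaussian_psf())

# Mask ~1 on bright (high-SNR) structures, ~0 on the background, so the
# deconvolution is only fully applied where the signal supports it
mask = np.clip(image / np.quantile(image, 0.99), 0.0, 1.0)
result = mask * restored + (1.0 - mask) * image
```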

ACDNR doesn't work well with linear images. This algorithm uses brightness variations (gradients) to adapt itself to the characteristics of the image, in order to protect significant structures from excessive smoothing --this is what we call edge protection. In linear images these variations are in general too weak to drive ACDNR's edge protection correctly, and the algorithm tends to destroy significant data.
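
A quick way to see this numerically (a sketch; the faint Gaussian "nebula" and the 0.01 midtones value are arbitrary choices): compare the gradient magnitudes of the same structure before and after a nonlinear stretch.

```python
import numpy as np

def mtf(m, x):
    # Midtones transfer function, as used by tools like HistogramTransformation
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# Faint Gaussian blob on a dim pedestal: typical linear deep-sky data
y, x = np.mgrid[-64:64, -64:64]
linear = 0.001 + 0.02 * np.exp(-(x**2 + y**2) / (2.0 * 12.0**2))
stretched = mtf(0.01, linear)

for name, img in (("linear", linear), ("stretched", stretched)):
    gy, gx = np.gradient(img)
    print(name, "max gradient:", np.hypot(gy, gx).max())
# The stretched image shows far stronger gradients for edge protection to use
```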

Quote
Leaving the smoothing till after the HistogramStretch just means that you are amplifying the noise relative to the signal (the HistogramStretch amplifies the low intensity parts relative to the high intensity parts of the image). So you then need to apply much more aggressive smoothing to the image.

That's true, but if histogram transformations are applied judiciously, the increase in local contrast between noise structures shouldn't be a problem, in general.

However, from PI 1.5, as I've said, you have the new noise reduction in ATW, which works superbly on linear (and nonlinear) images.

Quote
I've noticed that my sky background looks much more natural if the neutralization and colour correction is done after it has had some smoothing applied. I think this is because the chromatic noise is quite large in the background.

This is interesting. Could you post an example of this, so that we can use it as a test case to improve the color calibration tools?
Juan Conejero
PixInsight Development Team
http://pixinsight.com/