Author Topic: THELI vs PI  (Read 21624 times)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
THELI vs PI
« on: 2013 September 25 13:29:08 »
Hi,

I have been curious about THELI lately, an image processing pipeline used by several professional observatories. It runs on Linux, and although it is a lot less friendly than PI and uses more intimidating jargon, I was interested in comparing how it handles data vs PI.

One thing that came up right from the start, and that sounded interesting to me, is the notion of weighting images. There is one for each subexposure (basically the master flat) that carries information about each pixel's quality (sensitivity) and goes through the same transformations as the corresponding sub. When stacking, this information is used to maximize SNR at the pixel level.
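To make the idea concrete, per-pixel weighted stacking amounts to something like the following toy numpy sketch. This is my own illustration, not THELI's actual code; `weighted_stack` and its inputs are hypothetical names.

```python
import numpy as np

def weighted_stack(subs, weights):
    """Combine registered subexposures with per-pixel weights.

    subs    : list of 2-D arrays (calibrated, registered subexposures)
    weights : list of 2-D arrays of the same shape; a higher weight means
              a more sensitive / more trustworthy pixel (e.g. derived from
              the master flat and transformed like the sub itself).
    """
    num = np.zeros_like(subs[0], dtype=np.float64)
    den = np.zeros_like(subs[0], dtype=np.float64)
    for s, w in zip(subs, weights):
        num += w * s
        den += w
    # Where every weight is zero (bad pixels in all subs), output zero
    # instead of dividing by zero.
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

The point is that the division by the summed weights happens per pixel, so each output pixel is dominated by the subs in which that particular pixel was most reliable.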

I tried it on noisy data taken at high f-ratio, and noticed a visible improvement vs PI's calibration.

Thoughts?

cheers
Ignacio
« Last Edit: 2013 September 26 06:11:17 by Ignacio »

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #1 on: 2013 September 26 05:45:29 »
Anyone? Juan, have you considered this at any point?

I did a second example, and got about 30% noise reduction, quite significant.

I was thinking that such a weighting matrix/image could also carry bayer color information, so that "debayering" can take place together with stacking.
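What I have in mind could be sketched like this: each raw CFA pixel contributes only to the weighted sum of its own color channel, so debayering interpolation is deferred until after stacking (dithering would fill the gaps). A toy sketch assuming an RGGB pattern and integer-grid registration; none of these names come from THELI or PI.

```python
import numpy as np

# RGGB pattern: channel index (0=R, 1=G, 2=B) at each CFA position.
RGGB = np.array([[0, 1],
                 [1, 2]])

def accumulate_cfa(cfa_sub, weight, num, den):
    """Add one registered CFA subexposure into per-channel sums.

    cfa_sub : 2-D raw frame (calibrated; registered on the integer grid
              for the simplicity of this sketch)
    weight  : 2-D per-pixel weight map
    num,den : (3, H, W) running weighted sums and weight totals
    """
    h, w = cfa_sub.shape
    # Channel of each pixel according to its position in the CFA.
    ch = RGGB[np.arange(h)[:, None] % 2, np.arange(w)[None, :] % 2]
    for c in range(3):
        mask = ch == c
        num[c][mask] += weight[mask] * cfa_sub[mask]
        den[c][mask] += weight[mask]
```

After all subs are accumulated, the stacked RGB image would be `np.where(den > 0, num / den, 0)`; pixels a channel never sampled remain gaps unless dithering covered them.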

Ignacio

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: THELI vs PI
« Reply #2 on: 2013 September 26 07:44:09 »
Hi Ignacio,

Do you have a link to a document describing this procedure? Can you upload the data set that you have used in this test? How did you measure noise reduction?
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #3 on: 2013 September 26 08:58:41 »
Hi Juan,

Here are some links:
General Doc: http://www.astro.uni-bonn.de/theli/gui/index.html
Description of Weighting images: http://www.astro.uni-bonn.de/~mischa/datareduction/weighting.html

I have attached an example of a weighting image for a single subexposure, after being transformed through astrometric registration: a stretched, zoomed crop of a detail that shows some vignetting, the shadow of a speck of dust, and the inter-pixel variations in QE (this is for a Canon, green channel; I am not sure how the Bayer matrix is handled in this process).

In both cases (PI and THELI) I measured the overall noise of the linear stack using the PI script NoiseEvaluation, after the same color calibration, and after adjusting the black point so that the modal values of both histograms were aligned (with no clipping).
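For reference, that comparison is roughly equivalent to the following sketch, with a MAD-based noise estimate standing in for PI's multiscale NoiseEvaluation and a simple offset standing in for the black-point adjustment (all function names here are mine):

```python
import numpy as np

def mode_estimate(img, bins=1024):
    """Crude modal value of a linear image from its histogram."""
    hist, edges = np.histogram(img, bins=bins)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1])

def noise_mad(img):
    """Robust noise estimate: scaled median absolute deviation."""
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

def compare(stack_a, stack_b):
    """Align black points so the histogram modes match, then compare noise."""
    shifted_b = stack_b - (mode_estimate(stack_b) - mode_estimate(stack_a))
    return noise_mad(stack_a), noise_mad(shifted_b)
```

Note that an additive black-point shift does not change a scale-invariant noise estimate; the alignment only matters so the two images are judged over comparable signal levels.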

I will upload linear stacked master lights for PI and THELI reduction for comparison, and provide links in this thread, in a little while.

Ignacio





Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #4 on: 2013 September 26 11:34:39 »
Ok, here are links to both master lights, built from exactly the same data set, from an M83 acquisition done at f/14 (barlowed). I have also attached a comparison table generated by SubframeSelector. I tried to retain the same frame coverage in both final stacks, and also applied the same color calibration procedure to both in PI. Beware that they are about 90MP each.

http://www.pampaskies.com/gallery3/var/other/master_light_PI.fit

http://www.pampaskies.com/gallery3/var/other/master_light_THELI.fit

The workflow for data reduction in PI is the standard one, with pretty loose clipping in building all masters (3.5/5 for the calibration masters, and 5/6.5 for the final stack). Data is calibrated with biases, darks and flats in CFA format (forced detection), VNG debayered, and registered with the new SA module (choosing the best frame as reference with SubframeSelector), with distortion correction. The final stack in PI is done with II Average/Additive/Noise Evaluation/Iterative k-sigma, and Linear fit clipping/zero offset and scaling (5/6.5) for outlier rejection.

As you can tell from the comparison table, the THELI data reduction appears to be significantly better, in spite of not having the dark scaling capabilities of PI (the darks were fairly well matched anyway). I think the key lies in these weighting matrices, combined with the significant pixel-to-pixel QE variations of a Bayered sensor.

Please, Juan, take a look. There could be something worth adding to PI here.

Ignacio

Offline Andres.Pozo

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 927
Re: THELI vs PI
« Reply #5 on: 2013 September 26 12:48:24 »
The THELI image has less noise because it is more blurred than the PI one. I have seen this same effect in images stacked with DSS. PI uses a much better interpolation algorithm when registering the images, one that blurs the image less.

The attached animation is a scaled crop, without interpolation, of an area at the center of the image.


Offline marekc

  • PixInsight Addict
  • ***
  • Posts: 177
Re: THELI vs PI
« Reply #6 on: 2013 September 26 12:54:03 »
Thanks for posting that comparison, Andres.

Perhaps I'm being too harsh on THELI - which I have never used, I must admit - but when I first saw this thread, I immediately thought "I'll bet it just blurs the image. Ten bucks says those images look less sharp, with less detail and bloat-ier stars." And that's how it looks to me, based on the images you posted.

Again, maybe I'm being unfair, not having done a real comparison myself, but those images seem, to me, to confirm my first knee-jerk guess.

- Marek

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #7 on: 2013 September 26 13:11:00 »
Thanks for the replies and for taking the time.

Andres, you could be right; however, the FWHM metrics are almost identical, and star support is better in THELI, so I wonder. Besides, THELI offers several ways of interpolating a registered image. I just tried the default one (lanczos3); there could be others that blur less. I have to try that, but I see your point.

In any case, I still think that tracking the "sensing quality" of each pixel in each sub makes sense, no pun intended. I have always noticed the amount of structure at the pixel level in my master flats, which is used to "normalize" brightness but not to optimize SNR.

Ignacio




Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: THELI vs PI
« Reply #8 on: 2013 September 26 13:20:31 »
something else that's interesting about these images is that the banding noise in the PI image is much worse than in the THELI image; in fact, it is almost nonexistent in the THELI image. i suppose this could be related to the softness of the THELI image, but it is strange.



Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #9 on: 2013 September 26 13:40:04 »
Good point. In both cases I applied Canon banding reduction, with maximum protection of highlights, to the final linear image, but there is an extra step in the THELI pipeline, called background collapse correction, that addresses this same issue at the subexposure level.

Ignacio


Offline tsaban

  • Member
  • *
  • Posts: 50
Re: THELI vs PI
« Reply #10 on: 2013 September 26 13:55:37 »
Thanks, Ignacio, for this information. I had already forgotten about THELI, which became popular in Austria a few years ago. I used THELI briefly before my first try of PixInsight. I had attended a THELI presentation at CEDIC and wanted to try it. The version at that time was, in my opinion, difficult to use. The nomenclature is very different from what amateurs use, so the learning curve was steep for me. In the end I used it for only one galaxy image, which came out clean, with weak but natural-looking colors.

CS
Tahir

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: THELI vs PI
« Reply #11 on: 2013 September 26 14:01:00 »
Hi Ignacio,

Thank you for uploading these images.

Quote
the THELI data reduction appears to be significantly better

I don't think so. Of the two images that you have uploaded, the image generated by PixInsight is much better, IMHO. The other image lacks structures at the one-pixel level; it looks as if a low-pass filter had been applied to it. This is probably the combined result of different (worse, IMO) debayering and interpolation methods. The following screenshot compares both images side by side.


By applying a low-pass filter to the PI image, the "smoothness" of the THELI image can be easily emulated. This is a quick example with wavelets:


Finally, the THELI image has strong aliasing artifacts that are absent in the PI image. These artifacts are the result of an incorrect interpolation algorithm, or of a poor implementation. I have marked a couple of these problems on the following screenshot. Once you know where the green circles are located, look at the same regions on the first screenshot and compare with the PI image.


Regarding star shapes, the stars in the PI image have more Gaussian profiles (as expected from the integration of 25 images), while the stars in the other image tend to be better fitted with Moffat 4 and Moffat 6 functions. In this regard, however, there are no significant differences in my opinion.

Quote
Theli offers several ways of interpolating a registered image. I just tried the default one (lanczos3), and there could be others that blur less.

If correctly implemented, the results of the Lanczos-3 algorithm should be very close to the best result that can be achieved by interpolation in terms of detail preservation and minimal aliasing. Please note that PixInsight also uses Lanczos interpolation (also Lanczos-3 by default).
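For reference, the Lanczos-3 kernel is just a sinc windowed by a wider sinc; a minimal 1-D sketch follows (an illustration of the algorithm, not PixInsight's or THELI's implementation, and with naive border clamping):

```python
import numpy as np

def lanczos3(x):
    """Lanczos-3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0.

    np.sinc is the normalized sinc, sin(pi*x) / (pi*x).
    """
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def interpolate_1d(samples, t):
    """Interpolate a 1-D signal at fractional position t with Lanczos-3."""
    i0 = int(np.floor(t))
    taps = np.arange(i0 - 2, i0 + 4)          # the 6 nearest samples
    idx = np.clip(taps, 0, len(samples) - 1)  # clamp at the borders
    w = lanczos3(t - taps)
    # Normalize so the weights sum to 1 (matters near the borders).
    return np.sum(w * np.asarray(samples)[idx]) / np.sum(w)
```

In 2-D, registration applies this kernel separably along both axes; the negative kernel lobes are what preserve detail, and they are also the source of ringing when the implementation is careless.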

Quote
I still think that tracking the "sensing quality" of each pixel for each sub makes sense

I also tend to disagree here. In my opinion, image weights must be based on quality estimates computed for the whole images, not for individual pixels.

Quote
the banding noise in the PI image is much worse than in the THELI image

This is true. The only reason that I can figure out is that THELI has applied a banding reduction routine as part of the preprocessing task. This is not the case with PixInsight's standard preprocessing tools. To reduce banding, you have to apply specific tools in PixInsight (e.g. Georg Viehoever's excellent CanonBandingReduction script).
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #12 on: 2013 September 26 14:35:53 »
Thanks for your response, Juan.

Quote
I also tend to disagree here. In my opinion, image weights must be based on quality estimates computed for the whole images, not for individual pixels.

Please help me understand why. For example, if half the sensor has lower QE than the other half, and you do a meridian flip mid-session (without rotating the camera), wouldn't you want to use that information to weight each half-frame accordingly, even though the full-frame SNR estimates could be similar? I know this is an "artificial" example, but it helps make the point (the same could be argued at the pixel level with dithering). The extreme case would be bad pixels or columns, which you want to ignore completely (weight = 0).
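In code, the meridian-flip scenario I have in mind would look roughly like this (a toy sketch with made-up numbers; `stack_with_maps` and the QE map are hypothetical):

```python
import numpy as np

def stack_with_maps(subs, weight_maps):
    """Per-pixel weighted mean; a weight of 0 excludes a pixel entirely."""
    num = sum(w * s for s, w in zip(subs, weight_maps))
    den = sum(weight_maps)
    return np.where(den > 0, num / np.where(den > 0, den, 1), 0.0)

# Hypothetical sensor: left half QE 1.0, right half QE 0.5, one dead column.
h, w = 4, 8
qe = np.ones((h, w))
qe[:, w // 2:] = 0.5
qe[:, 2] = 0.0                        # dead column: weight 0

pre_flip_weight = qe                  # frames taken before the meridian flip
post_flip_weight = qe[::-1, ::-1]     # flipped frames: the weight map flips too
```

Each half-frame is weighted by the QE it actually saw, and the dead column in the pre-flip frames is filled entirely from the post-flip frames, even though a whole-frame SNR estimate would rate both sets about the same.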


Quote
the banding noise in the PI image is much worse than in the THELI image

Quote
This is true. The only reason that I can figure out is that THELI has applied a banding reduction routine as part of the preprocessing task. This is not the case with PixInsight's standard preprocessing tools. To reduce banding, you have to apply specific tools in PixInsight (e.g. Georg Viehoever's excellent CanonBandingReduction script).

This is exactly the case, as I pointed out in a previous post, which raises the question: is it better to correct the banding problem at the subexposure level or on the final stack?

Thanks again,
Ignacio

« Last Edit: 2013 September 26 14:42:01 by Ignacio »

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: THELI vs PI
« Reply #13 on: 2013 September 26 15:30:14 »
IMHO it's better to do the banding reduction at the subexposure level, if only because on a multi-night project where the camera angle is not exactly the same, the banding gets superposed on itself at different angles in the integration, and then it's impossible to remove properly.

on one particularly bad set of images i had, i used ImageContainer and CanonBandingReduction to pre-process the subs. despite trying to take care to get the highlight protection right, when i finally got to the integrated image, it was clear that background areas to the left and right of the DSO in the center had been severely darkened. although it was not proper, i re-ran CanonBandingReduction on the integrated image and it repaired the damage.

so, at least for me, for whatever reason, it was difficult to get the banding reduction right at the subexposure level but i still think it's better done there.

i had always thought of that banding issue as purely a canon problem, but if THELI has this as part of the pipeline, other sensors must suffer from it too.

rob

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #14 on: 2013 September 26 15:48:39 »
...
Quote
I still think that tracking the "sensing quality" of each pixel for each sub makes sense

I also tend to disagree here. In my opinion, image weights must be based on quality estimates computed for the whole images, not for individual pixels.
...
I played with this idea of per-pixel sensitivity and noise models a while ago. At least in my Canon EOS 40D pictures it is obvious that different pixels have different noise properties: it is not just Poisson noise that can be seen, and in darks the noise does not grow with exposure time as expected. But to get reliable statistics that would help in determining the per-pixel weights (and scaling/offset/non-linearity/... factors), IMHO we would need many more than 25 images, maybe 200. If you have only 25 images, it is probably better to estimate global noise properties using all pixels, just as ImageIntegration currently does.

It would probably be possible to build a per-pixel model by taking lots of flats/darks with different illuminations/exposure times, and then computing a sensitivity and noise model for each pixel (from the library of flats, not from the lights). Whether using these statistics for better per-pixel weighting and correction would actually produce better images, I don't know; I never found the time to fully explore the idea.
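The first step of what I mean, estimating a per-pixel offset and variance from a large calibration library and turning it into weights, could be sketched like this (a toy illustration with names of my own; whether it helps in practice is exactly the open question):

```python
import numpy as np

def per_pixel_model(darks):
    """Per-pixel mean and variance from a large stack of darks.

    darks : array of shape (N, H, W); N should be large. With only a few
            dozen frames the per-pixel variance estimate is too noisy to
            be useful, which is the core objection above.
    Returns (offset, variance) maps.
    """
    darks = np.asarray(darks, dtype=np.float64)
    return darks.mean(axis=0), darks.var(axis=0, ddof=1)

def inverse_variance_weights(variance, floor=1e-12):
    """Weight map: noisy pixels are down-weighted, quiet ones trusted."""
    w = 1.0 / np.maximum(variance, floor)
    return w / w.max()   # normalize so the best pixel has weight 1
```

A full model would also need per-pixel sensitivity (from flats at several illumination levels) and possibly non-linearity terms; this sketch only covers the dark-noise part.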

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)