Author Topic: THELI vs PI  (Read 21622 times)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #15 on: 2013 September 26 16:03:01 »
...
Quote
I still think that tracking the "sensing quality" of each pixel for each sub makes sense

I also tend to disagree here. In my opinion, image weights must be based on quality estimates computed for the whole images, not for individual pixels.
...
I played with this idea of per-pixel sensitivity and noise models a while ago. At least in my Canon EOS 40D pictures, it is obvious that different pixels have different noise properties; it is not just Poisson noise that can be seen, and in darks it does not grow as expected with exposure time. But to get reliable statistics that would help in determining the per-pixel weights (and scaling/offset/non-linearity/... factors), IMHO we would need many more than 25 images, maybe 200. If you have only 25 images, it is probably better to estimate global noise properties using all pixels, just as ImageIntegration currently does.

It would probably be possible to build a per-pixel model by taking lots of flats/darks with different illuminations/exposure times, and then computing a sensitivity and noise model for each pixel (from the library of flats, not from the lights). Whether using these statistics for better weighting and correction on a per-pixel basis would actually produce better images, I don't know. I never found the time to fully explore the idea.
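The kind of per-pixel library Georg describes could be prototyped along these lines (fully simulated: the sensitivities, read noise, and frame counts below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated library: 200 flat frames of a 4x4 sensor patch, where each
# pixel has its own sensitivity (QE) and its own read noise.
n_frames, h, w = 200, 4, 4
sensitivity = rng.uniform(0.8, 1.0, size=(h, w))
read_sigma  = rng.uniform(1.0, 3.0, size=(h, w))

flux = 1000.0  # uniform illumination, in electrons
flats = (rng.poisson(flux * sensitivity, size=(n_frames, h, w))
         + rng.normal(0.0, read_sigma, size=(n_frames, h, w)))

# Per-pixel statistics over the stack: the mean recovers sensitivity,
# the variance characterizes each pixel's total noise.
per_pixel_mean = flats.mean(axis=0)
per_pixel_var  = flats.var(axis=0, ddof=1)
est_sensitivity = per_pixel_mean / flux

print(float(abs(est_sensitivity - sensitivity).max()) < 0.05)  # → True
```

With 200 frames the per-pixel means recover the simulated sensitivities to well under a percent; with only 25 frames the scatter would be several times larger, which is the point made above.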

Georg

Thanks for the contribution, Georg, although I am not sure I follow you.  :smiley: The weighting image/matrix THELI uses, based on the master flat, is just a per-pixel sensitivity reference used to weight each pixel during stacking. It doesn't try to be a full characterization of the noise statistics of each pixel. It is similar to what you do when combining subexposures of different durations, but at the pixel level. It is not that complicated.

cheers
Ignacio

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: THELI vs PI
« Reply #16 on: 2013 September 26 18:54:53 »
I suspect that if the rejection algorithm is working well, and we have a lot of images, then the weighting part does not play a crucial role... I may be wrong, but differences of a few pixels (in distance) should not be large enough to make a difference. Of course this should be different in the presence of vignetting, or large dust particles near the sensor, but in those cases rejection might be enough to discard them.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #17 on: 2013 September 26 20:04:15 »
Hi Carlos,

Of course pixel rejection will take care of variations to some degree, but mostly in the more extreme cases (unless your clipping is aggressive, which brings other noise problems). But given a priori knowledge of how the pixels respond, it seems a waste not to use that information and let rejection deal only with the random, unpredictable events.

Vignetting is not the most interesting problem to address, since it stays in more or less the same region of the object regardless of dithering. But small-scale sensitivity variations, such as pixel QE variations, or specks of a size comparable to the dithering amplitude, are good candidates for better calibration (similar to rejection). In fact, dithering is recommended not only to deal with hot/cold residual pixels after bias/dark subtraction, via rejection, but also to average out these pixel sensitivity variations (very important with DSLRs).

Here is an example of a cropped, compressed center region of a master flat that shows pixel-scale sensitivity variations. Some of it is just residual noise from stacking "only" 30 flat frames, but much of it is real pixel-to-pixel sensitivity variation. The center region is very evenly illuminated, yet the pixel intensities range from 0.53 to 0.97.
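A rough sanity check on how much of that 0.53-0.97 spread could be stacking residue (assuming an illustrative per-frame pixel noise of 0.05 in normalized units, which is not measured from the actual frames):

```python
import numpy as np

n_flats = 30
frame_sigma = 0.05             # assumed per-frame pixel noise (normalized)
observed_spread = 0.97 - 0.53  # range quoted for the master-flat crop

# Stacking N frames shrinks random noise by sqrt(N); whatever spread
# remains beyond that must be real per-pixel sensitivity variation.
residual_sigma = frame_sigma / np.sqrt(n_flats)
print(round(residual_sigma, 4), round(observed_spread, 2))  # → 0.0091 0.44
```

Even under a generous noise assumption, the residual stacking noise is an order of magnitude smaller than the observed range, supporting the claim that most of the variation is real.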

Maybe the improvement is not worth the effort, but there should be some level of improvement.

cheers
Ignacio



Offline oldwexi

  • PixInsight Guru
  • ****
  • Posts: 627
    • Astronomy Pages G.W.
Re: THELI vs PI
« Reply #18 on: 2013 September 27 01:20:48 »
Hi Ignacio!
Before stacking, the images have to be registered.
They are shifted, rotated, and stretched at the pixel and subpixel level.
So after registration, the flat pixel at position (1,1) is not at position (1,1) anymore.

How, then, can the master flat be used as a reference for
processing a registered, rotated, shifted, and otherwise transformed image?

How can you be sure about the 100% linearity of your chip?
A pixel in a 5-second flat, in my opinion, does not behave the
same way as a pixel in a 10-minute light frame.

So far my understanding is: this master-flat weighting sounds nice for beginners,
but if you get deeper into it, I personally see it as a marketing gimmick...

I would be happy to understand this better

Gerald.


Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: THELI vs PI
« Reply #19 on: 2013 September 27 01:45:04 »
Hi Ignacio,

Let's see if I understand the weighting method that you are proposing:

- The 'weighting matrix' is just the normalized master flat, that is, the master flat divided by its mean value (or a linear function of it).

- The weighting matrix is applied by multiplication to each light frame, just after calibration and before debayering (in the case of CFA images) and registration.

I must be missing or not understanding something important, since this procedure is the inverse of a flat fielding operation. During the calibration phase, we have already divided each light frame (after overscan correction, master bias subtraction, and optimized master dark subtraction) by the normalized master flat frame. This doesn't make any sense IMO, so what am I missing here? Like Gerald, I am willing to understand this.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

ruediger

  • Guest
Re: THELI vs PI
« Reply #20 on: 2013 September 27 02:06:09 »
I would like to add my experiences with THELI from the beginning of this year. The THELI debayering (PPG) was better, as were the overall star shape and the measured noise of the stack. See the attached pictures. But after checking some of the algorithms that are used during calibration, I'm not sure at all why THELI performs better. Most of the algorithms look very "simple". During my tests, I used most of the THELI defaults:
- master calibration frames generated with a median combine, with rejection of the maximum value beforehand
- light calibration (DSLR) with darks only, no bias used and no dark scaling (THELI doesn't offer this)
- integration with sigma=4 rejection

BTW:
Using flats for weighting obviously makes sense when constructing mosaics, when e.g. an underexposed edge of one panel lies inside the well-exposed center of another panel. The edge shouldn't get the same weight as it would in PixInsight.

RĂ¼diger


Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #21 on: 2013 September 27 04:42:13 »
Hi Juan,

...I must be missing or not understanding something important, since this procedure is the inverse of a flat fielding operation....

Let's assume for the moment that all pixels in a raw image have the same noise; for instance, pixels with true value x have a Poisson distribution with mean x and variance x. Now assume we have two pixels with values x and y=x/2. y has this value because of vignetting, which would be corrected by flat fielding, multiplying y by the flat correction factor f=2. After this, y would have the correct mean value x, but its variance (=noise) would be y*f**2 = (x/2)*4 = 2x. So pixel y is noisier than pixel x. When registering the images, the noise estimates for each pixel of course have to be transformed into noise estimates for the pixels of the registered images. Using these noise estimates, the optimal weights of those pixels during integration would clearly need to be different. The assumption that ImageIntegration makes (that the weight is the same for all pixels of an image) is simply not true, just because of flat fielding.
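This arithmetic is easy to check numerically; a small numpy simulation (illustrative values, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(1)

x, f, n = 10000.0, 2.0, 200000   # true signal, flat factor, sample count

# A Poisson pixel with mean x, and a vignetted pixel with mean x/2 that
# gets multiplied by f = 2 during flat fielding.
bright    = rng.poisson(x, size=n).astype(float)
corrected = rng.poisson(x / f, size=n).astype(float) * f

print(round(bright.var() / x, 2))     # ~1: variance x, as assumed
print(round(corrected.var() / x, 2))  # ~2: variance doubled by flat fielding
```

Both pixels end up with the same mean, but the flat-corrected one carries twice the variance, which is the whole argument for per-pixel weights.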

There are of course other factors besides flat fielding that may contribute to this effect. On my Canon EOS 40D, I have the impression that some pixels are noisier than others. As a consequence, they should also have a lower weight during integration. That's probably what THELI also tries to do. The difficulty here is probably the necessary measurement of the pixel characteristics, which IMHO would require a lot of calibration shots.

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #22 on: 2013 September 27 04:49:13 »
Many thanks to all for engaging in this discussion and sorry for not making myself clear.

Let me try again. First off, this thread from the start was not intended as a "competition" between PI and THELI to see which software is "better" overall. Rather, it is a comparison of the methods and steps taken by the two approaches, to see if there is anything to be learned or improved.

One of the things I found interesting and worth sharing on this forum is the notion of weighting images or matrices. This is my understanding of how it works:
1. a master flat is built in the standard way (by bias/dark calibrating a set of flats, and stacking them).
2. this master flat is used to calibrate the subexposures, also in the standard way (by division), so that areas of the sensor that receive fewer photons for whatever reason, or that convert fewer of them into electrons, are properly amplified to compensate and recover an "even illumination".
3. then the master flat is normalized to a standard scale (I don't know exactly what the method is here), and this becomes the basis of a Global Weight (GW).
4. Here is where the interesting part starts. On the GW, one can automatically or manually set pixel weights to zero (i.e., for a bad column, cold pixel, etc.). The GW is the basis for each subexposure's weight, and is copied (as many times as there are subexposures) and associated with each subexposure by a naming convention.
5. Now, these can be manually or automatically modified to reflect things that apply to a single subexposure, like satellite trails (by setting the pixel weights to zero along the trail on the weight image associated with the trailed subexposure), cosmic ray hits, etc.
6. These weight images or matrices are kept paired with each corresponding subexposure (two equal-size images), and go through the same geometric transformations as the paired subexposure when it is registered.
7. When stacking, each (x,y) pixel of a registered sub in a stack is weighted with the value found in the corresponding (x,y) "registered" Weight image.
8. All of the above is very clear for monochrome sensors, where each channel is treated with its corresponding master flat, independent of the others. In the case of Bayered sensors, THELI handles this by splitting the colors (I don't know exactly how, some form of debayering) at the beginning (after calibration), and apparently using the same normalized master flat for the three color channels (I'm not sure about this either; it may split the flat's colors in the same way as the subs). But the concept is the same.
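My understanding of steps 3-7 can be sketched as a toy numpy pipeline (all numbers invented, np.rot90 standing in for real registration; this is an illustration of the concept, not actual THELI code):

```python
import numpy as np

rng = np.random.default_rng(2)

master_flat = rng.uniform(0.6, 1.0, size=(8, 8))
global_weight = master_flat / master_flat.mean()   # step 3: normalized GW
global_weight[:, 2] = 0.0                          # step 4: zero a bad column

subs = [rng.normal(100.0, 5.0, size=(8, 8)) for _ in range(3)]
weights = [global_weight.copy() for _ in subs]     # one weight image per sub
weights[1][0, :] = 0.0                             # step 5: e.g. a satellite trail

# Step 6: each weight image gets the same geometric transform as its sub.
reg_subs = [np.rot90(s) for s in subs]
reg_weights = [np.rot90(w) for w in weights]

# Step 7: per-pixel weighted combination; zero-weight pixels drop out.
num = sum(w * s for w, s in zip(reg_weights, reg_subs))
den = sum(reg_weights)
stack = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
print(stack.shape)  # → (8, 8)
```

Note how the bad column and the trail row contribute nothing to the stack, without any statistical rejection being involved.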

In this way, each pixel value from each subexposure in the final stack gets the proper weight, one that reflects the efficiency of the instrument (optics/camera) consistently.

To add to what Georg just posted. I think his example is exactly right, but I don't think Theli tries to estimate intrinsic pixel noise characteristics beyond the standard calibration. It tries to estimate the instrument efficiency at the pixel level, solely by using a master flat.

I hope this clarifies the discussion.

cheers
Ignacio
 

« Last Edit: 2013 September 27 05:06:28 by Ignacio »

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #23 on: 2013 September 27 05:07:43 »
...To add to what Georg just posted. I think his example is exactly right, but I don't think Theli tries to estimate intrinsic pixel noise characteristics beyond the standard calibration. It tries to estimate the instrument efficiency at the pixel level, solely by using a master flat. ...

THELI seems to do the flat fielding on the registered images, which allows it to account for pixel defects etc. (step 4) as part of flat fielding (=pixel weighting) during the integration. I am not sure how this compares to pixel rejection in PI. THELI's procedure may also account for the different noise levels due to flat fielding (the idea I outlined in the first paragraph; I would need to look into the details more deeply to determine this), but it certainly will not account for different pixel characteristics (second paragraph).

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #24 on: 2013 September 27 05:57:05 »
THELI seems to do the flat fielding on the registered images, which allows it to account for pixel defects etc. (step 4) as part of flat fielding (=pixel weighting) during the integration. I am not sure how this compares to pixel rejection in PI.

Georg

Flat fielding usually refers to the standard calibration by division with a reference flat image, not to pixel weighting in the sense described here (as a variance weight) for the final stacking. I clarify this just to make sure we don't get hung up on terminology.

Pixel rejection is something else; it is used on top of pixel weighting, and the two complement each other. THELI uses it too, of course, during stacking.

Ignacio


Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #25 on: 2013 September 27 06:29:58 »
Ignacio,

...Flat fielding usually refers to the standard calibration by division with a reference flat image, not to pixel weighting in the sense described here (as a variance weight) for the final stacking. I clarify this just to make sure we don't get hung up on terminology...

Does THELI do a separate flat fielding step, or is this integrated into the pixel weighting during integration? And what exactly do you mean by the term "efficiency" in "It tries to estimate the instrument efficiency at the pixel level, solely by using a master flat"? If you mean the quantum efficiency (responsivity, http://en.wikipedia.org/wiki/Responsivity) of the different pixels: that's already taken care of by flat fielding (in addition to vignetting, dust, ...).

The weighting I am thinking of weights the pixels based on their reliability (=noise or variance). To arrive at integrations with low noise, you give a low weight to noisy pixels and a high weight to reliable (low-noise) pixels. That's not about efficiency; it's about the noisiness or precision (http://en.wikipedia.org/wiki/Accuracy_and_precision) of the values measured by each pixel.
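The reliability weighting described here is just inverse-variance weighting; a toy numpy check (with invented sigmas) shows the noise advantage over equal weights:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two pixels measuring the same true value, one reliable, one noisy.
n = 100000
true_value = 50.0
reliable = rng.normal(true_value, 1.0, size=n)  # sigma = 1
noisy    = rng.normal(true_value, 3.0, size=n)  # sigma = 3

equal_w = (reliable + noisy) / 2.0              # equal weights
wa, wb = 1.0 / 1.0**2, 1.0 / 3.0**2             # inverse-variance weights
inv_var = (wa * reliable + wb * noisy) / (wa + wb)

print(round(equal_w.std(), 2))   # ~1.58 (= sqrt(1 + 9) / 2)
print(round(inv_var.std(), 2))   # ~0.95 (= sqrt(1 / (wa + wb)))
```

Down-weighting the noisy pixel gives a result that is even less noisy than the reliable pixel alone, which is the whole point of variance-based weights.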

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #26 on: 2013 September 27 07:02:20 »
Ignacio,

Does THELI do a separate flat fielding step, or is this integrated into the pixel weighting during integration? And what exactly do you mean by the term "efficiency" in "It tries to estimate the instrument efficiency at the pixel level, solely by using a master flat"? If you mean the quantum efficiency (responsivity, http://en.wikipedia.org/wiki/Responsivity) of the different pixels: that's already taken care of by flat fielding (in addition to vignetting, dust, ...).

The weighting I am thinking of weights the pixels based on their reliability (=noise or variance). To arrive at integrations with low noise, you give a low weight to noisy pixels and a high weight to reliable (low-noise) pixels. That's not about efficiency; it's about the noisiness or precision (http://en.wikipedia.org/wiki/Accuracy_and_precision) of the values measured by each pixel.

Georg

My understanding is that THELI does flat fielding in the traditional way (by dividing the subs by the master flat) and at the same time carries, in the weight matrix, a variance weight to be used during the final stacking. The first is a brightness normalization; the second, a noise minimization. Exactly as you explained in your example.

By efficiency I mean the net effect of all factors in the instrument (optical train, sensor, etc.) that transform a uniform photon flux into ADUs. This includes the effects of vignetting, dust, pixel QE, pixel gain, etc., everything.
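Putting Georg's model together with this: under pure shot noise, dividing a pixel by its normalized flat value m leaves it with variance x/m, so the inverse-variance weight is proportional to m itself, i.e. to the master flat value. A quick numerical check of this back-of-envelope reasoning (my own sketch, not anything documented by THELI):

```python
import numpy as np

rng = np.random.default_rng(5)

x, n = 10000.0, 200000   # true signal in electrons, samples per pixel

# For normalized flat value m, the raw pixel has Poisson mean m*x;
# flat fielding divides by m, leaving variance x/m. So 1/variance is
# proportional to m: the flat value itself is the natural weight.
results = {}
for m in (1.0, 0.5, 0.25):
    raw = rng.poisson(m * x, size=n).astype(float)
    calibrated = raw / m
    results[m] = calibrated.var() * m / x   # should be ~1 for every m

print({m: round(v, 2) for m, v in results.items()})
```

If this holds, weighting each pixel by its master flat value is exactly inverse-variance weighting for shot-noise-limited data, which would reconcile the two descriptions.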

I think we are saying basically the same thing.

cheers
Ignacio

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #27 on: 2013 September 27 07:42:36 »
...By efficiency I mean the net effect of all factors in the instrument (optical train, sensor, etc.) that transforms a uniform photon flux, into ADUs. This includes the effects of vignetting, dust, pixel QE, pixel Gain, etc. , everything.

Yes, I think we do. The value of the weighting matrix would then try to adjust for the change in noise levels due to flat fielding, plus some weight modifications for defects etc.

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #28 on: 2013 September 27 07:46:01 »

Yes, I think we do. The value of the weighting matrix would then try to adjust for the change in noise levels due to flat fielding, plus some weight modifications for defects etc.

Georg

Exactly!  :)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: THELI vs PI
« Reply #29 on: 2013 September 27 08:47:37 »
Hi Ignacio,

Quote
First off, this thread from the start was not intended as a "competition" between PI and ...

As long as PI exists, I necessarily have to be in a permanent state of competition. It is my responsibility. For most users this is a hobby, and that's great, but I am the guy who pays the invoices, so there is little room for humor :) If I have to work harder to make PI more competitive, rest assured that I'll try to get the job done at any cost.

Let me see if I can understand your description step by step:

Quote
1. a master flat is built in the standard way (by bias/dark calibrating a set of flats, and stacking them).

No problem.

Quote
2. this master flat is used to calibrate subexposures also in the standard way (by division), ...

No problem.

Quote
3. ... the master flat is normalized to a standard scale ...

The scale is---has to be---irrelevant, as long as the same scale is used for the whole data set, and as long as all images are properly normalized, that is, made statistically compatible. Let's assume that this is the case.

Quote
4. ... On the GW, one can automatically or manually set pixel weights to zero (i.e., for a bad column, cold pixel, etc.). ...

These are just cosmetic corrections and defect maps. Nothing special.

Quote
5. Now, these can be manually or automatically modified to reflect things that apply to a single subexposure, like satellite trails (by setting the pixel weights to zero along the trail on the weight image associated with the trailed subexposure), cosmic ray hits, etc.

Well, we prefer to implement pixel rejection on a more solid statistical basis (and we are working on new pixel rejection methods to improve our toolset in this field). Anyway, this is just rejection, so still nothing special.

Quote
6. These weight images or matrices are kept paired with each corresponding subexposure (two equal-size images), and go through the same geometric transformations as the paired subexposure when it is registered.

Ok, we are now getting closer to the core of the problem. So we have:

F = the master flat.

M = a scaled version of F, with some pixels set to zero to implement cosmetic correction, defect map, "rejection", etc.

I = the calibrated image being considered.

G() = a geometric transformation (essentially the solution to an image registration problem).

then we have:

M' = G(M)
I' = G(I)

Quote
7. When stacking, each (x,y) pixel of a registered sub in a stack is weighted with the value found in the corresponding (x,y) "registered" Weight image.

Weighting is scaling, i.e. multiplication, so:

I'' = I' * M' = G(I) * G(M)

where I'' is the registered and weighted image. Assuming that G() is a transformation that maps image coordinates (for example, an affine transformation, a homography, etc.), it is clear that:

G(x) * G(y) = G( x*y )

where '*' represents element-wise scalar multiplication. In simple words: in purely geometric terms and neglecting interpolation and roundoff errors, multiplying two images and transforming the result is equivalent to transforming each image and multiplying the transformed images. So why not apply the weighting operation before registration:

I'' = G( I * M )
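For a pure coordinate remapping G, this commutation is easy to verify numerically (a toy numpy check using a flip-plus-rotate as G; interpolation and roundoff are exactly what it ignores):

```python
import numpy as np

rng = np.random.default_rng(4)

# G is a pure coordinate remapping (flip + rotate); real registration
# adds interpolation, which this toy check deliberately leaves out.
def G(img):
    return np.rot90(np.fliplr(img))

I = rng.uniform(size=(16, 16))   # calibrated image
M = rng.uniform(size=(16, 16))   # weight matrix

lhs = G(I) * G(M)   # weight after registration
rhs = G(I * M)      # weight before registration
print(np.allclose(lhs, rhs))  # → True
```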

Then the problem is that I has been fully calibrated. Among other things, I has been divided by a normalized version of the master flat frame F, which is essentially the same object that we call M above. I still don't see the point of this procedure. I am ready to stand corrected and learn something new here, but either I am overlooking something essential (and probably pretty obvious to everyone except me), or there is "something more" that is not being exposed.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/