Author Topic: THELI vs PI  (Read 21623 times)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #30 on: 2013 September 27 09:09:35 »
I think you almost got it, Juan.

The final step is not about scaling the levels, as in your I'' = I' * M'; it is about weighting the pixel value "credibility" (variance) in the pixel stack, much as PI does for the whole image during ImageIntegration by means of noise evaluation. Pixels in I' with a low associated M' element are weighted less in the final weighted average of the pixel stack.

Ignacio

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: THELI vs PI
« Reply #31 on: 2013 September 27 09:44:06 »
Quote
Pixels in I' with a low associated M' element, are weighted less in the final weighted average of the pixel stack

... so pixels in I' are multiplied by the corresponding pixels of M'. Weighting is multiplying, which leads to I'' = I' * M' (or some transformation of this equation)...
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #32 on: 2013 September 27 10:35:31 »
Bear with me, Juan, I am starting to get a headache  :smiley:, but I would like to get to the bottom of this.

Let me try it this way, using your notation:

For a given pixel in the (x,y) position after registration, and for K subexposures, you have that (weighted average):

estimated I''(x,y) = sum for k=1 to K of ( I'(x,y)_k * M'(x,y)_k) / sum for k=1 to K (M'(x,y)_k)

Now, if the divisor, sum for k=1 to K (M'(x,y)_k), is normalized to 1 for all (x,y)s, then it appears that one could write this as I'*M', but with a different M' that no longer represents an affine transformation of the original master flat, right?
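As a toy numerical check of the equation above (made-up values; I and M named as in the discussion, not PI or Theli code):

```python
import numpy as np

# K registered subexposures I'_k with associated weight maps M'_k (made up).
rng = np.random.default_rng(0)
K, H, W = 5, 4, 4
I = rng.uniform(0.2, 0.8, size=(K, H, W))  # registered subexposures I'_k
M = rng.uniform(0.5, 1.0, size=(K, H, W))  # weight maps M'_k

num = (I * M).sum(axis=0)                  # sum over k of I'(x,y)_k * M'(x,y)_k
den = M.sum(axis=0)                        # sum over k of M'(x,y)_k
stacked = num / den                        # estimated I''(x,y)

# Normalizing the divisor to 1 folds it into the weights: the same result is a
# plain sum with per-pixel weights M'_k / den, which are no longer an affine
# transformation of the original master flat.
M_norm = M / den
assert np.allclose(stacked, (I * M_norm).sum(axis=0))
```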

Ignacio


Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #33 on: 2013 September 27 10:51:08 »
Sorry, one more thing I forgot to mention, that may clarify.

When you equate CosmeticCorrection and DefectMap to how the weight matrices work, it is somewhat different. In PI it is handled one subexposure at a time, and the correction is based on the values of neighboring pixels within that given subexposure. With the weight matrices, those bad pixels simply don't count (weight = 0) in the final stack.
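A toy contrast of the two behaviors, with made-up numbers:

```python
import numpy as np

# Weight-matrix approach: the bad sample simply does not count in the stack.
stack = np.array([0.50, 0.52, 0.95, 0.51])  # same (x,y) across 4 subs; 0.95 is a hot pixel
w = np.array([1.0, 1.0, 0.0, 1.0])          # weight 0 for the bad sample
stacked = np.sum(stack * w) / np.sum(w)     # hot value excluded from the average

# CosmeticCorrection-style approach: repair the bad pixel within its own
# subexposure, using that single frame's neighbors (here, a 3x3 median).
sub = np.array([[0.50, 0.51, 0.52],
                [0.49, 0.95, 0.50],         # center is the hot pixel
                [0.51, 0.50, 0.52]])
neighbors = np.delete(sub.ravel(), 4)       # the 8 neighbors, excluding the center
repaired = np.median(neighbors)             # replacement value for this sub only
```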

Ignacio

Offline mschuster

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1087
Re: THELI vs PI
« Reply #34 on: 2013 September 27 11:27:48 »
Currently, when pixels are scaled to match frame dispersions, their weights are scaled likewise in the integration. But when pixels are scaled by flats, their weights are not scaled. Should all pixel scalings be reflected in the weights?

If so, my integrations are multi-night, with a new master flat for each night. Some sort of multiple-master-flat normalization seems necessary; I am not sure how.

Mike

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #35 on: 2013 September 28 03:07:51 »
...Now, if the divisor, sum for k=1 to K (M'(x,y)_k), is normalized to 1 for all (x,y)s, then it appears that one could write this as I'*M', but with a different M' that no longer represents an affine transformation of the original master flat, right?...
I think that is the key point. While it does not really matter at which point you apply M_k to I_k (before or after applying G_k), in integration you need to divide by "sum for k=1 to K (M'(x,y)_k)" to get meaningful results. That's why you need the transformed M_k of the individual images.

Does PI have per-pixel weighting? So far I believed that ImageIntegration only weights full images by a per-image factor, not by individual per-pixel factors.
Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #36 on: 2013 September 28 07:58:28 »
I think that is the key point. While it does not really matter at which point you apply M_k to I_k (before or after applying G_k), in integration you need to divide by "sum for k=1 to K (M'(x,y)_k)" to get meaningful results. That's why you need the transformed M_k of the individual images.

Georg

Exactly!

Now, taking this concept a step further for Bayer sensors (this is not Theli anymore, just an idea), one could use such weight matrices to carry the individual color information of each Bayer plane through registration and stacking, postponing the color combine until after coaddition.

Something like this:
1. calibrate raw CFA data as usual
2. build the corresponding weighting matrices based on the master flat
3. Split the weighting matrices into three "colors" by making three copies of each one and zeroing the non-corresponding pixels in the RGGB weight array. You end up with three weighting matrices for each calibrated raw CFA subexposure: R000, 0GG0, and 000B, so to speak. Let's call them WR, WG, and WB.
4. Apply the necessary geometric transformations (registration) to each set of 4 images. How you get the geometry is an issue, since StarAlignment doesn't seem to work well on raw CFA images; one may need to debayer an auxiliary luminance frame to derive the geometry and then apply it. You end up with, say, RAWr, WRr, WGr, and WBr (r for registered).
5. Take RAWr and WRr for all subexposures and stack them. Hopefully there will be no holes; if there are, apply some interpolation now, or wait until the color combine (probably better, to make use of inter-channel info) to interpolate and fill them. Repeat for RAWr+WGr and RAWr+WBr.
6. Color combine, taking into account the quality of the color information at each pixel. I suspect this will require some form of per-pixel normalization to get even colors across the frame.

So, no need to debayer! The color combination and hole filling are done when the noise is low, and RGB color information covers most if not all pixels (particularly if an adequate dithering program is applied). Note also that until the last step only the actual raw data is carried through.
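The color-splitting of the weight matrices could be sketched roughly like this (a toy illustration with made-up sizes, assuming an RGGB layout with R at even row/even col and B at odd row/odd col; not Theli or PI code):

```python
import numpy as np

# Toy weight map derived from a made-up master flat.
H, W = 4, 6
master_flat = np.random.default_rng(1).uniform(0.8, 1.0, (H, W))
weight = master_flat / master_flat.max()

# Classify each pixel by its position within the 2x2 RGGB cluster.
rows, cols = np.indices((H, W))
is_R = (rows % 2 == 0) & (cols % 2 == 0)
is_B = (rows % 2 == 1) & (cols % 2 == 1)
is_G = ~is_R & ~is_B                       # the two G sites per cluster

WR = np.where(is_R, weight, 0.0)           # R-only weight map
WG = np.where(is_G, weight, 0.0)           # G-only weight map
WB = np.where(is_B, weight, 0.0)           # B-only weight map

# Every pixel keeps its weight in exactly one of the three color maps:
assert np.allclose(WR + WG + WB, weight)
```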

Am I making sense?

Ignacio

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: THELI vs PI
« Reply #37 on: 2013 September 28 08:09:48 »
Debayering in that sense only makes sense in a drizzle or superresolution scheme, IMO.

What is the standard deviation of small samples of the flat? I'm still not convinced that differences due to pixel sensitivity are large enough to make a significant impact on SNR for relatively large sets and/or "small" shifts (a few pixels).
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #38 on: 2013 September 28 09:00:00 »
What is the standard deviation of small samples of the flat?

In a center crop of about 250x250 pixels of a master flat, with no visible dust shadows, nor obvious hot/cold pixels, built from 30 calibrated subs (with a 200-frame master bias), I read a std deviation of about 0.5% of the mean. If a small dust shadow is included in the crop, it grows to about 0.8%.

Ignacio

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: THELI vs PI
« Reply #39 on: 2013 September 28 17:00:37 »
How many subs? I ask because, if we assume just photon noise, a single exposure should have a stddev of nearly 180 ADU at that intensity, and you are seeing something like 160 ADU. I still have doubts whether this measurement is dominated by noise or by real pixel-to-pixel differences in QE.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #40 on: 2013 September 28 21:58:23 »
Hi Carlos,

Let me give more info on the stats of my flat frames. My Canon is a 12-bit 1000D, so well-acquired flat frames average about 2000 ADUs in each channel (with some variation due to the light source and the sensor's spectral response). Mine typically range between 1900 and 2200, so pretty good.

A single flat at around the 2000 ADU level shows a standard deviation, in a center crop, of about 45-50 ADUs. A master-bias-calibrated flat shows a standard deviation of about 35-40 ADUs. And a stack of 30 calibrated flat frames takes it down to about 12-14 ADUs.

Now, the residual photon noise in the master flat should in theory be around sqrt(2000/30) = 8 ADUs, so what is left (assuming no correlation) is 9-11.5 ADUs. This can be attributed to imperfect bias calibration, dark current (though exposures are short, 1/6 sec., and at low sensor temp, -7°C), and pixel sensitivity variations (for whatever reason, including dust particles, variations in pixel microlensing, QE, electronic gain, etc.). This is hardly random: upon visual inspection it clearly shows some structure.

This is the 0.5% noise level I was referring to (i.e., ~10/2000).
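The arithmetic above in code form, assuming the noise terms add in quadrature (i.e., they are uncorrelated):

```python
import math

# All values in ADU, taken from the measurements quoted above.
mean_level = 2000.0
n_subs = 30
photon = math.sqrt(mean_level / n_subs)      # residual photon noise, about 8.2 ADU

# Measured std dev of the master flat crop was 12-14 ADU; subtract the photon
# term in quadrature to estimate the non-photon residual.
residuals = [math.sqrt(total**2 - photon**2) for total in (12.0, 14.0)]
# -> about 8.8 and 11.4 ADU, consistent with the 9-11.5 ADU range above,
#    and roughly 10/2000 = 0.5% of the mean level.
```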

best
Ignacio
« Last Edit: 2013 September 28 22:19:25 by Ignacio »

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #41 on: 2013 September 30 04:25:35 »
Hi,

I took the liberty of contacting Mischa Schirmer, one of the developers of Theli, to ask him exactly how Theli handles Bayer data, and how significant the effect of pixel noise weighting is in the final result. He kindly sent a detailed and clear response, which I thought interesting and worth sharing in this thread, with his permission.

"
>Specifically, at what point in the process are the raw images debayered (split into rgb colors)?

After they have been calibrated by the flat-field.

The flat-field itself is "homogenized" for detectors with a Bayer matrix. For each individual flat exposure, Theli calculates an average "RGGB" 2x2 pixel cluster, and then divides all such clusters in the flat by this average cluster. In this way I remove the variation in spectral sensitivity between color pixels, i.e. the tri-modal histogram becomes a unimodal histogram, from which one can calculate meaningful statistical values. Otherwise the image statistics can become highly unstable, causing problems at various places later on.

If you look at the combined master flat, you will see that the
characteristic Bayer pattern is largely suppressed.

This method also has another advantage: The debayering itself of the
target images (once they are flat-fielded) works better. Imagine you
took a twilight flat field (which is rather blue), then the red pixels
would have very low (and thus noisy) values. When you then flat-field
your target exposures, the red pixels may get very high values (because
they are divided by a low number), which can mess with various
debayering algorithms. I haven't done this analysis myself, but two of
my long-term users with DSLR cameras and color CCDs have spent a lot of
time investigating this. They also wrote the debayering program using
latest optimized algorithms, which I have implemented in Theli.

This master flat also becomes the global weight. It is difficult to
define a meaningful weighting process for Bayer data. Ideally, the
weight should reflect the relative sensitivities of pixels to each
other, responding to the same source of light. However, this is not the
case for color detectors, as the pixels are individually filtered and
thus are no longer comparable. The homogenization process as described
above removes most of this spectral dependence, and should e.g. reflect
a vignetting in your data nicely in the weight maps.

One might certainly argue about better weighting schemes for Bayer data, i.e. treating all red, green and blue pixel groups independently, but that would create a hell of a lot of rewriting for all subsequent software that uses the weights (object detection, resampling, coaddition). In my experience the effect of the weight maps is usually not very visible in a typical amateur astronomer's picture, unless e.g. vignetting is very significant and a large mosaic has been made.

Should you have any further ideas about this I'll be happy to discuss
them, but presently I think the weighting for Bayer data cannot be
improved much beyond this point.

> Is it split into rgb colors too, and applied independently to each color channel?

Theli did that for a long time, together with a lot of other trial and error for Bayer data, but the approach shown above appears to yield the best results.

>mischa
"

My conclusions from his response are that the effect of weighting maps is "not much visible in a typical amateur astronomer picture", and that there is value in "homogenizing" a master flat with a color cast (I usually do this through a special function in Fitsworks, although I am not sure if it's done as Mischa described here).
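The homogenization Mischa describes could be sketched like this (my reading of his description, with made-up numbers; not actual Theli code):

```python
import numpy as np

# A toy CFA flat where R and B sites respond less than G (blue-ish flat).
rng = np.random.default_rng(2)
H, W = 8, 8
flat = rng.normal(1.0, 0.01, (H, W))
flat[0::2, 0::2] *= 0.6                    # R sites (RGGB layout assumed)
flat[1::2, 1::2] *= 0.9                    # B sites; G sites keep gain ~1.0

# Average "RGGB" 2x2 cluster: one mean per position within the pattern.
mean_cluster = np.empty((2, 2))
for dy in range(2):
    for dx in range(2):
        mean_cluster[dy, dx] = flat[dy::2, dx::2].mean()

# Divide every 2x2 cluster in the flat by the average cluster. The tri-modal
# CFA histogram collapses toward a single mode around 1.0.
homogenized = flat / np.tile(mean_cluster, (H // 2, W // 2))
assert homogenized.std() < flat.std()
```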

cheers
Ignacio




Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: THELI vs PI
« Reply #42 on: 2013 October 05 14:25:18 »
Hey Juan, Carlos,
I think that with its pixel-based weighting THELI does something entirely different from anything we can do with PI today, and from what I see with DSLRs there seems to be a good reason to do it. Any plans to take a closer look at that for PI?
Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline mschuster

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1087
Re: THELI vs PI
« Reply #43 on: 2013 October 09 11:49:13 »
In StarAlignment interpolation, weighted combinations of pixels are calculated, using weights given by an interpolation kernel. The input pixels, of course, were scaled by flat-fielding, and so their noises were scaled too (both read and photon noise). Does it make sense to include a per-pixel scaled-noise weighting term in the interpolation, in conjunction with the kernel weights? I am thinking that ImageIntegration noise-weights pixels in the stack, so shouldn't StarAlignment do the same thing in the local interpolation neighborhood?
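A toy sketch of the idea (made-up values and a simplified four-tap kernel; not the actual StarAlignment implementation):

```python
import numpy as np

# Four neighbors of the target position and their kernel weights (made up).
values = np.array([0.50, 0.52, 0.51, 0.49])  # neighbor pixel values
kernel = np.array([0.4, 0.3, 0.2, 0.1])      # interpolation kernel weights (sum to 1)
noise_w = np.array([1.0, 1.0, 0.1, 1.0])     # third neighbor is noisy (e.g. low flat value)

plain = np.sum(values * kernel)              # standard kernel interpolation
combined = kernel * noise_w                  # kernel weight times per-pixel noise weight
noise_aware = np.sum(values * combined) / combined.sum()
# The noisy neighbor now contributes much less to the interpolated value.
```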
Thanks,
Mike

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: THELI vs PI
« Reply #44 on: 2013 October 10 06:29:15 »
In StarAlignment interpolation, weighted combinations of pixels are calculated, using weights given by an interpolation kernel. The input pixels, of course, were scaled by flat-fielding, and so their noises were scaled too (both read and photon noise). Does it make sense to include a per-pixel scaled-noise weighting term in the interpolation, in conjunction with the kernel weights?

This makes sense to me, and could make a difference in areas where there are near cold/hot pixels not picked up during CC. This also goes for debayering interpolation.

I am thinking that ImageIntegration noise-weights pixels in the stack, so shouldn't StarAlignment do the same thing in the local interpolation neighborhood?

ImageIntegration does full-frame noise weighting, not per-pixel noise weighting (not counting outlier rejection); hence this thread.

Ignacio