Questions about Generalized Extreme Studentized Deviate Test rejection

johnpane

I have three questions about the ESD rejection algorithm.

1) Assuming ESD low relaxation is set to 1 (presumably no effect), if outliers is set to 0.3, does that mean:
a) that 30% of the total pixel stack can be rejected by the combined effects of high clipping and low clipping?
b) that 30% of the total pixel stack can be rejected by high clipping and another 30% by low clipping?
c) that 30% of the above-median pixels can be rejected by high clipping, and 30% of the below-median pixels can be rejected by low clipping?
d) that 15% of the above-median pixels can be rejected by high clipping, and 15% of the below-median pixels can be rejected by low clipping?

Option (a) is implied by the tooltip, "The default value is 0.3, which allows the algorithm to detect up to a 30% of outlier pixels in each pixel stack." But this would mean that high clipping would be affected by turning on or off low clipping, which would certainly be an undesirable effect.

2) Is the ESD significance test applied to:
a) each individual potential outlier?
b) the entire pixel stack?

Option (b) is implied by the tooltip, reworded here for clarity: "0.01 means that a 1% chance of being wrong is acceptable when rejecting the null hypothesis (that there are no outliers in a given pixel stack)." That is, the tooltip implies that the test is all or nothing: either there are outliers that can be rejected or there are not. This would seem to give the user no control over how many outliers can be rejected. If there is one clear outlier with p < .000001, the null is rejected, but what other outliers get rejected in this case?

3) What is the relaxation parameter applied to:
a) a divisor for the outliers parameter?
b) a divisor for the significance parameter?
c) something else?

I see no hint in the tooltip. But if the answer is (a), how does that interact with the answer to (1) above?

Thank you,
John
 
Hello, I am still hoping to learn the answers to these questions if anyone knows. Thank you.
 
Hi John,

Sorry for the delay in getting back to you.

1) Assuming ESD low relaxation is set to 1 (presumably no effect), if outliers is set to 0.3, does that mean:
a) that 30% of the total pixel stack can be rejected by the combined effects of high clipping and low clipping?

ESD rejection doesn't work like sigma clipping and similar rejection methods, so the concepts of 'low' and 'high' pixels are not directly portable. The answer to this question is approximately a).

Option (a) is implied by the tooltip, "The default value is 0.3, which allows the algorithm to detect up to a 30% of outlier pixels in each pixel stack." But this would mean that high clipping would be affected by turning on or off low clipping, which would certainly be an undesirable effect.

Actually, the ESD algorithm does not 'know' whether it is rejecting low or high pixels. It works on absolute deviations from a central value (which is not the median, except for very small stacks), without taking signs into account. We introduce a relaxation factor for pixels below the central value in each iteration, which conditions the algorithm's behavior as a whole, but the decision whether to reject high and/or low pixels as a function of process parameters is taken at the very end of the process, when all outliers have already been detected.

This can be seen more clearly in our implementation. This is the main rejection loop:
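In outline, and only as a simplified Python sketch rather than the actual PCL source (the names are mine, and I use a plain mean and standard deviation where the real loop uses the trimmed statistics discussed below), the detection phase looks like this:

Code:
# Sketch of a generalized ESD detection loop with low relaxation.
# Detection only: nothing is clipped here; outliers are just counted.
import numpy as np
from math import floor, sqrt
from scipy.stats import t

def esd_detect(stack, max_outliers=0.3, alpha=0.05, low_relax=1.5):
    x = np.asarray(stack, dtype=float)
    n = len(x)
    r = floor(max_outliers * n)              # upper bound on outlier count
    removed = np.zeros(n, dtype=bool)
    candidates, num_outliers = [], 0
    for i in range(1, r + 1):
        rest = x[~removed]
        m, s = rest.mean(), rest.std(ddof=1) # real code: trimmed estimates
        if s == 0:
            break
        # Pixels below the central value 'see' an inflated dispersion:
        scale = np.where(x < m, low_relax * s, s)
        dev = np.abs(x - m) / scale
        dev[removed] = -1.0                  # ignore already-removed pixels
        k = int(dev.argmax())
        R_i = dev[k]
        removed[k] = True
        candidates.append(k)
        # Critical value lambda_i (Rosner 1983):
        p = 1 - alpha / (2 * (n - i + 1))
        tv = t.ppf(p, n - i - 1)
        lam = (n - i) * tv / sqrt((n - i - 1 + tv**2) * (n - i + 1))
        if R_i > lam:
            num_outliers = i                 # largest i with R_i > lambda_i
    return candidates[:num_outliers]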


and here is when low and high pixels are actually rejected depending on process parameters, that is, on whether low and/or high clipping is enabled:
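Again as a sketch only (the actual source differs), this final stage just filters the outliers that were already detected:

Code:
# Sketch: rejection happens only here, after detection is complete.
import numpy as np

def esd_clip(stack, outlier_indices, clip_low=True, clip_high=True):
    x = np.asarray(stack, dtype=float)
    center = np.median(x)                    # any central value would do here
    keep = np.ones(len(x), dtype=bool)
    for k in outlier_indices:
        if x[k] < center:
            keep[k] = not clip_low           # low outlier: dropped iff low clipping
        else:
            keep[k] = not clip_high          # high outlier: dropped iff high clipping
    return x[keep]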


As you can see, both sections of the algorithm are separate and independent of each other.

2) Is the ESD significance test applied to:
a) each individual potential outlier?
b) the entire pixel stack?

The generalized ESD test algorithm is quite well described here:


If I've understood it well, the answer to your question would be a mix of a) and b). The test is repeated for the whole stack, but removing potential outliers at each successive iteration to compute the next R_i test statistic.

Note that our implementation introduces two important changes to the original algorithm that make the ESD rejection method more robust and versatile. For robustness, we don't use the mean and the standard deviation of the sample, but a trimmed mean and the standard deviation of the trimmed sample at each iteration. In all of our tests the algorithm behaves much more consistently with this variation. For versatility, we introduce a relaxation factor that multiplies the standard deviation (the s variable in the algorithm description above) for pixels with values smaller than the trimmed mean. This allows us to apply a more tolerant rejection for dark pixels.
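For reference, the test statistics and critical values of the standard formulation (into which we substitute the trimmed statistics and the relaxation factor described above) are, in LaTeX notation, for $i = 1, \ldots, r$:

$$R_i = \max_k \frac{|x_k - \bar{x}|}{s}, \qquad \lambda_i = \frac{(n-i)\, t_{p,\, n-i-1}}{\sqrt{\left(n-i-1+t_{p,\, n-i-1}^2\right)(n-i+1)}}, \qquad p = 1 - \frac{\alpha}{2(n-i+1)},$$

where $\bar{x}$ and $s$ are recomputed after removing the $i-1$ most extreme values already found, and $t_{p,\nu}$ is the $p$ quantile of Student's t distribution with $\nu$ degrees of freedom. The number of detected outliers is the largest $i$ for which $R_i > \lambda_i$.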

That is, the tooltip implies that the test is all or nothing, either there are outliers that can be rejected or there are not.

This is not what this tooltip tries to communicate. The ESD significance parameter does not define a Boolean condition (reject or don't reject) over the whole stack. It works as a limit used to compute critical values (the lambda_i variables), which are compared to test statistics (R_i) in order to find the number of outliers. Increasing the ESD significance parameter rejects more pixels because the algorithm will tolerate a higher chance of mistakenly rejecting the null hypothesis (no outliers in the stack). It's a bit convoluted, but that's how it works.

The ESD outliers parameter defines an upper bound for the number of outliers that can be rejected in the sample. For example, for a stack of 10 pixels, if ESD outliers is equal to 0.3 this means that 0, 1, 2 or 3 outliers can be detected. Of course, this also means that in case there were an additional fourth outlier, it would pass unnoticed.

3) What is the relaxation parameter applied to:
a) a divisor for the outliers parameter?
b) a divisor for the significance parameter?
c) something else?

The answer is c). As I've noted above, the relaxation parameter multiplies the standard deviation of the (trimmed) sample at each iteration to compute test statistics for pixels with values smaller than the trimmed mean. Because the low relaxation parameter is >= 1, it makes the algorithm more tolerant when rejecting low pixels, since it 'sees' a higher dispersion for them. In a sense, what we are doing here is telling lies to the ESD test for a subset of pixels where we want less rejection.
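In pseudocode terms (a sketch with my own names, not the actual source), the deviation that feeds each test statistic is computed like this:

Code:
# Sketch only: low pixels divide by an inflated dispersion, so their test
# statistics shrink and the ESD test rejects them less often.
def esd_deviation(pixel, center, s, low_relax=1.5):
    if pixel < center:
        return (center - pixel) / (low_relax * s)   # low_relax >= 1
    return (pixel - center) / s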
 
Thank you very much for taking the time to reply, Juan. I understand much better now, though perhaps not perfectly yet. I apologize if there are misunderstandings or errors in the following comments.

1. It seems important for users to realize that adjusting the low relaxation can change the number of high pixels rejected as outliers. One reason for this may be readily apparent: if more low outliers are tolerated, more high pixels can be rejected before the allowed fraction of outliers is exhausted. But a second reason is less obvious: the relaxation factor is applied not only to the standard deviation for calculating low outliers, but also causes fewer low pixels to be trimmed in the trimmed mean calculation (line 925). This shifts the central value lower, making high pixels more likely to appear as outliers, as well as further decreasing the number of low outliers (on top of the parameter's effect as a multiplication factor on the sd). Finally, there is a third effect, also less obvious: if fewer low pixels are trimmed, the standard deviation for high rejection will increase, making fewer pixels qualify as outliers and somewhat counteracting the second effect. The combined effect on high rejection is unclear. I wonder if both the second and third effects should be avoided by not considering low relaxation in determining trimming?

For the tooltip for low relaxation, it would be helpful for users to know that it is a multiplier for the sd, so that they know what scale they are operating on. And if this parameter's effect on trimming is retained, I suggest that should also be made evident. The tooltip should also perhaps warn users that adjusting this parameter will affect not just low rejection but also high rejection. Finally, somewhere it should be made evident that turning on/off low (or high) rejection has no effect on determining which pixels are outliers, only on whether those outliers are rejected or retained. This has the surprising implication that, if low rejection is turned off, low relaxation will still affect high rejection.

2. In your example stack of 10 pixels, with ESD outliers equal to 0.3, my understanding is that three statistical tests are performed. The null hypothesis for the first test is that there are no outliers. If that null is rejected, one pixel is deemed an outlier. The null hypothesis for the second test is that there is no more than 1 outlier. If that null is rejected, a second pixel is deemed an outlier. Finally, the third null hypothesis is that there are no more than 2 outliers, and if that null is rejected, a third pixel is deemed an outlier. As such, I think the simplest way to explain alpha (ESD significance) in the tooltip is as the probability that too many pixels are deemed outliers.
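For example, the three critical values for that 10-pixel stack can be computed directly (using the standard formula; the trimmed implementation will differ somewhat):

Code:
# The three sequential ESD tests for n = 10 pixels, ESD outliers = 0.3,
# and ESD significance alpha = 0.05 (standard formula, for illustration).
from math import floor, sqrt
from scipy.stats import t

n, alpha = 10, 0.05
r = floor(0.3 * n)                           # at most 3 outliers are tested
for i in range(1, r + 1):
    p = 1 - alpha / (2 * (n - i + 1))
    tv = t.ppf(p, n - i - 1)
    lam = (n - i) * tv / sqrt((n - i - 1 + tv**2) * (n - i + 1))
    print(f"test {i}: H0 = at most {i - 1} outliers; reject if R_{i} > {lam:.3f}")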
 
the relaxation factor is applied not only to the standard deviation for calculating low outliers, but also causes fewer low pixels to be trimmed in the trimmed mean calculation (line 925).

Yes, but this is intentional, not a mistake IMO, despite the side effects that you have described. What we are doing with the ESD low relaxation parameter is to introduce a priori knowledge in the pixel rejection problem: that the probability of low outliers is systematically lower than the probability of high outliers for all pixel stacks. Hence we are modifying the expected statistical model that we are using for each sample of pixels.

The trimmed mean that we are using in line 926 provides a robust estimate of location for the pixel stack (side note: we use a trimmed mean instead of the median because we want to achieve both robustness to outliers and minimal standard error here). To be coherent with the modified model I've described above, we have to apply it in all relevant stages of the process: not just when we use it to scale deviations from the central value (line 935), but also when we compute the central value estimate itself (line 926).
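For reference, a generic asymmetric trimmed mean looks like this (illustrative Python, not the PCL code at line 926):

Code:
# Generic asymmetric trimmed mean: sort, drop extremes, average the core.
import numpy as np

def trimmed_mean(stack, trim_low, trim_high):
    s = np.sort(np.asarray(stack, dtype=float))
    return s[trim_low : len(s) - trim_high].mean()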

It is true that we are altering rejection of high pixels when we do this, but this side effect has little practical relevance, especially considering that optimal rejection parameters, when necessary, have to be found by manual trial-and-error work. Despite this, our tests have shown that our implementation of the ESD rejection algorithm can work remarkably well with its default parameter values in most cases.

It seems important for users to realize that adjusting the low relaxation can change the number of high pixels rejected as outliers.

IMO this is the type of information to be provided in the technical documentation of the ImageIntegration tool (which I have to write ASAP, since the current one is severely outdated), but I doubt it is of special interest in a tooltip text. Anyway, saying this in the tooltip isn't difficult, so we can add it.

This has the surprising implication that, if low rejection is turned off, low relaxation still will affect high rejection.

I agree that this can indeed be surprising, but as I've said above, the ESD low relaxation parameter introduces a priori knowledge in the whole rejection problem. In other words, these side effects are consequences of the modified model used to describe how outliers are distributed in each pixel stack. Of course we could disable ESD low relaxation when low rejection is disabled, but I'm not sure if introducing a 'special case' like this would really be a good idea. Something to think on, anyway.

As such, I think the simplest way to explain alpha (ESD significance) in the tooltip is the probability that too many pixels are deemed outliers.

Agreed. I'll try to make these tooltips easier to understand, thank you for these insights and suggestions.
 
One other thought..

At the default settings, the effect of the ESD outliers parameter seems likely to be far greater through its effect on the trimmed mean function than through limiting the number of rejected pixels. Essentially, at 0.3 it is trimming 30% of the high pixels and 20% of the low to compute the mean of 50% of the pixels. However, it seems very unlikely that so many pixels would pass the significance test that the 30% cap on rejection would come into play.

As such, if a user wants to reject fewer pixels, the alpha parameter has much greater leverage, but the tooltips as written might lead them to focus on reducing the outliers parameter. They would have to reduce it very substantially, perhaps down to 5 or 10%, before they see any effect, and such a reduction would of course vastly reduce the amount of trimming. Again, it makes me wonder why those two functions (amount of trimming to find the central value, and upper bound on the percentage of pixels rejected) are tied to the same parameter.
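Here is the arithmetic I have in mind (assuming the trimming counts follow the outliers and relaxation parameters this way; the actual rule may differ):

Code:
# Assumed trimming arithmetic for n = 60, outliers = 0.3, low relaxation 1.5:
from math import floor

n, outliers, low_relax = 60, 0.3, 1.5
trim_high = floor(outliers * n)              # 18 pixels trimmed high (30%)
trim_low = floor(outliers * n / low_relax)   # 12 pixels trimmed low (20%)
kept = n - trim_low - trim_high              # mean computed over 30 pixels (50%)
print(trim_low, trim_high, kept)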
 
Sorry to keep adding more thoughts.

Users not well-versed in statistics may not realize just how much leverage the relaxation parameter has. With the default settings, alpha = .05, which means that high pixels are rejected if they are about 2 sd or more from the central value, assuming a stack of 60 pixels. With relaxation of 1.5, low pixels have to be about 3 sd or more from the central value, making alpha ≈ .002 for low rejection. If users bump relaxation up to 2, they are effectively setting alpha for low rejection to ≈ .00001; if they set it to 3, effective alpha ≈ 0.00000001 (six sigma, but using a t-distribution with 60 df); and if they set it to 5, they are essentially setting alpha to zero for low rejection.

I wonder if the interface could display some calculations? Once the input frames have been loaded, n is known. The tool could calculate alpha_high and alpha_low (the sum of which will be lower than the ESD significance unless relaxation = 1), the range of pixel rankings that will be used for the trimmed mean calculation, and maybe more; see the sketch after the table below.

For example, assuming n=60, outliers=0.3, and significance=.05:

relaxation 1 -> alpha_high=0.025, alpha_low=0.025, trimmed mean calculated on pixels 19..42 (sorted low to high)
relaxation 1.5 -> alpha_high=0.025, alpha_low=0.002, trimmed mean calculated on pixels 13..42
relaxation 2 -> alpha_high=0.025, alpha_low=0.00001, trimmed mean calculated on pixels 10..42
relaxation 3 -> alpha_high=0.025, alpha_low=0.00000001, trimmed mean calculated on pixels 7..42
relaxation 5 -> alpha_high=0.025, alpha_low≈0, trimmed mean calculated on pixels 4..42
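A sketch of such a calculator (I am treating the critical deviation as a plain two-sided t quantile and reporting a one-sided tail for alpha_low, so the values it prints will differ a bit from my rough figures above):

Code:
# Rough calculator for the table above: effective low-side alpha and the
# sorted-pixel range used for the trimmed mean, as a function of relaxation.
from math import floor
from scipy.stats import t

n, outliers, alpha = 60, 0.3, 0.05
df = n - 2
thr = t.ppf(1 - alpha / 2, df)               # ~2 sd threshold for high pixels
for relax in (1.0, 1.5, 2.0, 3.0, 5.0):
    alpha_low = t.sf(thr * relax, df)        # implied low-side significance
    lo = floor(outliers * n / relax)         # low pixels trimmed
    hi = floor(outliers * n)                 # high pixels trimmed
    print(f"relaxation {relax}: alpha_low ~ {alpha_low:.2g}, "
          f"trimmed mean on sorted pixels {lo + 1}..{n - hi}")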

If I think 3 pixels in my 60-pixel stack seem reasonable to reject, I can get the trimmed mean to exclude them by setting relaxation to 5, but by doing that I virtually ensure they will NOT be rejected, because of the effective alpha.
 
@Juan Conejero, I have been experimenting with large-scale pixel rejection to remove satellite trails in the same witch head dataset you have been using to test LocalNormalization. I am currently using the ESD rejection algorithm and having difficulty getting it to reject enough pixels. (My understanding is that the large-scale rejection depends on the pixels already rejected by the pixel-rejection algorithm.)

In several iterations I have increased all three parameters to attempt to get more bright pixels rejected, most recently esd_outliers=0.25 esd_alpha=0.25 esd_low=1.80, yet I am still getting relatively few pixels rejected and the satellite trails are still strongly apparent in the integrated image. (I am just showing a crop of the full integration here.) Do you think I am rejecting enough pixels, or should I set esd_alpha even higher? Or maybe the lsr parameters are not correct for this dataset?

[Screenshot: crop of the integrated image showing residual satellite trails]

Code:
[2022-01-11 02:06:00] Integration of 142 images:
[2022-01-11 02:06:00] Pixel combination .................. Average
[2022-01-11 02:06:00] Output normalization ............... None
[2022-01-11 02:06:00] Weighting mode ..................... Custom keyword: NWEIGHT
[2022-01-11 02:06:00] Scale estimator .................... Biweight midvariance
[2022-01-11 02:06:00] Pixel rejection .................... Generalized extreme Studentized deviate
[2022-01-11 02:06:00] Rejection normalization ............ None
[2022-01-11 02:06:00] Rejection clippings ................ low=yes high=yes
[2022-01-11 02:06:00] Rejection parameters ............... esd_outliers=0.25 esd_alpha=0.25 esd_low=1.80
[2022-01-11 02:06:00] Large-scale rejection clippings .... low=no high=yes
[2022-01-11 02:06:00] Large-scale rejection parameters ... lsr_layers_low=2 lsr_grow_low=2 lsr_layers_high=2 lsr_grow_high=2
[2022-01-11 02:06:00]
[2022-01-11 02:06:00] * Available physical memory: 58.806 GiB
[2022-01-11 02:06:00] * Allocated pixel buffer: 6388 rows, 32.358 GiB
[2022-01-11 02:06:00] * Using 771 concurrent pixel stack(s), 11.744 GiB
[2022-01-11 02:06:00]
[2022-01-11 02:06:00] Analyzing pixel rows:     0 ->  6387: done
[2022-01-11 02:25:38] Generating large-scale high rejection maps: done
[2022-01-11 02:35:48] Integrating pixel rows:     0 ->  6387: done
[2022-01-11 02:37:43]
[2022-01-11 02:37:43] Pixel rejection counts:
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w100_NGC1909_Light_120_secs_101_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     1 :     33375   0.055% (        0 +     33375 =   0.000% +   0.055%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w104_NGC1909_Light_120_secs_088_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     2 :     29731   0.049% (        0 +     29731 =   0.000% +   0.049%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w103_NGC1909_Light_120_secs_096_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     3 :     33532   0.055% (        2 +     33530 =   0.000% +   0.055%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w102_NGC1909_Light_120_secs_095_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     4 :     31211   0.051% (        0 +     31211 =   0.000% +   0.051%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w101_NGC1909_Light_120_secs_082_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     5 :     29717   0.049% (        0 +     29717 =   0.000% +   0.049%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w101_NGC1909_Light_120_secs_087_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     6 :     30977   0.051% (        1 +     30976 =   0.000% +   0.051%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w100_NGC1909_Light_120_secs_100_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     7 :     29743   0.049% (        0 +     29743 =   0.000% +   0.049%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w100_NGC1909_Light_120_secs_081_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     8 :     33372   0.055% (        1 +     33371 =   0.000% +   0.055%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w100_NGC1909_Light_120_secs_054_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]     9 :     38475   0.063% (        0 +     38475 =   0.000% +   0.063%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w100_NGC1909_Light_120_secs_103_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]    10 :     44060   0.072% (        0 +     44060 =   0.000% +   0.072%)
...

[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w035_NGC1909_Light_120_secs_133_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   132 :    984830   1.610% (      773 +    984057 =   0.001% +   1.609%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w034_NGC1909_Light_120_secs_040_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   133 :   1149155   1.879% (     1385 +   1147770 =   0.002% +   1.876%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w033_NGC1909_Light_120_secs_064_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   134 :   1307026   2.137% (     1980 +   1305046 =   0.003% +   2.133%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w032_NGC1909_Light_120_secs_057_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   135 :   1298418   2.123% (     1843 +   1296575 =   0.003% +   2.120%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w029_NGC1909_Light_120_secs_145_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   136 :   2769181   4.527% (     7349 +   2761832 =   0.012% +   4.515%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w027_NGC1909_Light_120_secs_144_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   137 :   2423193   3.961% (     9570 +   2413623 =   0.016% +   3.946%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w023_NGC1909_Light_120_secs_034_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   138 :   2586939   4.229% (    21665 +   2565274 =   0.035% +   4.194%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w022_NGC1909_Light_120_secs_151_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   139 :   2647511   4.328% (    23650 +   2623861 =   0.039% +   4.289%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w021_NGC1909_Light_120_secs_074_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   140 :   2534268   4.143% (    26733 +   2507535 =   0.044% +   4.099%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w021_NGC1909_Light_120_secs_139_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   141 :   2941686   4.809% (    25823 +   2915863 =   0.042% +   4.767%)
[2022-01-11 02:37:43] /Users/pane/NOT BACKED UP/ic2118 witch head new workflow/nsg/w016_NGC1909_Light_120_secs_045_c_cc_d_R_r_nsg.xisf
[2022-01-11 02:37:43]   142 :   4213094   6.887% (    73922 +   4139172 =   0.121% +   6.767%)
[2022-01-11 02:37:43]
[2022-01-11 02:37:43] Total :  50275985   0.579% (   242826 +  50033159 =   0.003% +   0.576%)
 
I further increased rejection parameters as follows:

Code:
[2022-01-12 13:00:56] Rejection parameters ............... esd_outliers=0.25 esd_alpha=0.32 esd_low=1.30
[2022-01-12 13:00:56] Large-scale rejection clippings .... low=no high=yes
[2022-01-12 13:00:56] Large-scale rejection parameters ... lsr_layers_low=2 lsr_grow_low=2 lsr_layers_high=2 lsr_grow_high=4

This yielded more rejection but the satellite trails are still about the same.

Code:
[2022-01-12 13:42:18] Total :  57835478   0.666% (  6668776 +  51166702 =   0.077% +   0.589%) [RED]
[2022-01-12 14:23:59] Total :  52439607   0.604% (  5381670 +  47057937 =   0.062% +   0.542%) [GREEN]
[2022-01-12 15:09:19] Total :  45514979   0.524% (  4097562 +  41417417 =   0.047% +   0.477%) [BLUE]
 
Hi John,

I have also been observing these residual artifacts after rejection in the integrated image, in all of the tests I am doing with your data set, no matter which rejection algorithm I use. Large-scale rejection works perfectly, but it does not fix the problem either.

In my opinion what we have here is just reduced SNR in the areas where a lot of satellite trails have been rejected on multiple frames. There are many of these trails over a small region of the sky in your frames, and where the number of averaged pixels has been reduced significantly, the integrated image is noisier, yielding visible artifacts. I'm afraid the only 'solutions' I can devise to fix these artifacts are cosmetic (such as using rejection maps as masks for noise reduction). I'll keep investigating, because I still want to make more tests and this is indeed an interesting case.
 
Hello @johnpane

I'm facing the exact same problem you described in message #9 on the exact same target!
[Screenshot: crop of an integrated image showing similar residual satellite trails]


This is the first time I have had trouble with satellite track rejection and, as I have some difficulty understanding the statistical and mathematical intricacies involved, I was wondering whether you finally found a strategy to reject those tracks?
Maybe the only way is to capture more data?

... perhaps you could just persuade Elon Musk to abandon StarLink ...

In this particular case the problematic satellites are not the low-altitude constellation satellites but the geosynchronous satellites. They are, in my opinion, much more challenging to reject because they all pass over the exact same place in the sky. Depending on your latitude on Earth (because of parallax), these geosynchronous satellites can be seen between 0° and around +/- 8° of declination (and sadly the Orion constellation is exactly in this region).
 
I'm pretty sure that the artifact in the image of post #13 is induced by a reflection caused by a bright star outside the field of view (in this case: Beta Orionis, Rigel).

I experienced the same when trying to capture NGC 1909, the Witch Head Nebula, with my Takahashi FSQ / ZWO ASI071MC Pro in a 2-tile mosaic. In my case, only the northern tile was affected by the reflection, and the framing was critical (see https://www.cloudynights.com/topic/685684-uuh-now-that-is-frustrating/ ).

You can try to shadow the light of the bright star (a dew shield extension). It may be worthwhile checking the OTA for reflecting parts. If you detect any shining surfaces, these should either be painted with an anti-reflective paint or masked with black velour adhesive foil.

These kinds of artifacts cannot be avoided by dithering and are difficult to remove by image processing.

Bernd
 
I conducted multiple attempts through May 2022 (during which time PixInsight was evolving rapidly), but was not able to remove the trails completely. Here are some notes I took:

Code:
• Was able to reduce satellite trails more with RCR rejection, but at a steep cost in PSFSW and PSFSNR, etc. of integrated result.
• Increasing ESD rejection seems to reject everything but the satellite trails; similar for RCR rejection.
• Default ESD rejection is .3/.05 (outliers/significance)
• Turning off low rejection did not have any effect on satellite trails with ESD .1/.1 or .4/.2
• .1/.05 and .1/.1 are visually indistinguishable but .1/.05 has better PSFSW, PSFSNR, M*
  .1/.05 and .3/.05 are visually indistinguishable except .3/.05 has dimmer halos on brightest stars;
  .1/.05 has better PSFSW, PSFSNR, M* than .3/.05 (greater difference than when comparing .1/.05 to .1/.1)
  .3/.1 further reduces halos, reduces some satellite trails, and has even worse PSFSW, PSF SNR, M*, N*
    ... in sum, .1/.05 seems best for ESD defaults but .3/.1 may be best for this particular dataset.

Although I used metrics like PSFSW, PSF SNR, etc. to evaluate the integrated results, I am unsure that those calculations are valid on integrated images. I asked about this in this post, but did not receive a reply.


I'm pretty sure that the artifact in the image of post #13 is induced by a reflection caused by a bright star outside the field of view (in this case: Beta Orionis, Rigel).
I highly doubt this. The satellites are visible in individual subframes and the artifacts in the integrated results are aligned perfectly with them.
 
I'm pretty sure that the artifact in the image of post #13 is induced by a reflection caused by a bright star outside the field of view (in this case: Beta Orionis, Rigel).
I assure you Rigel is not involved here. Here is a quick movie that demonstrates the problem quite well: https://we.tl/t-ZjJFWlgDE3

I conducted multiple attempts through May 2022 (during which time PixInsight was evolving rapidly), but was not able to remove the trails completely. [...]

Thank you very much John, I will try this as soon as possible and try to balance the cost on my final integration.
 
I highly doubt this. The satellites are visible in individual subframes and the artifacts in the integrated results are aligned perfectly with them.
I referred to nico1038's image shown in post #13, not to your image in post #9.

Bernd
 
I referred to nico1038's image shown in post #13, not to your image in post #9.

Additional clues that these are caused by geosynchronous satellites: each streak is aligned to the declination axis. In @nico1038's image they are about a half-degree north relative to my image. This is due to parallax ... I guess @nico1038's image was taken from a location south of my +40.6° latitude.
 