Author Topic: SFS Generated Weight vs Image Weighting Built Into Image Integration Process?  (Read 1188 times)

Offline Terry Danks

  • PixInsight Addict
  • ***
  • Posts: 137
Is there really a significant advantage to using weighting expressions calculated by SubframeSelector (SFS) and written into the FITS headers, compared with the weighting the ImageIntegration process does on its own when noise evaluation is active?

My concern: any expression I formulate to weight subs may well be less sophisticated than the weighting built in by the far more competent programmers who wrote the integration process in the first place, so there's a real chance I'd be doing more harm than good.

Offline RickS

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1298
Whether there is a significant advantage depends on what you're trying to accomplish.  If you want optimal SNR then ImageIntegration noise weighting will provide that.  You can manage FWHM, Eccentricity, etc. to a limited degree by deciding which subs to include and exclude.  If you want greater control over the balance of SNR with other quality metrics then SFS weighting will allow you to achieve that.

You can't really do any harm by trying different options.  I normally do a simple noise weighted integration as well as at least one SFS weighted version.  I compare the results (also with SFS) and pick the version that gives me what I consider the best compromise.  Sometimes I'll do further tweaking of the weighting expression and/or make changes to the subs that I include in the integration.  I may even combine multiple integrations with PixelMath, e.g. blending the bright parts of an FWHM-optimized integration with the dim areas of a maximum SNR integration.
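One way to do that kind of blend, as a rough Python/numpy sketch rather than actual PixelMath (the array names and the 0.05/0.20 thresholds are just placeholders, not values I actually use):

    import numpy as np

    def blend(sharp, deep, lo=0.05, hi=0.20):
        # Ramp from 0 (faint) to 1 (bright) based on the deeper stack's brightness,
        # then take the sharper stack where the image is bright (stars) and the
        # higher-SNR stack where it is faint (background).
        m = np.clip((deep - lo) / (hi - lo), 0.0, 1.0)
        return m * sharp + (1.0 - m) * deep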

Data is precious and hard to come by.  IMO, it's worth putting some effort into the preprocessing stage to maximize the value of what you have.

Cheers,
Rick.

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
Rick and Terry, I just noticed this thread after updating this one on a similar subject:

https://pixinsight.com/forum/index.php?topic=13123.0

In short, registration edges in my images (caused by dithering and slight centering variations from night to night) seem to dramatically throw off quality estimates, even though the registration offsets are quite small.  I assume this problem also applies to the noise evaluation weighting done in the ImageIntegration process, and if so, that seems to mean that weights must be calculated on unregistered images in SFS.  If SFS isn't used, the weights aren't calculated until ImInt, and since they're then calculated on the registered images, they'll be wrong.

Have you run into this problem with registration edges?

Kevin

Offline RickS

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1298
Hi Kevin, I only ever calculate SFS weights on unregistered subs on the basis that the interpolation done by registration is going to reduce the accuracy of FWHM calculations.  I haven't noticed any problems with noise evaluation in ImageIntegration being confused by rough edges but I have never looked closely at this.

Cheers,
Rick.

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
I see some conceptual mistakes here that need rigorous information to sort out. Here are a few important facts that should be pointed out:

- Black regions caused by frame registration cannot alter the summary statistics computed by ImageIntegration, since this tool uses robust statistical methods. With the sole exception of the average deviation from the median, all of the scale estimators used by ImageIntegration (MAD, BWMV, PBMV, Sn, Qn, IKSS) are robust. Even the average deviation, although strictly speaking not a robust estimator, is very resilient to outliers as implemented in II because it performs a trimmed average. (A quick numerical illustration follows after this list.)

- Noise estimates are computed using statistically robust algorithms. Again, these estimates are immune to black regions caused by image registration.

- If our preprocessing tools and scripts are used correctly, noise estimates are always computed from unregistered, uninterpolated pixel data, just after image calibration. Noise estimates are calculated either by the ImageCalibration process (for non-CFA data) or by the Debayer process (for data mosaiced with a CFA). Both tools store noise estimates as image properties and private FITS header keywords. The ImageIntegration tool reads these metadata items, if they exist, and uses the corresponding values to generate robust and accurate noise-based image weights.

- If the data have not been preprocessed correctly in PixInsight, then the images don't have valid noise estimates stored as metadata. When this happens, noise evaluation has to be performed directly on the data loaded by ImageIntegration. In that case the noise estimates cannot be rigorous, because image registration interpolates pixel data. Pixel interpolation acts like a variable low-pass filter that smooths the image and generates aliasing artifacts, especially when registration has to correct for small rotation angles, as happens in most practical situations. With interpolated data, noise estimates cannot characterize the raw data well. For example, an image used as the registration reference has not been interpolated (just calibrated) when it reaches ImageIntegration. If no noise metadata are available, its noise estimate will be much higher than the estimates calculated for the registered frames, which does not reflect what actually happens in the original data set. While these non-rigorous estimates are usually better than nothing, you cannot expect the same quality in the final result.
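Here is a quick sketch with synthetic data (not the actual ImageIntegration implementation; the frame size, noise level and border width are arbitrary) illustrating the first and last points above: a robust scale estimate is essentially unaffected by black registration edges, while interpolation lowers any noise estimate made afterwards.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic calibrated sub: flat background plus Gaussian noise, sigma = 0.01.
    frame = 0.1 + rng.normal(0.0, 0.01, (512, 512))

    # The same sub with black edges, as left by registration of a shifted frame.
    bordered = frame.copy()
    bordered[:16, :] = 0.0
    bordered[:, :16] = 0.0

    def mad_sigma(x):
        # Robust sigma estimate from the median absolute deviation.
        med = np.median(x)
        return 1.4826 * np.median(np.abs(x - med))

    print(np.std(frame), np.std(bordered))        # classical sigma jumps from ~0.010 to ~0.026
    print(mad_sigma(frame), mad_sigma(bordered))  # MAD-based sigma stays at ~0.010

    # A crude stand-in for registration interpolation: averaging neighbouring
    # pixels low-pass filters the noise, so estimates made on registered data
    # come out lower than on the raw calibrated frame.
    smoothed = 0.25 * (frame[:-1, :-1] + frame[1:, :-1] + frame[:-1, 1:] + frame[1:, 1:])
    print(mad_sigma(frame), mad_sigma(smoothed))  # drops from ~0.010 to ~0.005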
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
A combination of conceptual mistakes and user errors, no doubt!  :)  But your patient explanations help to reduce both.  Thanks for pointing out that noise estimates are stored in the metadata during calibration, so they don't change after registration.  I didn't know that.

However, from checking the process console, I see that the weight assigned by ImInt does change between unregistered and registered frames.  For example, an unregistered frame weighted <1 may be weighted >1 after registration, and a frame with a relatively high weight before registration may drop to a low weight afterwards.  IIUC, that means the pixel interpolation done in registration somehow affects the MRS noise evaluation algorithm.  It's very safe to say that the details of that algorithm are VASTLY over my head, so just the bottom-line question: will this matter to the final result?

BTW, I'm looking into these details mostly because I'd like to do registration in BPP, since that saves several steps over StarAlign.  It looks like I can't do that for images where I want to assign a custom weight in SFS, but often I don't feel any need to do that.

Kevin

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
In case you want to see the actual numbers, here are a few frames that I compared, with their weightings extracted from the process console.  Columns are Frame #, Weight of the Unregistered Frame, and Weight of the Registered Frame.  Note the very high value for Unreg Frame 4.  I can't explain that - it looks just like the rest of the frames.

Frame #   UnregWeight   RegWeight
   1      1             1
   2      0.99856       1.00160
   3      0.99818       0.98555
   4      1.23653       0.98929
   5      0.99619       0.98466
   6      0.99756       0.99553
  10      0.98391       0.98320
  11      0.99218       0.98810
  12      0.99057       0.98818
  13      0.99521       0.99354
  15      1.00489       1.00045
  16      1.00518       1.00114
  17      1.00577       1.00009
  18      1.00651       1.00073
  19      1.00464       0.99730
  20      1.00594       0.99884

Offline drmikevt

  • PixInsight Addict
  • ***
  • Posts: 112
Could it be that the variations he is reporting are due to the weights being calculated relative to a reference image?  I think the weightings may be relative to the first image, or reference image, of the stack.  If the first image in the list were different for the two runs, then it would be like comparing apples to oranges.

Can someone please let me/us know if that is correct - that the weightings are dependent on the first image in the ImageIntegration list?

Mike

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
It's the same image in both runs, but in the second run it's been registered.  The registration reference frame itself isn't in this stack.

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
I analyzed integrations based on both pre- and post-registration weightings, and found that the integration based on frames weighted pre-registration had a slight edge in both FWHM and SNR.  I used 25 calibrated (bias, flats, darks) 2-minute green frames from a 694-chipped camera for this.  I made two copies of these frames.  In copy one, I weighted the frames pre-registration with the following expression:


(25*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin))
   + 0*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))
   + 25*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))
   + 50

(That's the weighting I've been using lately - I haven't really tried out weighting for eccentricity yet, so that's why it has a zero weight assigned.)  I then registered and stacked those frames, using the SFS-assigned weight in Image Integration.
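In case it helps to see what that expression actually does, here's a rough Python sketch with made-up FWHM and SNRWeight numbers (the eccentricity term is dropped because I give it zero weight):

    # Three hypothetical subs: (FWHM in arcsec, SNRWeight)
    subs = [(2.1, 14.0), (2.6, 11.5), (3.4, 9.0)]

    fmin = min(f for f, s in subs)
    fmax = max(f for f, s in subs)
    smin = min(s for f, s in subs)
    smax = max(s for f, s in subs)

    for f, s in subs:
        w = (25*(1 - (f - fmin)/(fmax - fmin))
             + 25*(s - smin)/(smax - smin)) + 50
        print(round(w, 2))

    # With these made-up numbers the best sub scores 100 and the worst 50, so
    # every frame keeps at least half the weight of the best one; the 25/25/50
    # split in the expression is what sets that balance.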

In copy two, I weighted after registration, and then stacked the same way with ImageIntegration.  I also ran ImInt a third time, just using noise evaluation as the weighting, so that the SFS-assigned weights were ignored.

I ran the three integrated images through SubframeSelector to determine FWHM and SNRWeight.  The results are attached.  The FWHMEccentricity and NoiseEvaluation scripts in PI gave essentially identical readings.

Kevin

Offline astrovienna

  • PixInsight Addict
  • ***
  • Posts: 123
    • The Hole in the Trees Skybox
Here are the results from a larger data set of 164 frames.  These were all pretty good frames, as I culled 30 others that showed some haze.  I weighted using the same expression as before, but I used more tools to compare the results this time.  The SubframeSelector estimates are pretty much the same as in the first test.  Among the Image Analysis scripts, CBNR and FWHMEccentricity were consistent with SubframeSelector's estimates.  The NoiseEvaluation script showed all three methods were nearly identical, and the SNR estimates from the process console agreed with that.  The Median Noise Reduction estimate, though, showed that pre-registration weighting was best.

So is the difference noticeable to the eye?  Not to mine, but you be the judge.  The three integrated frames are here:

https://www.dropbox.com/s/sl7np9r2t9xtdty/Weighting%20Comparison%20Frames.zip?dl=0

I think the bottom line is unchanged from my initial conclusions:  the effect of weighting is no more than 1-2%, at least with decent quality subframes, and may be lower than that.  I'd still be very interested in seeing results from others.

Kevin

Offline RickS

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1298
Hi Kevin,

I have seen some quite significant improvements in FWHM from SFS weighting, but I normally use a fairly aggressive formula:

80*(1/(FWHM*FWHM)-1/(FWHMMax*FWHMMax))/(1/(FWHMMin*FWHMMin)-1/(FWHMMax*FWHMMax))
      +15*((SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))
      +5*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))
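To show why I call it aggressive, here's a quick Python sketch (made-up FWHM values) comparing the 1/FWHM^2 term with the linear FWHM ramp used earlier in the thread:

    fwhm = [2.0, 2.5, 3.0, 3.5, 4.0]   # hypothetical FWHM values in arcsec
    fmin, fmax = min(fwhm), max(fwhm)

    def quadratic(f):
        # The FWHM term from the formula above, normalized to 0..1.
        return (1/f**2 - 1/fmax**2) / (1/fmin**2 - 1/fmax**2)

    def linear(f):
        return 1 - (f - fmin)/(fmax - fmin)

    for f in fwhm:
        print(f, round(linear(f), 2), round(quadratic(f), 2))

    # The linear ramp falls 1.00 -> 0.75 -> 0.50 -> 0.25 -> 0.00, while the
    # quadratic term falls 1.00 -> 0.52 -> 0.26 -> 0.10 -> 0.00, so softer
    # subs are penalized much harder.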

Cheers,
Rick.

Offline RobF2

  • PixInsight Addict
  • ***
  • Posts: 189
  • Rob
    • Rob's Astropics
What a great thread.  Rick, I particularly appreciate you explaining when SFS might be useful for integration weighting based on additional non-noise parameters. 

Using a Chinese mount as my imaging platform invariably means there are times when I have wonderfully tight, round stars, and frustrating times when that is most definitely not the case for some subs (even if atmospheric conditions remain similar). 

I wasn't sure where to start with formulae for SFS, but again, some great insights and explanations here...
FSQ106/8" Newt on NEQ6/HEQ5Pro via EQMOD | QHY9 | Guiding:  ZS80II/QHY5IIL | Canon 450D | DBK21 and other "stuff"
Rob's Astropics