Doubt about the use of SPFC

Marcelofig

Well-known member
Sorry for my ignorance, but reading the notes about SpectrophotometricFluxCalibration (SPFC) in the latest version of PI released today, it is mentioned that it will complement MARS and will be integrated with ImageSolver.

But it is not clear to me what it is for or what exactly its function is. How do we use it now? Is it a complement to, or a replacement for, SPCC?
 
Hi Marcelo,

SPFC has nothing to do with ImageSolver other than the fact that it requires astrometrically solved images.

SPFC won't complement the MARS-based tools that we'll release. We are using SPFC to process the MARS reference data we are currently acquiring, as well as user-contributed MARS data.

SPFC cannot replace SPCC and has nothing to do with color calibration, although both processes use similar photometric and spectral analysis techniques.

Let me describe what SPFC does in a (hopefully) simple way. Imagine we have two linear images, A and B. Both images have been acquired using the same instruments (telescope, camera, filters, etc.), under identical conditions (transparency, seeing, temperature, etc.), and have been calibrated the same way. The only difference is in the exposure time: A was exposed for 20 minutes, and B was exposed for 10 minutes.

If we measure the flux (pixel values) registered for the same star on A and B, we get two scalars (numbers) that we can call FA and FB. Since the images are linear, we have:

FA/FB = 20/10 = 2

If we repeat the same task for many stars registered on both A and B, we will obtain the same value of 2 for all of them, as long as we exclude saturated stars and stars that are too dim, where the uncertainty is too high to acquire reliable measurements.
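
For illustration, here is a minimal Python sketch of this measurement (the names, thresholds, and values are made up for the example; this is not SPFC's actual code):

```python
import numpy as np

def scale_factor(flux_a, flux_b, sat_level=0.9, min_snr=10.0, noise_b=1e-3):
    """Median flux ratio FA/FB over usable stars measured on both images."""
    flux_a = np.asarray(flux_a)
    flux_b = np.asarray(flux_b)
    # Exclude saturated stars and stars too dim for a reliable measurement.
    usable = (flux_a < sat_level) & (flux_b < sat_level) \
             & (flux_b / noise_b > min_snr)
    # For a perfectly linear pair of images every ratio would be identical;
    # in practice we take a robust central value.
    return np.median(flux_a[usable] / flux_b[usable])

# Two ideal images differing only in exposure time (20 min vs 10 min):
fb = np.random.uniform(0.01, 0.5, 200)  # star fluxes on the 10-minute image
fa = 2.0 * fb                           # star fluxes on the 20-minute image
print(scale_factor(fa, fb))             # -> 2.0
```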

This FA/FB quotient is what we call a scale factor. It is the quantity that tells us how strong A is compared to B in terms of the intensity of the registered signal. Scale factors are very important because they, along with noise measurements, allow us to compare and weight images in terms of signal-to-noise ratio. They also allow us to normalize images by making them statistically compatible, which is essential for many analysis tasks, including photometry, statistical outlier rejection, gradient correction based on reference data (such as MARS), and mosaic construction.
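
To make the normalization idea concrete, a tiny sketch (again illustrative only; actual normalization is considerably more involved):

```python
import numpy as np

a = np.random.normal(0.2, 0.01, (100, 100))        # stronger image
b = a / 2.0 + np.random.normal(0, 0.005, a.shape)  # half the signal, own noise

scale = 2.0            # FA/FB, measured as above
b_matched = scale * b  # bring B to A's flux scale
# Once matched, A and B are statistically compatible and can be compared
# pixel by pixel, e.g., for outlier rejection during integration.
print(np.median(a) / np.median(b_matched))  # ~ 1.0
```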

If things were as simple as above, calculating scale factors would be trivial: just divide the exposure times. Unfortunately, this is not the case because our assumptions here are unrealistic. In practice, acquisition conditions vary widely, and many factors can complicate flux and noise measurements and any comparison among images and data sets.

The LocalNormalization tool calculates very accurate and robust scale factors. It calculates them for a set of images with respect to a given image, which is the one selected as the normalization reference. This means that scale factors computed with LN are only valid (and meaningful) for the data set where they have been calculated. You cannot compare two scale factors calculated with respect to different references.
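
A tiny numeric illustration of why reference-relative scale factors cannot be compared across data sets (all values made up):

```python
f_a, f_ref1 = 4.0, 2.0  # image A measured against reference 1
f_b, f_ref2 = 4.0, 1.0  # image B measured against reference 2

s_a = f_a / f_ref1  # 2.0, meaningful only within data set 1
s_b = f_b / f_ref2  # 4.0, meaningful only within data set 2
# s_a != s_b even though A and B registered the same flux: the two
# numbers live on different scales and cannot be compared directly.
```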

The new SPFC tool also calculates (even more) accurate and robust scale factors, but it does that with respect to the set of spectra provided by the Gaia DR3 star catalog. Since we are working with spectra, the tool needs to know the exact filters we have used to acquire the images and the efficiency curve of the sensor we have used in order to compare photometric measurements with catalog spectral data. Besides its higher accuracy (because we are using a high-quality reference), the huge advantage of this process is that it produces universally comparable scale factors. Since the Gaia spectra reference can be common for all measured images, we can compare SPFC scale factors among images acquired with different instruments, different filters, and even, when applicable, images of different regions of the sky. This is a powerful step forward.
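
In essence, each per-star scale sample compares a measured flux with a synthetic flux predicted from the catalog spectrum through the filter and sensor response. A minimal sketch of that prediction step, with toy curves standing in for real filter and QE data (not SPFC's actual code):

```python
import numpy as np

def synthetic_flux(wavelength_nm, spectrum, filter_curve, qe_curve):
    """Rectangle-rule integral of spectrum x filter x QE on a uniform grid."""
    dw = wavelength_nm[1] - wavelength_nm[0]
    return float(np.sum(spectrum * filter_curve * qe_curve) * dw)

wl = np.linspace(400.0, 700.0, 301)                 # nm
spectrum = np.exp(-((wl - 550.0) / 80.0) ** 2)      # toy stellar spectrum
filt = ((wl > 500.0) & (wl < 600.0)).astype(float)  # toy green filter
qe = np.full_like(wl, 0.8)                          # flat sensor QE

predicted = synthetic_flux(wl, spectrum, filt, qe)  # catalog-side flux
measured = 1.7e-2                                   # flux measured on the image
print(measured / predicted)                         # one per-star scale sample
```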

I hope this clarifies all of these important concepts and helps you understand the significance of the development work we are carrying out.
 
Thanks for this clear explanation.
I assume that variable stars (inherently or because of a companion star) and possibly stars with transiting planets must be removed from the comparison list, right?
-- Jean-Marc
 
Hi Jean-Marc,

SPFC calculates robust and efficient scale factors by accurately rejecting outliers in the sets of photometric measurements. For example, here we have a typical SPFC graph:

[Attached screenshot: Screenshot_20240623_123053.png, a typical SPFC graph]

Note the curved tails in the graphs for each channel. These tails are represented here after outlier rejection (so the actual rejected tails are much larger and worse) and are formed by photometric measurements that don't fit the expected linear response when compared to Gaia DR3/SP data. We perform outlier rejection and efficient estimation of parameters with our implementation of robust Chauvenet rejection (see here and here).
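
For readers curious about the idea, here is a much-simplified iterative sketch in the spirit of Chauvenet rejection, using a median/MAD estimator for robustness (the actual implementation is considerably more sophisticated):

```python
import numpy as np
from scipy.stats import norm

def chauvenet_reject(x, max_iter=50):
    """Iteratively reject samples failing Chauvenet's criterion."""
    x = np.asarray(x, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    for _ in range(max_iter):
        n = keep.sum()
        center = np.median(x[keep])                           # robust location
        sigma = 1.4826 * np.median(np.abs(x[keep] - center))  # MAD scale
        if sigma == 0:
            break
        # Chauvenet: reject a sample if the expected number of equally
        # deviant values, under a normal model, falls below 1/2.
        prob = 2.0 * norm.sf(np.abs(x - center) / sigma)
        new_keep = keep & (n * prob >= 0.5)
        if new_keep.sum() == keep.sum():  # converged
            break
        keep = new_keep
    return keep, center, sigma

# A core of consistent scale samples plus a deviating tail:
samples = np.concatenate([np.random.normal(2.0e-2, 3.5e-3, 1000),
                          np.random.normal(5.0e-2, 1.0e-2, 50)])
keep, scale, sigma = chauvenet_reject(samples)
print(scale, sigma, keep.sum())
```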
 
In my view the graphs generated by the SPFC tool are unusual: I am used to cumulative distribution functions and probability density functions being displayed with the random variable (here: scale factor) as the independent variable (X axis), and the probability (here: number of sources) as the dependent variable (Y axis). The SPFC tool represents it the other way around. Shouldn't the axes be interchanged?

Bernd
 
I am used to cumulative distribution functions and probability density functions being displayed with the random variable (here: scale factor) as the independent variable (X axis), and the probability (here: number of sources) as the dependent variable (Y axis).
This may have the general "sigmoid" appearance of a cumulative plot, but it is not; it is a "sorted data" plot. As such it will be monotonically increasing from left to right, but can otherwise be any shape. The "inverse sigmoid" curve just happens to be how this data behaves (it could equally have been linear or "sigmoid", like a cumulative plot with the maximum gradient near the centre). I find the choice of axes completely natural.

Perhaps some label for the horizontal axis might help - but I'm not sure what ("sorted star index"?)
 
This may have the general "sigmoid" appearance of a cumulative plot, but it is not; it is a "sorted data" plot.
In the red channel, there are 4705 scale values. One could equally label this axis from 0 to 100 %. In my view this is equivalent to a frequency: about 69 % of the scale values are in the gray region (scale: 2.0e-02 ± 3.5e-03, frequency from 750/4705 = 16 % to 4000/4705 = 85 %).

This resembles a particle-size distribution of a pigment. What is the fundamental difference? In both cases, the random variable is sorted, and its frequency is plotted versus the random variable.

Bernd
 
It is not frequency that is sorted, it is value (of the calculated scale for each individual star). This is simply not a distribution plot - it is a value plot.
 
The primary purpose of these graphs is to help evaluate fitting quality/uncertainty. A description is available by clicking the "?" button at the top right corner. As Fred has pointed out, the graphs show non-rejected scale samples sorted by value in ascending order. Ideally, the graphs would show a straight horizontal line, including all samples. In practice, there is always some slope, and typically, there are two tails of deviating samples. The gray area represents dispersion at the one-sigma interval, and the solid horizontal line indicates the scale estimate.
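
For anyone who wants to reproduce the style of the plot, a rough sketch (assumed layout, not the tool's exact rendering), using Fred's suggested axis label:

```python
import numpy as np
import matplotlib.pyplot as plt

samples = np.random.normal(2.0e-2, 3.5e-3, 4705)  # surviving scale samples
samples.sort()                                    # sort by value, ascending

scale = np.median(samples)                        # scale estimate
sigma = 1.4826 * np.median(np.abs(samples - scale))

plt.plot(samples, color="red", lw=1)               # sorted per-star scales
plt.axhline(scale, color="black")                  # scale estimate
plt.axhspan(scale - sigma, scale + sigma,
            color="gray", alpha=0.3)               # one-sigma band
plt.xlabel("sorted star index")
plt.ylabel("scale")
plt.show()
```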
 