Photometry-Based Color Calibration in PixInsight

Tutorial by Vicent Peris (PTeam/OAUV) and Juan Conejero (PTeam)


In version 1.8.5 of PixInsight we present PhotometricColorCalibration (PCC), a new tool to apply a white balance to deep-sky images based on photometric measurements of stars. In this article we describe the PCC tool, how it works, and some of its distinctive features. We also give some practical usage recommendations.

PCC is a very special tool for several reasons. Besides the quality of our implementation, what really makes PCC unique is the fact that it materializes our philosophy of color in deep-sky astrophotography, and primarily, our philosophy of image processing: astrophotography is documentary photography, where there is no room for arbitrary manipulations without the basic documentary criterion of preserving the nature of the objects represented. In deep-sky astrophotography, we understand color as a means to control the representation of information in the image. Following documentary criteria, such representation must be justified by properties of the objects photographed. This excludes, in our opinion, the classical concept of "natural color" based on the characteristics of the human vision, as applied to daylight scenes.

In the PCC tool, the default white reference is based on the average spectra of Sb, Sc and Sd galaxies. The average of these galaxies covers the entire range of stellar spectral types and populations, so it can be considered as the best unbiased white reference, truly representative of the observed deep sky. Along with this default reference, PCC provides a rich set of precalculated white references, including spiral galaxies, elliptical galaxies, and most stellar spectral types. This allows you to select the most appropriate white reference in special cases, where you decide to maximize the information represented for some particular objects in the image. The possibility to choose among a rich set of white references is also useful to understand the nature of the data, by analyzing color variations as a function of the properties of the represented objects. In this sense, PCC also materializes our vision of image processing as a creative and enjoyable activity: the how and the why are actually more important than the final product.

The Meaning of Color in Astronomy

The color of an astronomical object is defined by measuring its light intensity in two wave bands. We measure the light passing through two filters, convert the recorded signals to magnitudes, and subtract these magnitudes. This difference in magnitudes is what we call a color index. The color index measures the ratio of light intensity between two filters covering different bands of the light spectrum: the larger the difference in magnitudes, the larger the ratio and, consequently, the stronger the color.
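The magnitude conversion and color index computation described above can be sketched in a few lines. This is a minimal example with arbitrary flux units and a zero point of 0 for simplicity; real photometry must calibrate both:

```python
import math

def magnitude(flux, zero_point=0.0):
    """Convert a measured flux into an astronomical magnitude.
    The zero point is instrument/filter dependent; 0 here for simplicity."""
    return zero_point - 2.5 * math.log10(flux)

def color_index(flux_1, flux_2):
    """Color index = difference of magnitudes measured through two filters."""
    return magnitude(flux_1) - magnitude(flux_2)

# A star recording half as much signal in B as in V has a positive,
# "red" B-V index: -2.5 * log10(0.5) ≈ 0.753 magnitudes.
bv = color_index(0.5, 1.0)
```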

A well-known color index is the B-V index in the Johnson photometric filter set. In the graph below we can see the spectra of different stars, from the young and very blue O-type stars, to the very red M-type stars.

Some star spectra represented over the Johnson B and V filter bands.

As you can see, the difference in brightness between star types is stronger in the blue part of the spectrum, while it tends to a minimum in the green region. The difference in brightness measured through the blue (Johnson B) and green (Johnson V) filters thus gives us a very effective way to evaluate how cold or warm the color of a star is.

How PCC Works

The goal of PCC is to apply an absolute white balance to the image. Among other important things, this means that the white reference does not need to be present in the picture. This is possible because we know the colors of the white reference in the photometric filter system used by a given star catalog. To calculate these colors, we use spectral templates of a given set of white references and synthesize their color indexes for any given pair of photometric filters.

PCC implements standard color calibration procedures used in astronomy. This calibration methodology is always applied when you want to perform a precise light measurement of a star, since it takes into account the effects of the color of the star on the measurements made with each photometric filter. Even if you are using a standard set of photometric filters, this calibration allows you to apply finer corrections to the measurements and convert your instrumental magnitudes to standard magnitudes. To apply these corrections, we compare the recorded colors of the stars in a photometric system to the colors of the same stars in a reference catalog. PCC performs these color comparisons to calculate the RGB weights for the target image.

Current versions of the PCC tool use the AAVSO Photometric All-Sky Survey (APASS)[1] star catalog as the reference to calculate white points. The PCC implementation includes a list of color indexes for a set of predefined white references. You can choose one of these references from the White reference drop-down list.

B-V versus r'-V color indexes of several white references, including some galaxies and stars.

The color indexes of these white references have been calculated using data from observational spectral libraries.[2] [3]

The color indexes are calculated using two different pairs of filters used in the APASS catalog:

  • Johnson B minus Johnson V for the blue part of the visible spectrum. This difference is used to calculate the balance between the green and blue filters in the image.
  • Sloan r' minus Johnson V for the red part of the visible spectrum. This difference is used to calculate the balance between the green and red filters in the image.

The three filters used to calculate object colors in the APASS catalog.

Provided that we have the required spectra, we can predict the color of any white reference in this three-filter set. But the key to the calibration process is to compare the colors from the stellar catalog to the colors in the picture. PCC performs two color index comparisons:

  • The blue minus green color index in the image is compared to the Johnson B minus Johnson V color index from the catalog. This allows us to calculate the weight of the blue channel in relation to the green channel.
  • The red minus green color index in the image is compared to the Sloan r' minus Johnson V color index from the catalog. This allows us to calculate the weight of the red channel in relation to the green channel.

These two comparisons usually result in a distribution very close to a linear function, with negligible second-order and higher terms. In the figure below you can see typical graphs generated by PCC.

Interactive PCC graphs show the color data used to calculate the scaling of RGB channels. The upper graph is used to calculate the weight of the red channel in relation to the green channel, while the bottom graph is used to calculate the weight of the blue channel in relation to the green channel. In both graphs, the X axis corresponds to the color index of stars from the catalog, and the Y axis represents the color index of the same stars measured on the image.
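The near-linear relation displayed by these graphs can be reproduced with synthetic data. In this sketch the slope and offset are hypothetical stand-ins for an instrument's spectral response, and a plain least-squares fit is used for brevity; the actual PCC routine uses the robust fit described later in this article:

```python
import random

# Hypothetical catalog color indexes (B-V, in magnitudes) for a set of stars.
catalog_bv = [0.0, 0.3, 0.65, 0.9, 1.2, 1.5]

# Simulated instrumental color indexes measured on the image: a linear
# function of the catalog values (slope and offset stand in for the
# instrument's spectral response) plus a little measurement noise.
random.seed(1)
image_bg = [0.8 * c + 0.15 + random.gauss(0.0, 0.005) for c in catalog_bv]

# Ordinary least-squares line fit, for brevity; PCC uses a robust fit.
n = len(catalog_bv)
mx = sum(catalog_bv) / n
my = sum(image_bg) / n
slope = sum((x - mx) * (y - my) for x, y in zip(catalog_bv, image_bg)) \
        / sum((x - mx) ** 2 for x in catalog_bv)
offset = my - slope * mx
```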

Now we have a functional relationship between the color indexes of the star catalog and those of the image. So if we know the color indexes of the selected white reference in the catalog, we can transform them to color indexes in the image, as shown in the next figure. As noted above, the white reference does not need to be present in the picture.

The white reference represented on the graphs generated by the PCC tool (the average spiral galaxy reference in this particular case).

Since the calculated image color indexes tell us the light ratios between the blue/green and red/green channel pairs, we can use them to derive the required RGB weights.
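This last step can be sketched as follows, with hypothetical fit parameters and an assumed sign convention for the instrumental color index (blue magnitude minus green magnitude); the catalog B-V of 0.65 is an illustrative value, not an actual PCC constant:

```python
def image_color_index(catalog_ci, slope, offset):
    """Map a catalog color index to the image's instrumental color index
    using the fitted white balance line (hypothetical slope and offset)."""
    return slope * catalog_ci + offset

def channel_weight(image_ci):
    """A color index of m1 - m2 magnitudes corresponds to a flux ratio
    F1/F2 = 10^(-0.4 * CI); the weight that equalizes the two channels
    is the inverse of that ratio (sign convention assumed here)."""
    return 10.0 ** (0.4 * image_ci)

# Hypothetical values: fitted line (0.8, 0.15) and a catalog B-V of 0.65
# for the selected white reference.
ci_img = image_color_index(0.65, slope=0.8, offset=0.15)
blue_weight = channel_weight(ci_img)
```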

White References

PCC includes a selectable list of white references. The default one is an average spiral galaxy. This white reference has been generated by averaging the spectra of intermediate types of spiral galaxies: Sb, Sc, and Sd. The extreme types Sa and Sdm were not included in this average. This average results in a spectrum close to an Sc galaxy, as depicted below.

The average spiral galaxy white reference, applied by default in the PCC tool.

For the vast majority of deep-sky images, the average spiral galaxy is, in our opinion, the best unbiased white reference in terms of maximization of information representation through color. This is consistent with the documentary criteria that define our deep-sky color calibration methodology, which are described in more depth in two online articles.

Below you can find some examples. All of the images shown in these examples have received the same histogram stretch and color saturation enhancement after applying PCC, using the HistogramTransformation and CurvesTransformation tools, respectively.

Image comparison: uncalibrated color / after PCC with the average spiral galaxy white reference.

PCC color calibration of M51. Image credits: Vicent Peris, Jack Harvey, Steven Mazlin, Juan Conejero, Carlos Sonnenstein, CAHA, Fundación Descubre, DSA, OAUV.

Image comparison: PCC with the average spiral galaxy white reference / PCC with the Sb spiral galaxy white reference / PCC with the G2V star white reference.

PCC color calibration of M81. Sometimes it can be useful to fine tune the white reference to a specific galaxy type. In this case, we neutralize the color of M81 by establishing the white point to the spectrum of an Sb spiral galaxy. The average spiral galaxy white reference gives a slightly reddish color, while the G2V calibration results in a strong bias towards the red. Image credits: Frank Willburn.

PCC vs. ColorCalibration

Whether you use PCC or ColorCalibration depends on the goal of the color representation in your image. PCC puts the color of the subject into perspective according to a homogeneous, standardized, absolute white reference. On the other hand, ColorCalibration applies a local white reference by maximizing the tonal representation within the selected white reference pixels.

Image comparison: PCC with the average spiral galaxy white reference / ColorCalibration using the M74 galaxy as white reference.

Absolute and local color calibrations of M74 performed with the PhotometricColorCalibration and ColorCalibration tools, respectively. Image credits: Vicent Peris, José Luis Lamadrid, Jack Harvey, Steven Mazlin, Oriol Lehmkhul, Ivette Rodríguez, Juan Conejero, CAHA, Fundación Descubre, DSA, OAUV.

Image comparison: uncalibrated color / PCC with the average spiral galaxy white reference.

PCC color calibration of M20. This is an example where the ColorCalibration tool may fail because of the warm dominant colors in the local stellar population. Image credits: Stephen Ruhl.

Image comparison: uncalibrated color / PCC with the average spiral galaxy white reference.

PCC color calibration of LDN673. Another example where it would be difficult to choose the right white reference with the ColorCalibration tool, because of the strong dominant color of the local star population. Image credits: Vicent Peris, Jack Harvey, CAHA, Fundación Descubre, OAUV.

Image comparison: uncalibrated color / PCC with the average spiral galaxy white reference.

PCC color calibration of NGC6164. PCC corrects the slight green misbalance in the stars of this picture. Image credits: Thomas Maca, Gerhard Bachmayer, Gerald Wechselberger.

Image comparison: uncalibrated color / PCC with the average spiral galaxy white reference.

A close-up of NGC6164 on the same image shown in the previous example. Please always remember that you don't need to have the white reference object represented in your image. The colors of the white references are already known, and the RGB weights are calculated through photometry to neutralize the selected white reference in your picture. Image credits: Thomas Maca, Gerhard Bachmayer, Gerald Wechselberger.

Image comparison: uncalibrated color / PCC with the average spiral galaxy white reference.

A wide-field image of the Milky Way. Image credits: Georg Viehoever.

PCC Specific Features

In this section we describe a number of specific features that characterize the current implementation of the PCC tool. Knowing these features is important to understand the quality of our implementation and the results it achieves.

Aperture Photometry

The core of PCC is the AperturePhotometry script, designed by PTeam members Andrés del Pozo and Vicent Peris. This photometry script has several important features used in the current PCC implementation:

  • The AperturePhotometry script accepts a list of files for an automated analysis of a series of images. A text-based photometry file is then generated with the measurements of all stars detected in all of the images. This allows PCC to execute AperturePhotometry once to analyze the three RGB channels and then collect the photometry of each channel in a single step.
  • The AperturePhotometry script automatically calls the ImageSolver script to calculate an astrometry solution for the image.
  • AperturePhotometry and ImageSolver have a special, non-interactive working mode, allowing for a completely automated and unattended solution. With a given set of initial parameters, the astrometry and photometry can be run without showing the graphical interfaces of both scripts.
  • In PCC, the local sky background level for each star is calculated using multiscale analysis tools. This allows us to perform a precise sky level subtraction. As opposed to the classical aperture photometry method, we calculate the local sky background level including the pixels over the star, not just within an annulus around it, since the multiscale algorithms allow for a very accurate subtraction of the stars. The multiscale median transform algorithm[6] [7] is used in the current script implementations.
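As a rough one-dimensional illustration of why a median-based background model can look "through" a star, consider the toy example below. The real implementation uses the two-dimensional multiscale median transform, not this simple running median:

```python
def median_filter(signal, radius):
    """Simple 1-D running median; a stand-in for the multiscale median
    transform used by PCC (which works on 2-D images at several scales)."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

# A flat background of 10 with a narrow "star" on top.
profile = [10.0] * 20
for i, bump in zip(range(8, 13), [2.0, 8.0, 20.0, 8.0, 2.0]):
    profile[i] += bump

# A median window wider than the star rejects it, recovering the local
# background under the star itself, not just within an annulus around it.
background = median_filter(profile, radius=5)
```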

Image comparison: original image / local background model computed with the multiscale median transform.

An example of local background calculation with the current PCC/AperturePhotometry implementation.

Hybrid C++/JavaScript Implementation

PhotometricColorCalibration is the first tool making use of a new feature implemented in the PixInsight/PCL development platform as of PixInsight core version 1.8.5: the possibility to execute and evaluate JavaScript source code directly from modules written in the C++ programming language.

PixInsight has a modular architecture where image processing and file format support capabilities are implemented as external, installable modules. This modular system uses a client-server organization: the PixInsight core application implements the server part, and all modules communicate with the core through a low-level application programming interface (API), implemented as a set of C language callback functions and data structures. Since the core API uses the standard C calling convention, a PixInsight module could in principle be written in virtually any programming language. The PixInsight Class Library (PCL) is a large set of C++ classes for development of PixInsight modules using a high-level, object-oriented, multiplatform framework.

Along with the PCL/C++ framework, PixInsight has always provided the PixInsight JavaScript Runtime (PJSR) integrated in the PixInsight core application. Besides a tight integration of JavaScript scripting with core resources, including the graphical user interface, PJSR provides nearly the same image processing functionality as the PCL/C++ framework, along with the possibility to execute all processes and use all image file formats defined by all installed modules. This makes PJSR an extremely powerful and versatile programming environment in PixInsight, and has made possible the existence of a large, rich and ever-growing set of JavaScript scripts included in the standard PixInsight distribution.

While any installed process has been executable from JavaScript since early versions of PixInsight, the opposite has not been true until recently: PCL/C++ modules were unable to communicate with the JavaScript runtime. This has changed completely with version 1.8.5. The possibility to execute JavaScript source code from PCL/C++ modules, including existing code and dynamically generated code, opens a door to exciting new implementations and development lines, and makes it possible to reuse the large base of excellent tools already implemented in JavaScript. PCC does exactly this with the ImageSolver and AperturePhotometry scripts.

Linear Fitting

As described above, the set of color indexes computed for stars in the image and the color indexes retrieved for the same stars from a photometric catalog fit almost linearly, with negligible second-order terms in most practical applications of PCC. Obviously, a really good linear fitting routine is a critical component of the whole color calibration process. Contrary to what one might think without an in-depth analysis, the choice of a linear fitting algorithm is not trivial in the case of PCC.

As in any real-world application of image processing and analysis algorithms, the data PCC has to deal with is, to put it mildly, less than ideal. The main problem is, as you probably have already imagined, outliers, and the only solution is, as an informed reader already knows, robust statistics. In the case of PCC we have two different and uncorrelated sources of outlier data: the input image and the photometric catalog. High noise, calibration errors, and artifacts such as strong light pollution gradients, contribute to degrade the accuracy of measurements on the image. As clearly shown by the graphs included in this article, the employed photometric catalogs, despite being excellent works in every way, also have errors that cannot be overlooked. Without a robust implementation, all of these factors would make the PCC task unfeasible.

We perform the fitting task in two stages. The image-catalog pairs of color indexes are first filtered by a one-step robust rejection procedure. We use the Qn scale estimator of Rousseeuw and Croux[4] to reject all data pairs where at least one component lies more than three sigmas from the median of the distribution. The distribution of values in these vectors is quite asymmetric in general. The Qn estimator does not depend on a previous estimate of location and computes statistical scale from pairwise differences, so it is robust to skewed distributions.
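A simplified sketch of this rejection step follows. The Qn estimator below omits the finite-sample correction factors of the full algorithm, and the three-sigma threshold is applied per component as described; the data pairs are invented for illustration:

```python
from itertools import combinations

def qn_scale(values):
    """Qn scale estimator of Rousseeuw and Croux: the k-th order statistic
    of all pairwise absolute differences, scaled for consistency with the
    standard deviation of a normal distribution. Finite-sample correction
    factors are omitted in this sketch."""
    n = len(values)
    diffs = sorted(abs(a - b) for a, b in combinations(values, 2))
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * diffs[k - 1]

def reject_outliers(pairs, sigmas=3.0):
    """One-step rejection: drop pairs where either component lies more
    than `sigmas` times the Qn scale from the median of its column."""
    kept = pairs
    for col in (0, 1):
        vals = sorted(p[col] for p in kept)
        med = vals[len(vals) // 2]
        scale = qn_scale(vals)
        kept = [p for p in kept if abs(p[col] - med) <= sigmas * scale]
    return kept

# (catalog, image) color index pairs; the last pair is a gross outlier.
data = [(0.1, 0.2), (0.2, 0.3), (0.3, 0.35), (0.4, 0.45),
        (0.5, 0.55), (0.6, 0.6), (5.0, 0.7)]
clean = reject_outliers(data)
```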

After the initial rejection, we fit a linear function to the surviving color index pairs. Linear regression routines are typically employed by most implementations facing similar problems. Instead of a least squares fit, we have implemented a robust line fitting algorithm based on minimization of absolute deviation.[5] This algorithm is already being used in several standard PixInsight tools, with excellent results in terms of resilience to outliers and fitting accuracy.
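A minimal least absolute deviation line fit in the spirit of the referenced algorithm can be sketched as follows. For a fixed slope the optimal intercept is a median of the residuals; this sketch then minimizes the convex objective over the slope by ternary search, rather than the root-finding scheme of Numerical Recipes:

```python
def lad_fit(xs, ys, lo=-10.0, hi=10.0, iters=200):
    """Robust line fit minimizing the sum of absolute deviations."""
    def cost(slope):
        # Optimal intercept for a fixed slope: a median of the residuals.
        res = sorted(y - slope * x for x, y in zip(xs, ys))
        intercept = res[len(res) // 2]
        return sum(abs(r - intercept) for r in res), intercept

    # Ternary search over the slope; valid because cost is convex.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1)[0] <= cost(m2)[0]:
            hi = m2
        else:
            lo = m1
    slope = (lo + hi) / 2
    return slope, cost(slope)[1]

# A clean line y = 0.8x + 0.15 with one gross outlier; a least-squares
# fit would be pulled toward the outlier, the LAD fit is not.
xs = [0.0, 0.3, 0.65, 0.9, 1.2, 1.5]
ys = [0.8 * x + 0.15 for x in xs]
ys[3] = 5.0  # outlier
slope, intercept = lad_fit(xs, ys)
```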

Interactive Graphs

No data modeling tool should be implemented without resources to facilitate a critical evaluation of the obtained results. In the case of PCC, along with robust estimates of dispersion in the fitted functions, we provide graphical representations of the data points used by the color calibration process and the fitted linear function.

These graphs are very useful to analyze the quality of the performed color calibration task. On each graph, the X axis corresponds to color indexes found in the catalog, while the Y axis shows color indexes calculated from image pixel values. Each dot corresponds to a measured star, and the straight line is the fitted white balance function. Outlier stars can be easily identified as dots that depart considerably from the fitted line. The more tightly the dots cluster around the fitted line the better but, as described above, PCC's linear fitting routines have been designed to be very robust, so these outliers, even in the most unfavorable practical cases, should not degrade the quality of the color calibration process under normal working conditions.

Robust line fitting is critically necessary when the data points used to evaluate the white balance function show considerable dispersion and a large proportion of outliers. Fitting straight lines to the point clouds shown on these graphs is not the easiest task. The robust line fitting routine implemented in our PCC tool does the job remarkably well.

Top: Interactive graphs, showing the difficulty of this test case.

Middle: Before applying PCC.

Bottom: After PCC with the average spiral galaxy white reference.

PCC graphs are generated dynamically as interactive HTML5 documents using the excellent Dygraphs JavaScript graphing library. HTML5 renditions are performed on WebView PCL controls, where the underlying implementation in the PixInsight core application uses Qt's QWebEngineView class, which in turn is based on Google's Chromium web browser. The graphs allow user interaction with zooming and panning capabilities.

Usage Notes

Image Requirements

Keep in mind that this tool has been designed to work with linear images acquired through filters with passbands similar to those used by the APASS catalog. This means mostly broadband RGB filters. We cannot expect any robust color representation when using narrowband filters, or filters located in the UV or IR wavelength ranges. Please do not attempt to use PCC once the image has been stretched.

The images should also be completely and correctly calibrated; flat-field correction is especially important. Results will also be more robust if the image is free from gradients, especially multiplicative gradients, because they affect the brightness and color of the stars at a local level.

Online Services

PCC uses the online VizieR service, or one of its mirror servers, to retrieve star coordinates. As with any online service, one or more of these servers can fail from time to time. When this happens, PCC usually gets garbage or unusable data and ends by throwing an error message. Most of the time, the only solution is to wait for the server to work properly again. You can also try selecting an alternate mirror server from the corresponding drop-down list available in PCC.

Another common problem is the time required to download star catalog data, which can be quite long, even more than 10 minutes, for very wide fields. This happens because the online databases are slow to serve data for bright stars distributed over a large region of the sky. As a reference, the Milky Way picture in the examples shown above took 15 minutes to calibrate.


The Astrometric Solution

The ImageSolver script is not a blind solver. This means that it needs an initial set of parameters to work. You should know the approximate focal length, pixel size and coordinates of the center of the image. If you don't know the coordinates of the object, press the Search Coordinates button and enter the object's name or identifier to retrieve them from the Internet.

You don't need to input the exact focal length or pixel size of your imaging train, but you need approximate values. For instance, an input focal length of 1000 mm when the real one is only 500 mm may cause the tool to fail. Please keep in mind that you absolutely need the astrometric solution if you want to run the photometry of the image.

In very wide fields you could need to tweak the astrometry parameters. As a reference, these were the settings used to solve the astrometry for the Milky Way picture used in the examples shown above:

Image Parameters section:

  • The coordinates were set to those of M11, an open cluster near the center of the image.
  • The Focal length parameter was set to 12 mm.
  • The Pixel size parameter was set to 5.8 µm.

Plate Solving Parameters section:

  • The Distortion correction checkbox was checked.

Advanced Plate Solving Parameters section:

  • The Projection system parameter was set to Stereographic, which is usually a better approximation to the distortions present in wide-field images.
  • The Noise reduction parameter was set to 2 because the image was very noisy. The high noise caused the StarAlignment tool to fail because a big amount of noise structures were detected as stars. Always consider increasing this parameter when you have a noisy image and the astrometry solver fails.
  • The Alignment device parameter was set to Polygon matching.

If you are using drizzle, please remember to change the focal length or the pixel size according to the Scale parameter value used in DrizzleIntegration. For instance, if using a Scale value of 2, you should then multiply the focal length by 2 or divide the pixel size by 2.
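The adjustment can be checked with the standard small-angle plate scale formula: image scale in arcseconds per pixel ≈ 206.265 × pixel size (µm) / focal length (mm). A small sketch, using the wide-field example's parameters (12 mm lens, 5.8 µm pixels):

```python
def image_scale(focal_mm, pixel_um):
    """Approximate image scale in arcseconds per pixel, from the
    small-angle plate scale formula: 206.265 * pixel (µm) / focal (mm)."""
    return 206.265 * pixel_um / focal_mm

# Native scale of the wide-field example above (12 mm lens, 5.8 µm pixels):
native = image_scale(12.0, 5.8)

# With a DrizzleIntegration Scale of 2 the output pixels are half the
# original size, so halve the pixel size (or double the focal length):
drizzled = image_scale(12.0, 5.8 / 2)
```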

The Saturation Point

It is very important to avoid measuring saturated stars. The Saturation threshold parameter should always be below the saturation level of the three RGB channels. It is easy to check where the saturation point is by reading out the pixel values over the saturated stars. It can be helpful to disable the screen stretch with the ScreenTransferFunction tool. For example, consider the case shown in the following screenshot:

The lowest saturation point in this case is around 0.37. The right Saturation threshold value should be a bit lower; a value of 0.3 would work for this image. In this way, we can avoid measuring the color of any saturated star in the image.
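A trivial helper illustrating the rule; the saturation levels are the example readouts described here, and the 0.8 margin factor is a hypothetical rule of thumb, not a PCC-defined value:

```python
def safe_saturation_threshold(channel_saturation_levels, margin=0.8):
    """Choose a Saturation threshold safely below the lowest saturation
    level among the RGB channels (normalized [0,1] pixel values).
    The 0.8 margin is a hypothetical rule of thumb, not a PCC value."""
    return margin * min(channel_saturation_levels)

# Pixel readouts over saturated star cores in the R, G and B channels:
levels = (0.37, 0.52, 0.48)
threshold = safe_saturation_threshold(levels)  # a bit below 0.37
```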

Background Neutralization

In previous versions of PixInsight, the color calibration and the background neutralization were always performed in two separate steps. PCC allows you to perform both processes in the same process execution. There are several considerations when configuring this section of PCC:

  • It is important to have a clear look at the background of the picture. In some cases the image can have a strong background color cast. In such cases, apply this workflow with the ScreenTransferFunction tool:
    • Disable the Link RGB Channels option.
    • Press the Auto Stretch button to stretch the image again. Now it will be easier to look at the sky background areas in the image.
    • Create a preview over a sky background area.
    • Copy the preview's coordinates to the PCC tool with the From Preview button, or by dragging the preview's view selector to that button.
    • Apply the PCC tool.
    • To get a correct screen rendition of the image after the color calibration process, enable the Link RGB Channels option in the STF tool, then press the Auto Stretch button.
  • Always check the sky background level of the image. This can be done by simply reading out the image with the mouse and checking the pixel values on the bottom bar. The values shown should be lower than the Upper limit parameter value. Take into account that the sky background can be much higher for cameras with a high gain, or under strong light pollution conditions. If the current sky background level is above the Upper limit value, no sky background correction will be applied.


References

[1] Henden et al. (2016), AAVSO Photometric All Sky Survey (APASS) DR9. APASS website.

[2] A. J. Pickles (1998), A Stellar Spectral Flux Library: 1150–25000 Å. Publications of the Astronomical Society of the Pacific, vol. 110, No. 749, pp. 863–878

[3] Polletta et al. (2007), Spectral Energy Distributions of Hard X-Ray Selected Active Galactic Nuclei in the XMM-Newton Medium Deep Survey. The Astrophysical Journal, vol. 663, pp. 81–102

[4] P.J. Rousseeuw and C. Croux (1993), Alternatives to the Median Absolute Deviation, Journal of the American Statistical Association, Vol. 88, pp. 1273–1283.

[5] William H. Press et al. (2007), Numerical Recipes, The Art of Scientific Computing, 3rd Ed., Cambridge University Press, § 15.7.3, pp. 822–824

[6] Starck, J.-L., Murtagh, F. and J. Fadili, A. (2010), Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press.

[7] Barth, Timothy J., Chan, Tony, Haimes, Robert (Eds.) (2002), Multiscale and Multiresolution Methods: Theory and Applications, Springer. invited paper: Jean-Luc Starck, Nonlinear Multiscale Transforms, pp. 239–279.