About our color calibration methodology

Juan Conejero

PixInsight Staff
Hi everybody,

Recently we have read some interesting discussions about color calibration of deep sky images. In some of these discussions we have seen how the tools we have implemented in PixInsight, particularly BackgroundNeutralization and ColorCalibration and their underlying methods and algorithms, have been analyzed on the basis of rather inaccurate descriptions and evaluations.

As this has been a recurring topic for a long time, I have decided to write a sticky post to clarify a few important facts about our deep sky color calibration methods and, more broadly, about our philosophy of color representation in deep sky astrophotography.

First, let's describe how our two main color calibration tools work. Fortunately, both tools are now covered by the new PixInsight Reference Documentation, so we can link to the corresponding documents instead of repeating the descriptions here:

Official documentation for the BackgroundNeutralization tool.

Official documentation for the ColorCalibration tool.

The documentation for ColorCalibration is still incomplete (it lacks some examples and figures), but it provides a description accurate enough to understand how the tool actually works.

As you can read in the above document, ColorCalibration can work in two basic modes: range selection and structure detection. In range selection mode, ColorCalibration lets you select a whole nearby spiral galaxy as the white reference (not the nucleus of a galaxy, as has been said). This method was devised by Vicent Peris, who has implemented it and applied it to a number of images acquired with large professional telescopes. Here are some images to which this calibration method has been applied strictly:


We are working, time permitting, on a formalization of this color calibration method, and Vicent is also working on a survey of reference calibration galaxies covering the entire sky. However, as you know, both my workload and Vicent's professional commitments are heavy, so don't expect this to be published very soon.

What happens when there are no galaxies in the image? The above linked NGC 6914 image is a good example. In these cases we simply use the calibration factors computed for a reference galaxy acquired with the same instrument, and apply them to the image in question, taking into account the difference in atmospheric extinction between the two images.
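To make that last step concrete, here is a minimal Python sketch of the idea (an illustration only, not the tool's internal code), assuming the standard extinction law in which each channel is attenuated by a factor of 10^(-0.4*k*X), where k is the per-filter extinction coefficient in magnitudes per unit airmass and X is the airmass. All coefficients, airmasses and factors below are made-up placeholders:

    # Transfer per-channel calibration factors between two images taken with the
    # same instrument, compensating for the difference in atmospheric extinction.
    # (Sketch only; every number here is a hypothetical placeholder.)
    k = {"R": 0.08, "G": 0.12, "B": 0.22}      # extinction coefficients, mag per airmass
    X_reference, X_target = 1.15, 1.60         # airmasses of the two acquisitions

    factors_reference = {"R": 1.00, "G": 0.95, "B": 1.10}   # computed on the reference galaxy image

    factors_target = {}
    for c in ("R", "G", "B"):
        # The target image is dimmed by an extra 10^(-0.4 * k_c * (X_target - X_reference))
        # relative to the reference image; divide by that ratio to compensate.
        attenuation_ratio = 10.0 ** (-0.4 * k[c] * (X_target - X_reference))
        factors_target[c] = factors_reference[c] / attenuation_ratio

The blue channel, having the largest extinction coefficient, gets the largest relative boost when the target image was taken at higher airmass, which is the correction described above.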

The other working mode of ColorCalibration, structure detection, can be used to select a large number of stars to compute a white reference. This is a local calibration method (valid only for the image being calibrated) that has several potential problems. For example, one must be sure that a sufficiently complete set of spectral types is being sampled, or the calibration factors can be strongly biased. When applied carefully, though, this method can provide quite good results. Here is an example:

Before ColorCalibration (working in structure detection mode):

After ColorCalibration:

Now that we have seen how our tools actually work and how they can be used in practice, let's talk a little about the underlying color philosophy. Conventional wisdom on color calibration for deep sky images relies on taking a particular spectral type as the white reference. In particular, the G2V spectral type, the solar type, is normally used as the white reference. Our color calibration methods don't follow this idea.

Why not rely on a particular spectral type for color calibration? Because we can't find any particularly good reason to use the apparent color of a star as the white reference for deep sky images. A G2V star is an ideal white reference for a daylight scene, if we take the human vision system as the underlying definition of "true color". For example, a G2V star is a plausible white reference for a planetary image, because every object in the solar system is either reflecting or re-radiating the Sun's light. So if your image sensor responds linearly to incident light and you apply a linear transformation to the image such that a G2V star is rendered as pure white, the planetary scene will be rendered in true daylight color, that is, just as a human observer would see it directly.
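For readers who want to see what that linear transformation amounts to, here is a minimal numpy sketch (an illustration of the idea only, not the exact procedure of any particular tool); the image array and star mask are hypothetical placeholders:

    import numpy as np

    # G2V-style white balance on a linear RGB image: measure the star's mean
    # signal per channel and scale the channels so the star renders as neutral.
    rng = np.random.default_rng(0)
    image = rng.uniform(0.0, 0.1, size=(100, 100, 3))   # placeholder linear RGB data
    g2v_mask = np.zeros((100, 100), dtype=bool)
    g2v_mask[45:55, 45:55] = True                        # placeholder G2V star footprint

    star_rgb = image[g2v_mask].mean(axis=0)              # mean R, G, B signal of the star
    weights = star_rgb[1] / star_rgb                     # normalize so G keeps a weight of 1
    balanced = image * weights                           # the G2V star now comes out neutral gray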

In a deep sky image, however, no object is, in general, reflecting light from a G2V star. Deep sky images are definitely not daylight scenes, and most of the light that we capture and represent in them is far beyond the capabilities of the human vision system. We think that using a G2V star as a white reference for a deep sky image is too anthropocentric a view. We prefer to follow a completely different path, starting from the idea that, on a documentary basis, no color can be taken as "real" in the deep sky. Instead of pursuing the illusion of real color, we apply a neutral criterion with a very different goal: to represent a deep sky scene without color bias, where no particular spectral type or color is favored over the others. In our opinion, this is the best way to provide a plausible color calibration criterion for deep sky astrophotography, both conceptually and physically.

In this way we try to design and implement what we call spectrum-agnostic, or documentary, calibration methods. These methods aim to maximize the information represented through color in an unbiased way. In Vicent's calibration method, we take the integrated light of a nearby spiral galaxy as the white reference. A nearby spiral galaxy with negligible redshift, seen from Earth under good conditions, is a plausible documentary white reference because it provides an excellent sample of all stellar populations and spectral types. Each pixel acquired from a galaxy is actually a mixture of light from a large number of different deep sky objects.

In its structure detection mode, the ColorCalibration tool can be used to sample a large number of stars of varying spectral types. This can also be a good documentary white reference—if properly applied—because by averaging a sufficiently representative sample of stars we are not favoring any particular color.
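Both variants reduce to the same computation: average the linear colors of the sampled sources (all the pixels of a nearby spiral galaxy, or the set of detected stars) and derive channel weights that render that average as neutral. Here is a short numpy sketch of the concept (not ColorCalibration's actual implementation; the sample values are hypothetical):

    import numpy as np

    def white_reference_weights(sample_colors):
        # sample_colors: N x 3 array of linear R, G, B measurements, one row per
        # sampled star (or per galaxy pixel). Returns per-channel weights with G = 1.
        reference = np.asarray(sample_colors, dtype=float).mean(axis=0)
        return reference[1] / reference

    # Hypothetical sample mixing spectral types; no single type dominates the average.
    stars = np.array([[0.9, 0.7, 0.5],   # reddish star
                      [0.6, 0.7, 0.9],   # bluish star
                      [0.8, 0.8, 0.8]])  # near-neutral star
    print(white_reference_weights(stars))   # -> approximately [0.9565, 1.0, 1.0]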

In some cases we have also implemented special variations of the galaxy calibration method. Vicent Peris processed one of the first deep fields pertaining to the ALHAMBRA survey, acquired with the 3.5 m telescope of Calar Alto Observatory:


This is the high-resolution image combining 14 of the 23 filters of ALHAMBRA, from 396 nm to 799 nm:


This image shows an extremely deep field. In fact, almost all of the objects that you can see in it are galaxies, even though they may look like stars. This image poses a big problem in terms of color rendition due to two unusual conditions: (1) it integrates light from 14 different filters, and (2) the main goal of this image is to maximize the representation of distances. Basically, it is very important to clearly differentiate the most distant galaxies, represented as extremely red objects, from the relatively close ones, which are represented in something close to what we would recognize as "true color" in most "normal" galaxy images.

So instead of using a nearby spiral galaxy and applying the resulting calibration factors (impossible in this case anyway, since no such image acquired with the same instrumentation exists), the white reference was taken as the average of some of the largest (and hence closest) galaxies in the image. Even if these galaxies have some significant redshift, the resulting white reference leads to a local calibration procedure that maximizes the representation of object distances through color, which is the main documentary goal of this image.
 
You couldn't have been clearer in this post, Juan. A wonderful explanation, and I totally agree with this method!
Thank you!

 
I don't see anything wrong using what you define as an "anthropocentric view".
And I don't see anything wrong using the methods you propose either.
They're just methods and each will yield different results.

The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image.
G2V advocates tend to believe the G2V method works best.
You believe your method is the best too.

To me, these are just different interpretations and presentations. Each one has a place. And I don't think either has better or worse documentary value, using the word documentary the way I think you understand it.

If Pepito took an image and used the G2V method, great, we know how the colors look using that method.
If Juanito used the "structure detection" method, great, we know how they look using that method.
Neither is showing me the REAL color anyway, right? So I might as well enjoy a view from different perspectives.

I'm flexible and known to be wrong most of the time though ;)

 
They're just methods and each will yield different results.

Image processing is all about methods and interpretation of the data. My comment on anthropocentrism has probably been somewhat excessive. I just wanted to stress the fact that G2V applied to deep sky astrophotography is, in our opinion, an extrapolation of premises and concepts that are strictly valid only for a human observer under sunlight conditions, and hence it generalizes poorly when applied to representations of objects and environments that are very different from sunlight scenes.

The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image.

Accuracy and suitability are two different, not necessarily related concepts. The problem is not with accuracy, which is always a good thing in my opinion, although not necessarily the most useful property of a solution. For color calibration in deep sky astrophotography, accuracy is much less important than coherence with the nature and physical properties of the objects represented. Equally important is having a solution able to fulfill the aesthetic and information-maximization goals that characterize astrophotographic work. In this sense, we think that our methods are more comprehensive and efficient.

To me, these are just different interpretations and presentations. Each one has a place. And I don't think either has better or worse documentary value, using the word documentary the way I think you understand it.

Well, of course everybody is free to view these topics in their own way and to apply whatever methods and tools they prefer. Following your argument to its logical conclusion, everything has a place in the world. I agree with that completely, especially if we are talking about opinions, personal preferences, tastes, beliefs, etc., and as long as one respects the same in everybody else and doesn't try to force anything on anybody.

We are talking here about paradigms of color calibration in astrophotography. Note that what is really important is not the methods by themselves, but the paradigms behind them and how they reflect our understanding of astrophotography, of its purpose, and of the objects that we are representing. So the difference is more conceptual than technical. Both paradigms, the G2V one and our spectral-type-agnostic one, are conceptually very different, so different that I don't think one can stay equidistant from both. But again I respect your view, and if you want/can stay equidistant then that's your choice, although I disagree.

Regarding documentary value, we think that our methods are better because they pursue maximization of information representation through color. By not favoring any particular spectral type as a white reference, all objects are represented according to their physical properties in an unbiased way. For example, with our color calibration methods those objects with larger radial velocities are redder and young stellar populations are bluer, and these differentiations are uniformly maximized across the representable spectrum.

If Pepito took an image and used the G2V method, great, we know how the colors look using that method.
If Juanito used the "structure detection" method, great, we know how they look using that method.

As noted, what is important is not "how they look" but the concepts and philosophical considerations behind the different methods. Once more, we are conferring more importance to the why and how than to the final results.

Neither is showing me the REAL color anyway, right?

We think that there is no such thing in deep sky astrophotography. We don't favor any spectral type as a white reference precisely for that reason. Astrophotography is always an interpretation of reality. There are different interpretations, but that doesn't mean that all interpretations are equally valid or have the same value for each of us; and that's great, since the world would be boring otherwise! ;)
 
I strongly believe that we (humans) are the only form of life in the universe with enough spare time to care about taking images of the night sky. So an anthropocentric representation of the universe makes sense! I don't expect my images to be viewed by aliens or dogs...

I use the color calibration tool in PI because it is easy to use and it produces pleasant, anthropocentric colors... not because it produces accurate and unbiased data.

By the way, I jumped on the PI wagon because of the color calibration tool. Before that I tried many methods with bad results, but the ColorCalibration tool is magic!

Cheers,

Jose
 
Hi all,

I want to add something to this discussion, as I am the father of the "spiral galaxy" color calibration method. I just did a direct comparison between my method and the G2V one. I did the comparison with data from my friends Ivette Rodriguez and Oriol LehmKuhl. The selected target is a widefield view of the Leo Triplet. The original image looks very red mainly due to the background sky bias:

leotriplet1.jpg


Below is the image after running eXcalibrator with Sloan data. eXcalibrator used 12 stars and I got very low dispersion values (0.02 in G and 0.03 in B). The resulting RGB weights are 1, 0.746 and 0.682. I also removed the background sky bias:

leotriplet2.jpg


And below is the calibrated image after removing the background sky bias and taking M66 as the white reference. The RGB weights were 1.055, 1.1 and 1.

leotriplet3.jpg



My method takes a spiral galaxy as the white reference because I consider it a representative object of the Universe itself. On the one hand, galaxies are the bridge between small- and large-scale structures of the Universe. On the other hand, spirals are the type of galaxy with the most balanced representation of all the stellar populations. In fact, that's why it works: you will always see hot stars as blue dots, and cool stars as redder ones.

Keep in mind that this calibration has been done with THIS particular galaxy. I'm working on defining a "standard" spiral galaxy, so more solid numbers will follow in the coming months.



Best regards,
Vicent.
 
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

Anyway, here's an image of this same target that allegedly used eXcalibrator for the color balance:
http://afesan.es/user/cimage/TRIO-FINAL-PUBLICAR--4.jpg

Vicent, it's from one of our compatriots, Antonio F. Sanchez. Pretty anthropocentric as well.

Anyway, there are many people using that tool and creating some very "anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with colors as terribly balanced as the one you just posted (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.
 
RBA said:
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

"anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with colors as terribly balanced as the one you just posted (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.

I did not think Vicent's eXcalibrator image was balanced that much differently. It still has a redder tone.
Certainly, Antonio's image looks better, but it is fully processed.

Max

 
mmirot said:
RBA said:
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

"anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with colors as terribly balanced as the one you just posted (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.

I did not think Vicent's eXcalibrator image was balanced that much differently. It still has a redder tone.
Certainly, Antonio's image looks better, but it is fully processed.

Max

Hi,

In fact, if you load the image in PixInsight and apply a saturation curve, you will see that the galaxies have reds similar to those in my G2V-calibrated image. This is clearly seen in the stars and in the brightest parts of the galaxies. Unfortunately, the image Rogelio posted is an LRGB combination with a large imbalance between L and RGB exposure times. Therefore, the outer areas of the galaxies are almost colorless, like the two outer arms of M66.

One must learn to distinguish between color balance and color saturation. This is an important part of the eye training required in this discipline.


Best regards,
Vicent.
 
Hi,

I made a mistake in my earlier message: I did not describe the processing applied to each image. It was extremely simple:

1.- Background neutralization.
2.- RGB weighting. For the G2V color calibrated image, I applied the RGB weights calculated by eXcalibrator using PixelMath. For the spiral galaxy color calibration, I set M66 as the white reference in the ColorCalibration tool. (A rough sketch of steps 1 and 2 follows after this list.)
3.- Histogram adjustment applied to the combined RGB channel, so that the three color channels receive the same histogram adjustment.
4.- HDRWT, with the "Preserve hue" option enabled.
5.- Color saturation curve to enhance color. This step is imperative to test any color calibration: all images look good at first, but colors quickly diverge as saturation is enhanced.
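As promised above, here is a rough numpy counterpart of steps 1 and 2 (a sketch under simple assumptions, not the exact BackgroundNeutralization or PixelMath operations): estimate the sky level per channel from a background preview, subtract it, then multiply each channel by its calibration weight. The function and its arguments are illustrative; only the weights (1, 0.746, 0.682) come from the post above.

    import numpy as np

    def neutralize_and_weight(image, background_mask, weights):
        # image: linear H x W x 3 array; background_mask: boolean mask over a
        # featureless sky area; weights: per-channel factors, e.g. (1.0, 0.746, 0.682).
        sky = np.median(image[background_mask], axis=0)        # per-channel sky level
        neutralized = np.clip(image - sky, 0.0, None)          # remove the background bias
        return neutralized * np.asarray(weights, dtype=float)  # apply the RGB weights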

This is the G2V calibrated image, prior to HDRWT and color saturation curve:

leotriplet4.jpg


And this is the spiral gal calibrated one:

leotriplet5.jpg


Of course, we cannot evaluate any color calibration with these images. We need to enhance color saturation to check what we have.


Regards,
Vicent.
 
RBA said:
On the other hand, in the CC image, I see green stars. And that too bothers me.

You will always have green stars unless you have a huge color cast in your image. G2V-calibrated images also produce green stars (depending strongly on the filter passbands). The problem is that the eye's spectral sensitivity curves and the usual astronomical RGB filters have very different bandpasses. This is the spectral sensitivity of our eye:

Human_spectral_sensitivity_small.jpg

http://www.normankoren.com

Note that the red peak is much closer to the green peak than it usually is in RGB filter sets, and that the two curves overlap considerably. Being continuum emitters, stars cannot show a greenish tint through these sensitivity curves, so we cannot see green stars. But a filter set like the one below is very prone to showing green stars.

AstrodonLRGB_EYellow.jpg

http://www.astrodon.com

After these considerations, we can state that calibrating to a G2V star does not mean we are displaying the same star colors our eyes would see <-- and this is the main argument of the G2V color calibration concept.


Regards,
Vicent.
 
I would like to add one more aspect to the discussion: the human perception of color is highly dependent on the environment. If this were not the case, we would see everything with a red cast under ordinary artificial light, because most lamps emit light that is much redder than natural sunlight (=G2V). Luckily, our visual system has something like the automatic white balance function of cameras...

I was once in a room lit by lamps with a strong red tone. After a while, everything in this room seemed to have natural color, while everything outside of the room had a strong green cast. This was a very surprising experience for me. If you would like to experience something similar, look at a strong red light for a minute or so and then look at a supposedly white sheet of paper.

I think it is quite natural for most images to use a color balance that gives a white impression to objects that most people would expect to be white, and this usually means the "average" star, some star cluster, or a galaxy. G2V also has its merits, but "true" perceived color is certainly not one of them.

Georg
 
georg.viehoever said:
I think it is quite natural for most images to use a color balance that gives a white impression to objects that most people would expect to be white, and this usually means the "average" star, some star cluster, or a galaxy. G2V also has its merits, but "true" perceived color is certainly not one of them.

That's the point. I fully accept that, by calibrating the colors with a spiral galaxy, you don't get real colors. In fact, I'm no longer interested in the (boring, to me) concept of real color. But, through the years, a key selling point of G2V calibration has become that it represents star colors as we would see them. This is not true, IMHO.


V.
 
vicent_peris said:
RBA said:
vicent_peris said:
You will always have green stars unless you have a huge color cast in your image.
Why do I never see them in any image?
Because there are techniques to limit green values. SCNR is one of them.

But if our image is well color balanced, why "change" the stars that look green to something else?

I'm sorry, this is really weird to me, so please bear with me.

 
vicent_peris said:
But, through the years, a key selling point of G2V calibration has become that it represents star colors as we would see them. This is not true, IMHO.

Isn't that what I said a few posts back? Quote: "The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image."
I think, as far as methods and paradigms go, I'm done here.

But I'm still very interested about the green stars...

BTW Georg, I did that thing but wearing green glasses (I think they were green) during a Halloween party, for 5 minutes... After taking them off, everything was RED.

 
The point regarding green stars is that black body radiation, when it peaks at green wavelengths, has a fairly flat spectrum across the visible range (well, not that flat, but not unbalanced enough to reveal a strong green cast). At least visually, you won't be able to see a green star through a scope; you'll see it as white.
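A quick numerical check of that claim using Planck's law (a worked example, not data from any particular star; the temperature and sample wavelengths are just illustrative):

    import math

    # Spectral radiance of a black body, up to a constant scale factor.
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23     # Planck, speed of light, Boltzmann (SI)

    def planck(wavelength_m, T):
        x = h * c / (wavelength_m * kB * T)
        return 1.0 / (wavelength_m ** 5 * (math.exp(x) - 1.0))

    T = 5500.0                                    # Wien's law puts the peak near 527 nm (green)
    blue, green, red = (planck(w * 1e-9, T) for w in (450, 530, 650))
    print(blue / green, red / green)              # roughly 0.94 and 0.91: nowhere near a strong green cast

So even with the emission maximum sitting in the green, the blue and red samples are only about 5-10% below the green one, which the eye perceives as white rather than green.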
 