PixInsight Forum (historical)

PixInsight => Tutorials and Processing Examples => Topic started by: Juan Conejero on 2010 November 14 15:56:07

Title: About our color calibration methodology
Post by: Juan Conejero on 2010 November 14 15:56:07
Hi everybody,

Recently we have read some interesting discussions about color calibration on deep sky images. In some of these discussions we have seen how the tools that we have implemented in PixInsight, particularly the BackgroundNeutralization and ColorCalibration tools and their underlying methods and algorithms, have been analyzed on the basis of quite inaccurate descriptions and evaluations.

As this has been a recurring topic for a long time, I have decided to write a sticky post to clarify a few important facts about our deep sky color calibration methods and, in a broader way, about our philosophy of color representation in deep sky astrophotography.

First, let's describe how our two main color calibration tools work. Fortunately, both tools are now covered by the new PixInsight Reference Documentation, so I can link to the corresponding documents and spare you lengthy descriptions here:

Official documentation for the BackgroundNeutralization tool (http://pixinsight.com/doc/tools/BackgroundNeutralization/BackgroundNeutralization.html).

Official documentation for the ColorCalibration tool (http://pixinsight.com/doc/tools/ColorCalibration/ColorCalibration.html).

The documentation for ColorCalibration is still incomplete (it lacks some examples and figures), but it provides a description accurate enough to understand how the tool actually works.

As you can read in the above document, ColorCalibration works basically in two different modes: range selection and structure detection. In range selection mode, ColorCalibration lets you select a whole nearby spiral galaxy as a white reference—not the nucleus of a galaxy, as has been claimed. This method was devised by Vicent Peris, who has implemented it and applied it to a number of images acquired with large professional telescopes. Here are some images to which this calibration method has been applied strictly (a minimal sketch of the underlying computation follows the links):

http://astrofoto.es/M51.jpg
http://pixinsight.com/examples/NGC7331-CAHA/en.html
http://pixinsight.com/examples/M57-CAHA/en.html
http://pixinsight.com/forum/index.php?topic=2457.0
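
For those who want the gist of the computation, here is a minimal sketch of the idea, written as plain Python/NumPy rather than PixInsight source code, and simplified with respect to what the actual tool does:

# Illustrative sketch only: derive per-channel white balance factors from a
# region (e.g., a whole nearby spiral galaxy) selected as the white reference.
import numpy as np

def white_reference_factors(image, mask, background=None):
    # image: linear float array of shape (H, W, 3); mask: boolean array (H, W)
    # selecting the white reference region; background: optional per-channel
    # background level to subtract before measuring the reference.
    ref = image[mask]
    if background is not None:
        ref = ref - np.asarray(background)
    means = ref.mean(axis=0)        # integrated (average) R, G, B of the reference
    return means.max() / means      # factors that equalize the three channels

def apply_factors(image, factors):
    # A linear color calibration is just a per-channel multiplication.
    return image * factors

# calibrated = apply_factors(img, white_reference_factors(img, galaxy_mask, bg))

The structure detection mode described below is conceptually analogous: the reference region would be the union of many detected stars instead of a single extended object.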

We are working—time permitting—on a formalization of this color calibration method, and Vicent is also working on a survey of reference calibration galaxies covering the entire sky. However, as you know, both my workload and Vicent's professional commitments are heavy, so don't expect this to be published very soon.

What happens when there are no galaxies on the image? The above linked NGC 6914 image is a good example. In these cases we simply use the calibration factors computed for a reference galaxy acquired with the same instrument, and apply them to the image in question, taking into account the difference in atmospheric extinction between both images.
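
The exact correction procedure is beyond the scope of this post, but as a rough sketch, assuming the standard extinction model (k magnitudes per unit airmass with per-filter coefficients), the transfer could look like this (plain Python, not the actual PixInsight code):

# Hypothetical sketch, not the actual PixInsight procedure: adjust white
# balance factors derived from a reference image for the airmass of a
# different image taken with the same instrument.
import numpy as np

def transfer_factors(ref_factors, k, airmass_ref, airmass_target):
    # ref_factors: per-channel factors (R, G, B) derived from the reference image.
    # k: per-channel extinction coefficients in magnitudes per unit airmass.
    k = np.asarray(k, dtype=float)
    delta = airmass_target - airmass_ref
    # The target frame is attenuated by an extra 10^(-0.4*k*delta) per channel,
    # so its calibration factors must be boosted by the inverse amount.
    return np.asarray(ref_factors, dtype=float) * 10.0 ** (0.4 * k * delta)

# Example with made-up numbers (typical broadband extinction coefficients):
# transfer_factors((1.00, 0.95, 1.10), k=(0.09, 0.14, 0.25), airmass_ref=1.2, airmass_target=1.8)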

The other working mode of ColorCalibration—structure detection—can be used to select a large number of stars to compute a white reference. This is a local calibration method (valid only for the image being calibrated) that has many potential problems. For example, one must be sure that a sufficiently complete set of spectral types is being sampled, or the calibration factors can be strongly biased. When implemented carefully, though, this method can provide quite good results. Here is an example:

Before ColorCalibration (working in structure detection mode):
http://forum-images.pixinsight.com/1.5-preview/ColorCalibration-1.jpg

After ColorCalibration:
http://forum-images.pixinsight.com/1.5-preview/ColorCalibration-2.jpg

Once we have seen how our tools actually work and how they can be used in practice, let's talk a little about the underlying color philosophy. Common wisdom on color calibration methods for deep sky images relies on considering a particular spectral type as the white reference. In particular, the G2V spectral type—the solar type—is normally used as the white reference. Our color calibration methods don't follow this idea.

Why not rely on any particular spectral type for color calibration? Because we can't find any particularly good reason to use the apparent color of a star as a white reference for deep sky images. A G2V star is an ideal white reference for a daylight scene, if we take the human vision system as the underlying definition of "true color". For example, a G2V star is a plausible white reference for a planetary image, because all objects in the solar system either reflect or re-radiate the Sun's light. So if your image sensor has a linear response to incident light and you apply a linear transformation to the image such that a G2V star is rendered as pure white, then the planetary scene will be rendered in true daylight color, just as it would be seen directly by a human.

In a deep sky image however, no object, in general, is reflecting light from a G2V star. Deep sky images are definitely not daylight scenes, and most of the light that we capture and represent in them is far beyond the capabilities of the human vision system. We think that using a G2V star as a white reference for a deep sky image is an overly anthropocentric view. We prefer to follow a completely different path, starting from the idea that no color can be taken as "real" in the deep sky, on a documentary basis. Instead of pursuing the illusion of real color, we try to apply a neutral criterion that pursues a very different goal: to represent a deep sky scene in an unbiased way regarding color, where no particular spectral type or color is being favored over others. In our opinion, this is the best way to provide a plausible color calibration criterion for deep sky astrophotography, both conceptually and physically.

In this way we try to design and implement what we call spectrum-agnostic or documentary calibration methods. These methods pursue maximizing information representation through color in an unbiased way. In Vicent's calibration method, we take the integrated light of a nearby spiral galaxy as white reference. A nearby spiral galaxy with negligible redshift and good viewing conditions as seen from Earth is a plausible documentary white reference because it provides an excellent sample of all stellar populations and spectral types. Each pixel acquired from a galaxy is actually the result of the mixture of light from a large number of different deep sky objects.

In its structure detection mode, the ColorCalibration tool can be used to sample a large number of stars of varying spectral types. This can also be a good documentary white reference—if properly applied—because by averaging a sufficiently representative sample of stars we are not favoring any particular color.

In some cases we have also implemented special variations of the galaxy calibration method. Vicent Peris processed one of the first deep fields pertaining to the ALHAMBRA survey (http://www.iaa.es/alhambra), acquired with the 3.5 m telescope of Calar Alto Observatory:

http://www.caha.es/alhambra-the-history-of-the-universe-at-sight.html

This is the high-resolution image combining 14 of the 23 filters of ALHAMBRA, from 396 nm to 799 nm:

http://www.caha.es/images/stories/PR/Alhambra/alhambra_full_english.jpg

This image shows an extremely deep field. In fact, almost all of the objects that you can see in this image are galaxies, even though they may look like stars. This image poses a big problem in terms of color rendition due to two unique conditions: (1) it integrates light from 14 different filters, and (2) the main goal with this image is maximizing the representation of distances. Basically, it is very important to clearly differentiate between the most distant galaxies, represented as extremely red objects, and the relatively close ones, represented almost in what we would recognize as "true color" in most "normal" galaxy images.

So instead of using a nearby spiral galaxy and applying the resulting calibration factors—impossible in this case anyway, because no such image exists acquired with the same instrumentation—the white reference was taken as the average of some of the largest (and hence closest) galaxies in the image. Even though these galaxies may have some significant redshift, the resulting white reference leads to a local calibration procedure that maximizes the representation of object distances through color, which is the main documentary goal of this image.

Title: Re: About our color calibration methodology
Post by: Emanuele on 2010 November 15 01:15:15
You couldn't have been clearer with this post, Juan. Wonderful explanation, and I totally agree with this method!
Thank you !

Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 15 18:04:01
I don't see anything wrong with using what you define as an "anthropocentric view".
And I don't see anything wrong with using the methods you propose either.
They're just methods and each will yield different results.

The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image.
G2V advocates tend to believe the G2V method works best.
You believe your method is the best too.

To me, these are just different interpretations and presentations. Each one has a place. And I don't think either has better or worse documentary value, using the word documentary the way I think you understand it.

If Pepito took an image and used the G2V method, great, we know how the colors look using that method.
If Juanito used the "structure detection" method, great, we know how they look using that method.
Neither is showing me the REAL color anyway, right? So I might as well enjoy a view from different perspectives.

I'm flexible and known to be wrong most of the time though ;)

Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2010 November 16 06:55:40
Quote
They're just methods and each will yield different results.

Image processing is all about methods and interpretation of the data. My comment on anthropocentrism has probably been somewhat excessive. I just wanted to stress the fact that G2V applied to deep sky astrophotography is —in our opinion— an extrapolation of premises and concepts that are strictly valid for a human observer under sunlight conditions, and hence generalizes poorly when applied to representations of objects and environments that are very different from sunlit scenes.

Quote
The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image.

Accuracy and suitability are two different, not necessarily related concepts. The problem is not with accuracy, which is always a good thing in my opinion, although not necessarily the most useful property of a solution. For color calibration in deep sky astrophotography accuracy is much less important than coherence with the nature and physical properties of the objects represented. And equally important is having a solution able to fulfill the aesthetic and information maximization goals that characterize the astrophotographic work. In this sense, we think that our methods are more comprehensive and efficient.

Quote
To me, these are just different interpretations and presentations. Each one has a place. And I don't think neither has better or worst documentary value, using the word documentary the way I think you understand it.

Well, of course everybody is free to view these topics in his/her own way and apply whatever methods and tools. Following your argument to its logical conclusion, everything has one place in the world. I agree with that idea completely, especially if we are talking about opinions, personal preferences, tastes, beliefs, etc., as long as one respects the same freedom in everybody else and doesn't try to force anything on anybody.

We are talking here about paradigms of color calibration in astrophotography. Note that what is really important is not the methods by themselves, but the paradigms behind them and how they reflect our understanding of astrophotography, of its purpose, and of the objects that we are representing. So the difference is more conceptual than technical. Both paradigms —the G2V one and our spectral type agnostic paradigm— are conceptually very different, so different that I don't think one can stay equidistant from both. But again I respect your view and if you want/can stay equidistant then that's your choice, although I disagree.

Regarding documentary value, we think that our methods are better because they pursue maximization of information representation through color. By not favoring any particular spectral type as a white reference, all objects are represented according to their physical properties in an unbiased way. For example, with our color calibration methods those objects with larger radial velocities are redder and young stellar populations are bluer, and these differentiations are uniformly maximized across the representable spectrum.

Quote
If Pepito took an image and used the G2V method, great, we know how the colors look using that method.
If Juanito used the "structure detection" method, great, we know how they look using that method.

As noted, what is important is not "how they look" but the concepts and philosophical considerations behind the different methods. Once more, we are conferring more importance to the why and how than to the final results.

Quote
Neither is showing me the REAL color anyway, right?

We think that there is no such thing in deep sky astrophotography. We don't favor any spectral type as a white reference precisely for that reason. Astrophotography is always an interpretation of reality. There are different interpretations but that doesn't mean that all interpretations are equally valid or have the same value for each of us —that's great, since the world would be boring otherwise! ;)
Title: Re: About our color calibration methodology
Post by: jmtanous on 2010 November 16 09:29:51
I strongly believe that we (humans) are the only form of life in the universe with enough spare time to care about taking images of the night sky. So an anthropocentric representation of the universe makes sense! I don't expect that my images are viewed by aliens or dogs...

I use the color calibration tool in PI because it is easy to use and it produces pleasant, anthropocentric colors... not because it produces accurate and unbiased data.

By the way, I jumped on the PI bandwagon for the color calibration tool. Before that I tried many methods with bad results, but the ColorCalibration tool is magic!

Cheers,

Jose
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 16 12:11:40
Hi all,

I want to add something to this discussion, as I am the father of the "spiral galaxy" color calibration method. I just did a direct comparison between my method and the G2V one. I did the comparison with data from my friends Ivette Rodriguez and Oriol LehmKuhl. The selected target is a widefield view of the Leo Triplet. The original image looks very red mainly due to the background sky bias:

(http://www.astrofoto.es/astrofoto/foros/leotriplet1.jpg)

Below you have the image after running eXcalibrator with Sloan data. eXcalibrator used 12 stars and I got very low dispersion values (0.02 in G and 0.03 in B). The resulting RGB weights are 1, 0.746 and 0.682. I also removed the background sky bias:

(http://www.astrofoto.es/astrofoto/foros/leotriplet2.jpg)

And below you have the image calibrated by removing the background sky bias and taking M66 as the white reference. The RGB weights were 1.055, 1.1 and 1.

(http://www.astrofoto.es/astrofoto/foros/leotriplet3.jpg)


My method takes a spiral galaxy as the white reference because I consider it a representative object of the Universe itself. On one hand, galaxies are the bridge between small and large scale structures in the Universe. On the other hand, spirals are the type of galaxy with the most balanced representation of all stellar populations. In fact, that's why it works: you will always see hot stars as blue dots, and cool stars as redder ones.

Bear in mind that this calibration has been done with THIS particular galaxy. I'm working on this topic to define a "standard" spiral galaxy, so more solid numbers will be given in the upcoming months.



Best regards,
Vicent.
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 16 13:16:09
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

Anyway, here's an image of this same target that allegedly used eXcalibrator for the color balance:
http://afesan.es/user/cimage/TRIO-FINAL-PUBLICAR--4.jpg

Vicent, it's from one of our compatriots, Antonio F.Sanchez. Pretty anthropocentric as well.

Anyway, there are many people using that tool and creating some very "anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with the colors so terribly balanced as you just did (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.
Title: Re: About our color calibration methodology
Post by: mmirot on 2010 November 16 13:36:46
Interesting. The G2V version is redder.
I did not think they would be so different.

Max
Title: Re: About our color calibration methodology
Post by: mmirot on 2010 November 16 13:42:09
Quote
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

"anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with the colors so terribly balanced as you just did (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.


I did not think Vicent's eXcalibrator image was balanced that much differently. It still has a redder tone.
Certainly, Antonio's image looks better but is fully processed.

Max

Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 16 14:16:19
Quote
Your last image definitely gives a much more anthropocentric view of this famous trio, Vicent  >:D >:D >:D

"anthropocentric" images. And I'm not defending the tool, just stating a fact.
Posting one with the colors so terribly balanced as you just did (from this very anthropocentric human perspective) might diminish your credibility rather than the "accuracy" (or lack thereof) of eXcalibrator.
Be careful.

I did not think Vicent's eXcalibrator image was balanced that much differently. It still has a redder tone.
Certainly, Antonio's image looks better but is fully processed.

Max



Hi,

In fact, if you load the image in PixInsight and apply a saturation curve, you will see that the galaxies show reds similar to my G2V-calibrated image. This is clearly seen in the stars and in the brightest parts of the galaxies. Unfortunately, the image Rogelio posted is an LRGB combination with a large imbalance between L and RGB exposure times. Therefore, the outer areas of the galaxies are almost colorless, like the two outer arms of M66.

One must learn to distinguish between color balance and color saturation. This is an important part of the eye training required in this discipline.


Best regards,
Vicent.
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 02:15:08
Hi,

I made a mistake when posting the message, because I did not describe the processing applied to each image. It was extremely simple:

1.- Background neutralization.
2.- RGB weighting. In the case of the G2V color calibrated image, I applied the RGB weights calculated by eXcalibrator using PixelMath. In the case of the spiral galaxy color calibration, I set M66 as the white reference in the ColorCalibration tool. (A sketch of steps 1-2 follows this list.)
3.- Histogram adjustment applied to the combined RGB channel, so the three color channels receive the same adjustment.
4.- HDRWT, with the option "Preserve hue" enabled.
5.- Color saturation curve to enhance color. This is imperative for testing any color calibration: all images look good at first glance, but color problems quickly show up as saturation is enhanced.
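
For reference, here is the arithmetic behind steps 1 and 2 as a plain Python sketch (the actual work was done inside PixInsight, and the background neutralization shown here is a simplified stand-in for the real tool), using the weights quoted above:

# Sketch of steps 1-2 with the published weights; linear RGB data assumed.
import numpy as np

def neutralize_background(image, bg_mask):
    # Simplified background neutralization: shift each channel so the median
    # background of the three channels is equalized (step 1).
    bg = np.median(image[bg_mask], axis=0)
    return image - (bg - bg.mean())

EXCALIBRATOR_WEIGHTS = np.array([1.0, 0.746, 0.682])   # G2V calibration (eXcalibrator, Sloan)
GALAXY_WEIGHTS       = np.array([1.055, 1.1, 1.0])     # ColorCalibration with M66 as white reference

def weight_channels(image, weights):
    # Step 2: per-channel multiplication by the calibration weights.
    return image * weights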

This is the G2V calibrated image, prior to HDRWT and color saturation curve:

(http://www.astrofoto.es/astrofoto/foros/leotriplet4.jpg)

And this is the spiral gal calibrated one:

(http://www.astrofoto.es/astrofoto/foros/leotriplet5.jpg)

Of course, we cannot evaluate any color calibration with these images. We need to enhance color saturation to check what we have.


Regards,
Vicent.
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 02:31:14
Quote
On the other hand, in the CC image, I see green stars. And that too bothers me.

You will always have green stars unless you have a huge color cast in your image. G2V-calibrated images also produce green stars (depending strongly on the filter passbands). The problem is that the eye's spectral sensitivity and the usual astronomical RGB filters have very different bandpasses. This is the spectral sensitivity of our eye:

(http://www.normankoren.com/Human_spectral_sensitivity_small.jpg)
http://www.normankoren.com (http://www.normankoren.com)

Note that the red peak is much closer to the green peak than it usually is in RGB filter sets, and that the two curves overlap considerably. Being continuum emitters, stars cannot show a greenish tint through these sensitivity curves, so we cannot see green stars. But a filter set like the one below is very prone to showing green stars.

(http://www.astrodon.com/custom/_2e2a/content/images/AstrodonLRGB_EYellow.jpg)
http://www.astrodon.com (http://www.astrodon.com)

After these considerations, we can state that calibrating to a G2V star does not mean we are displaying the same star colors as our eyes would see them; yet showing star colors as the eye would see them is precisely the main argument behind the G2V color calibration concept.


Regards,
Vicent.
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 04:45:27
Quote
You will always have green stars unless you have a huge color cast in your image.

Why do I never see them in any image?



Title: Re: About our color calibration methodology
Post by: georg.viehoever on 2010 November 17 04:52:17
I would like to add one more aspect to the discussion: human perception of color is highly dependent on the environment. If this were not the case, we would see everything with a red base color under ordinary artificial light, because most lamps emit light that is much redder than natural sunlight (=G2V). Luckily, our visual system has something like the automatic white balance function of cameras...

I once was in a room that was lit by lamps with a strong red tone. After a while, everything in this room seemed to have natural color, while everything outside of the room had a strong green cast. This was a very surprising experience for me. If you would like to see something similar: Look at a strong red light for a minute or so, and then look at a supposedly white sheet of paper.

I think it is quite natural for most images to use a color balance that results in a white impression for objects that most people would expect to be white. And this usually is the "average" star, some star cluster or galaxy. G2V also does have merits, but "true" perceived color is certainly not one of them.

Georg
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 04:52:41
Quote
You will always have green stars unless you have a huge color cast in your image.

Why do I never see them in any image?





Because there are techniques to limit green values. SCNR is one of them.


V.
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 04:57:55
Quote
I think it is quite natural for most images to use a color balance that results in a white impression for objects that most people would expect to be white. And this usually is the "average" star, some star cluster or galaxy. G2V also does have merits, but "true" perceived color is certainly not one of them.

That's the point. I fully accept that, by calibrating the colors with a spiral galaxy, you don't get real colors. In fact, I'm no longer interested in the (boring, to me) concept of real color. But, through the years, a key selling point of G2V calibration has become that it represents star colors as we would see them. This is not true, IMHO.


V.
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 05:21:50
Quote
You will always have green stars unless you have a huge color cast in your image.
Why do I never see them in any image?
Because there are techniques to limit green values. SCNR is one of them.

But if our image is well color balanced, why "change" the stars that look green to something else?

I'm sorry, this is really weird to me, so please bear with me.

Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 05:38:50
Quote
But, through the years, a key selling point of G2V calibration has become that it represents star colors as we would see them. This is not true, IMHO.

Isn't that what I said a few posts back? Quote: "The problem, as I see it, is not in what method is being used but in believing that using this or that method will give us a more accurate, or neutral image."
I think, as far as methods and paradigms go, I'm done here.

But I'm still very interested about the green stars...

BTW Georg, I did that thing but wearing green glasses (I think they were green) during a Halloween party, for 5 minutes... After taking them off, everything was RED.

Title: Re: About our color calibration methodology
Post by: Carlos Milovic on 2010 November 17 05:47:47
The point regarding green stars is that black body radiation, when it peaks at green wavelengths, has a fairly flat spectrum across the visible range (well, not that flat, but flat enough not to reveal a strong green cast). At least visually, you won't be able to see a green star through a scope; you'll see it as white.
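
A quick numerical check of that point (a rough Python sketch; the band edges and temperatures are arbitrary illustration values, not real filter data):

# Integrate the Planck spectrum over three broad visual bands and compare.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant (SI)

def planck(wl, temp):
    # Black body spectral radiance B_lambda, in arbitrary comparison units.
    return 1.0 / (wl ** 5 * np.expm1(H * C / (wl * KB * temp)))

def band_fractions(temp):
    bands = {"B": (400e-9, 500e-9), "G": (500e-9, 600e-9), "R": (600e-9, 700e-9)}
    flux = {}
    for name, (lo, hi) in bands.items():
        wl = np.linspace(lo, hi, 200)
        flux[name] = planck(wl, temp).mean() * (hi - lo)   # average radiance times bandwidth
    total = sum(flux.values())
    return {name: round(f / total, 2) for name, f in flux.items()}

for t in (3000, 5500, 10000):
    print(t, band_fractions(t))
# Around ~5500 K, where the spectrum peaks in the green, the three bands carry
# comparable flux, so the star looks whitish rather than green; at 3000 K the
# red band dominates and at 10000 K the blue band does.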
Title: Re: About our color calibration methodology
Post by: Carlos Milovic on 2010 November 17 05:50:58
See here for examples:
http://www.newworldencyclopedia.org/entry/Image:PlanckianLocus.png

http://www.midnightkite.com/color.html
Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 06:11:04
Quote
See here for examples:
http://www.newworldencyclopedia.org/entry/Image:PlanckianLocus.png
http://www.midnightkite.com/color.html

Yes, this is another point. But it is emphasized by the fact that the R and G filters are usually far more separated than our eye's sensitivity curves.

V.
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 07:31:48
Thanks for the responses so far...

But I'm sorry, has my last question been answered?

I understand why we do not see green stars. I think I understand why, while our eyes can't see them as green, certain filters could produce them...

My last question is, if our image shows green stars, why do we "change" them in our image to something else?

Title: Re: About our color calibration methodology
Post by: mmirot on 2010 November 17 08:10:46
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structure detection rather than a galaxy.

Max

Title: Re: About our color calibration methodology
Post by: vicent_peris on 2010 November 17 08:25:42
Quote
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structure detection rather than a galaxy.

Max



Hi Max,

in my experience, star colors vary greatly with filter transmission / sensor QE curves. For example, I remember that, when I worked with a DSLR, I always had pink stars. :)

Honestly, I don't know which filter set Oriol and Ivette used for this image.


V.
Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2010 November 17 08:54:29
Quote
if our image shows green stars, why do we "change" them in our image to something else?

Ideally, if we acquire RGB images with a filter set well correlated with human vision, then there should be no green stars. For example, it is very difficult (impossible?) to get any green stars with a DSLR camera. If our filters don't have good crossovers and/or their transmittance peaks don't correspond well to the sensitivity peaks of human vision for each color, then we can consider green stars as an instrumental defect —on the basis that our initial goal is reproducing red, green and blue as we perceive them under normal conditions— and try to fix them. This is how I see this problem, at least; others may have different interpretations.
Title: Re: About our color calibration methodology
Post by: mmirot on 2010 November 17 09:11:32
Quote
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structure detection rather than a galaxy.

Max


Hi Max,

in my experience, star colors vary greatly with filter transmission / sensor QE curves. For example, I remember that, when I worked with a DSLR, I always had pink stars. :)

Honestly, I don't know which filter set Oriol and Ivette used for this image.

V.

Looking back at the examples:
The first set had very bad green or blue-green fringes on the stars.
Perhaps increased FWHM in blue and/or green?
This makes the stars appear much greener, especially in the PI-calibrated image.

The reprocessed second set is much better. That is, the fringing is not as bad and star colors appear less green.

What I am saying is that the fringes are biasing our perceptions to some degree. The central color may not be as green as one might think.

Max


Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2010 November 17 10:03:31
We can explore green pixels easily with PixelMath. See this example with the galaxy-calibrated image:

(http://forum-images.pixinsight.com/20101117/green-detect-1-tn.jpg)
Click to see a full size version. (http://forum-images.pixinsight.com/20101117/green-detect-1.jpg)

The PixelMath expression is:

0.4 * ($T[1] > $T[0] && $T[1] > $T[2])

This expression builds a binary mask that is white only for pixels where the green component is greater than both red and blue. It then multiplies the mask by 0.4 to make it translucent. The mask must be activated inverted.

As shown on the screenshot above, there are indeed a few green stars in this image. However the above expression is qualitative. To quantify the excess of green we can use a slightly modified version:

(http://forum-images.pixinsight.com/20101117/green-detect-2-tn.jpg)
Click to see a full size version. (http://forum-images.pixinsight.com/20101117/green-detect-2.jpg)

The above example uses this expression:

0.4 * ($T[1] - $T[0] > t && $T[1] - $T[2] > t)

with the symbol t = 0.025. The mask generated this way is nonzero only for pixels where the green component is at least 2.5% larger than both red and blue. With this constraint, we see that only a few peripheral star areas show a green dominance of 2.5% or more.

Just to add a bit of analysis :)
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 10:20:25
Quote
if our image shows green stars, why do we "change" them in our image to something else?

Ideally, if we acquire RGB images with a filter set well correlated with human vision, then there should be no green stars. For example, it is very difficult (impossible?) to get any green stars with a DSLR camera. If our filters don't have good crossovers and/or their transmittance peaks don't correspond well to the sensitivity peaks of human vision for each color, then we can consider green stars as an instrumental defect —on the basis that our initial goal is reproducing red, green and blue as we perceive them under normal conditions— and try to fix them. This is how I see this problem, at least; others may have different interpretations.

I can see that, but if the problem is that the filters' transmittance peaks don't correspond well to the sensitivity peaks of human vision, and our initial goal is reproducing red, green and blue as we perceive them under normal conditions, while the principle being discussed is precisely that our vision system shouldn't be used as a reference, then shouldn't those who try to avoid "the human bias" leave the "green" stars alone?

Note that I have no problem killing green stars, and I will continue doing it if I get them (most of the time I do an SCNR pass as part of my workflow anyway). I'm just curious about this whole thing...


Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2010 November 17 10:53:45
Quote
but the principle being discussed is precisely that our vision system shouldn't be used as a reference, shouldn't those who try to avoid "the human bias" then leave the "green" stars alone?

We have two different items here. They may seem related but they are not.

When we say that the human vision system is not suitable as a reference for color calibration of representations of the deep sky, we refer to using solar type stars as white references with the purpose of achieving "real color". We say that such a thing is an illusion, for the reasons that we have explained in this thread and others. Actually, the problem is not with G2V. One could use any spectral type as white and our objections would be the same. We have described other methods, better in our opinion, that do not consider (or, to be more precise, try to avoid considering) any particular spectral type as a white reference.

The other item is how we represent different wavelengths in our images. This is a completely arbitrary decision. Mapping wavelengths to the usual red-green-blue sequence is just one possibility. For example, in narrowband imaging, we often represent different emissions with arbitrary color palettes, often completely unrelated to their position in the spectrum in terms of human vision. The Hubble palette is a prime example. As another example, one could perform a color calibration with Vicent's galaxy method and then remap colors by swapping red and green, or red and blue, etc. Why not, if for a given image that remapping achieves a better visual separation of structures, or any other improvement that could be desirable for documentary or communication purposes? Or just because the author thinks the image has more aesthetic value that way. As long as coherent and well-founded criteria are used, we have no problems with such things.
Title: Re: About our color calibration methodology
Post by: RBA on 2010 November 17 11:28:12
Quote
Why not, if for a given image that remapping achieves a better visual separation of structures, or any other improvement that could be desirable for documentary or communication purposes? Or just because the author thinks the image has more aesthetic value that way. As long as coherent and well-founded criteria are used, we have no problems with such things.

Exactly... Why not?  But then, don't you depart from the unbiased philosophy of color?
I believe that changing the color of stars for documentary, communication or aesthetic purposes is not unbiased, but please correct me if I'm wrong.


Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2010 November 18 01:28:06
Quote
Credibility can be diminished not when someone cheats, but when they don't provide enough proof (burden of proof).
And that is what Vicent did this time with his first post, just three jpegs and "have a go at it"... which is the post that caused me to warn him to be careful.

I disagree. I think Vicent provided a good amount of proof:

- He published the raw image, the image calibrated with eXcalibrator, and the image calibrated with PixInsight. These are not "just three jpegs"; they are stretched JPEG versions of the working images. Obviously, since the working images are linear —because a color calibration procedure cannot be carried out with nonlinear images—, the JPEG versions have been stretched and slightly processed in order to evaluate the results; otherwise they would be almost black! We figured that both facts —the linearity of the data and the need to show stretched versions— are so obvious that it was unnecessary to comment on them. Of course, both resulting images received strictly the same treatment, except for the color calibration step, which is the target of the test, because otherwise the test would be invalid. That goes without saying, and again it is so obvious that no explanations seemed necessary.

- Detailed numeric data about the eXcalibrator calibration: the number of stars used (12), the survey (Sloan), the dispersion in the results (0.02 in G and 0.03 in B), and the resulting weights (R=1, G=0.746 and B=0.682).

- Detailed data about the color calibration performed with ColorCalibration in PixInsight: the galaxy used as white reference (M66) and the resulting weights (R=1.055, G=1.1 and B=1).

We are used to working —especially Vicent— in academic environments, so we do know something about how to make objective tests and comparisons. Many examples and tests published in professional academic papers provide less data than what I've described above.

Of course one can ask for more explanations, detailed descriptions and more quantitative data (although in this case there are really no more quantitative data beyond the numbers that have already been published). Those requests are reasonable; I know Vicent is working on an elaborate, explanatory step-by-step example right now, including screenshots.

The quality and value of an objective test can be criticized or questioned based on objective technical criteria. It can also be discussed based on conceptual and philosophical considerations. However, what cannot happen —should never happen— is that the credibility of the author gets diminished because the JPEG versions of the resulting images are not as somebody would expect, or because they just "look weird". That is an unscientific attitude.

Quote
The talk-behind-the-back in this discipline (and so many others) sometimes is quite bad, unfortunately.

That's true but I am not interested in that. So case closed for me too.
Title: Re: About our color calibration methodology
Post by: georg.viehoever on 2010 November 18 08:04:25
Regarding the question of why we usually don't see green casts in normal images: this might come from the fact that applying a HistogramTransformation reduces the strength of colors for bright objects. I did experiments on this in http://pixinsight.com/forum/index.php?topic=1689.msg10371#msg10371, see the top row of the screenshot. You can recover some of the color with a strong saturation boost, and color is then most prominent in the not-so-bright portions of the image. This might be the reason why, in Vicent's image, the green cast is mostly visible in star halos or in rather dim stars.
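
A quick numerical illustration of this idea (a Python sketch; the pixel values and the midtones balance are made-up numbers, and the midtones transfer function used is the usual one behind histogram stretches):

# Show how a nonlinear stretch compresses the color of bright pixels more
# than that of dim pixels with the same channel ratios.
import numpy as np

def mtf(x, m):
    # Midtones transfer function: identity for m = 0.5, brightening for m < 0.5.
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

bright = np.array([0.70, 0.80, 0.65])   # R, G, B of a bright, slightly green pixel
dim = bright / 10.0                     # same color, ten times fainter

for name, px in (("bright", bright), ("dim", dim)):
    stretched = mtf(px, 0.1)            # a strong stretch
    print(name, "G/R before:", round(px[1] / px[0], 3), "after:", round(stretched[1] / stretched[0], 3))
# The green excess survives the stretch much better in the dim pixel than in
# the bright one, which is consistent with casts showing up mostly in halos
# and faint stars.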

Just a theory.

Georg
Title: Re: About our color calibration methodology
Post by: edif300 on 2010 November 19 14:37:11
Even if we aren't able to see them as green, are there any green stars in the universe?

With perfect filters... what colour would be registered on a CCD?
Title: Re: About our color calibration methodology
Post by: georg.viehoever on 2010 November 22 01:33:00
Hi Edif,

Actually, the sun is a star whose spectrum peaks at green wavelengths; only our visual system is calibrated to see it as white.

All stars (as opposed to some nebulae, such as H-alpha emission nebulae) emit a continuous spectrum of light. The balance of light depends on their temperature - this is why we perceive hot stars as blue and cool stars as red. Depending on how we weight the different contributions of light, we will see different colors - it is easy to choose a white balance that will turn Arcturus into a white star...
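
A small sketch of that last point (Python; black body samples at three wavelengths stand in for R/G/B responses, and ~4300 K is an assumed, approximate temperature for a K-type giant like Arcturus):

# Any continuum source can be rendered "white" by choosing suitable channel weights.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
WAVELENGTHS = np.array([650e-9, 550e-9, 450e-9])   # crude R, G, B sample points

def planck(wl, temp):
    return 1.0 / (wl ** 5 * np.expm1(H * C / (wl * KB * temp)))

reference = planck(WAVELENGTHS, 4300.0)
weights = reference.max() / reference              # weights that make the 4300 K star neutral

for temp in (3000.0, 4300.0, 5800.0, 10000.0):
    rgb = planck(WAVELENGTHS, temp) * weights
    print(int(temp), np.round(rgb / rgb.max(), 2))
# With this white balance the 4300 K star comes out neutral; cooler stars come
# out redder and hotter stars bluer. Choosing a different reference temperature
# simply shifts where "white" falls, which is exactly the arbitrariness discussed
# in this thread.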

See http://pixinsight.com/forum/index.php?topic=2562.msg17275#msg17275 for additional information.

Georg
Title: Re: About our color calibration methodology
Post by: edif300 on 2010 November 22 13:17:25
Thanks Georg for reply.

Best regards,
Iñaki
Title: Re: About our color calibration methodology
Post by: neuling on 2012 June 10 09:30:30
What do you think about the method used in Theli? The images are registered against a star catalog, and a color calibration based on that catalog is also applied.
Title: Re: About our color calibration methodology
Post by: Geoff on 2012 June 10 15:21:12
Quote
Hi Edif,

Actually, the sun is a star whose spectrum peaks at green wavelengths; only our visual system is calibrated to see it as white.
And our visual system can change its calibration very rapidly. Move to a planet orbiting a red dwarf and it will look white within a day.  Another example I like is the experience of looking at a computer shielded by a red screen while photographing. The screen looks quite normal once you have been looking at it for a while. Glance quickly at Jupiter after looking at the screen and Jupiter is a nice green colour.
Geoff
Title: Re: About our color calibration methodology
Post by: Juan Conejero on 2012 June 13 00:27:27
Quote
What do you think about the method used in Theli? The images are registered against a star catalog, and a color calibration based on that catalog is also applied.

Another implementation of the G2V method. The problem is not with the accuracy of star measurements (although there are significant errors in the photometry data of large catalogs). Color in deep-sky astrophotography is a purely conceptual matter. The accuracy of an implementation is much less important than the reasons and the understanding behind the decision of choosing a particular type of objects as a white reference.

I have already sufficiently explained my opinions about the ideas of "real color" and "natural color" in DS AP. I have also said what I think about the G2V method many times. Color in DS AP isn't a matter of "how", but a matter of "why".