About our color calibration methodology

Carlos Milovic said:
See here for examples:
http://www.newworldencyclopedia.org/entry/Image:planckianLocus.png

http://www.midnightkite.com/color.html

Yes, this is another point. But it is accentuated by the fact that R and G filters are usually far more separated than our eyes' sensitivity curves.

V.
 
Thanks for the responses so far...

But I'm sorry, has my last question been answered?

I understand why we do not see green stars. I think I understand why, even though our eyes can't see them as green, certain filters could produce them...

My last question is, if our image shows green stars, why do we "change" them in our image to something else?

 
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structures rather than a galaxy.

Max

 
mmirot said:
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structures rather than a galaxy.

Max

Hi Max,

in my experience, star colors vary widely with filter transmission / sensor QE curves. For example, I remember that, when I worked with a DSLR, I always had pink stars. :)

Honestly, I don't know which filter set Oriol and Ivette used for this image.


V.
 
if our image shows green stars, why do we "change" them in our image to something else?

Ideally, if we acquire RGB images with a filter set well correlated with human vision, then there should be no green stars. For example, it is very difficult (impossible?) to get any green stars with a DSLR camera. If our filters don't have good crossovers and/or their transmittance peaks don't correspond well to the sensitivity peaks of human vision for each color, then we can consider green stars as an instrumental defect (on the basis that our initial goal is reproducing red, green and blue as we perceive them under normal conditions) and try to fix them. This is how I see this problem, at least; others may have different interpretations.
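To put some (invented) numbers behind this, here is a small Python sketch. The Gaussian passbands below are made up for illustration only; they are not any real filter set, and perceived color involves much more than raw channel integrals:

Code:
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_nm, T):
    """Blackbody spectral radiance (arbitrary units) at wavelength wl_nm."""
    wl = wl_nm * 1e-9
    return 1.0 / (wl**5 * (np.exp(H * C / (wl * K * T)) - 1.0))

wl = np.linspace(380.0, 780.0, 801)  # visible band, nm

# Hypothetical well-separated RGB imaging filters: (center, sigma) in nm.
bands = {"R": (650.0, 25.0), "G": (530.0, 25.0), "B": (450.0, 25.0)}

for T in (3500.0, 5800.0, 10000.0):  # cool, solar-type, hot star
    spec = planck(wl, T)
    # Uniform wavelength grid, so a plain sum is fine for relative values.
    rgb = {c: float(np.sum(spec * np.exp(-0.5 * ((wl - mu) / s) ** 2)))
           for c, (mu, s) in bands.items()}
    n = max(rgb.values())
    print(f"T = {T:5.0f} K:",
          "  ".join(f"{c} = {v / n:.2f}" for c, v in rgb.items()))

With these made-up passbands, the raw channel ratios come out red-dominant for the cool star, slightly green-dominant for the solar-type star, and blue-dominant for the hot star; the green dominance is exactly what a subsequent white balance would remove.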
 
vicent_peris said:
mmirot said:
Looking at the examples, it appears there are some blue-green fringes around many of the stars. This is more prominent with the PI method in this image. I think this increases the green star perception.

What do you think?

In general, I don't see green stars in my images using the PI method.
Generally, I am using star structures rather than a galaxy.

Max

Hi Max,

in my experience, star colors vary widely with filter transmission / sensor QE curves. For example, I remember that, when I worked with a DSLR, I always had pink stars. :)

Honestly, I don't know which filter set Oriol and Ivette used for this image.


V.

Looking back at the examples: the first set had very bad green or blue-green fringes on the stars. Perhaps increased FWHM in the blue and/or green channels? This makes the stars appear much more green, especially in the PI-calibrated image.

The reprocessed second set is much better. That is, the fringing is not as bad and the star colors appear less green.

What I am saying is that the fringes are biasing our perceptions to some degree. The central color may not be as green as one may think.

Max


 
We can explore green pixels easily with PixelMath. See this example with the galaxy-calibrated image:


The PixelMath expression is:

Code:
0.4 * ($T[1] > $T[0] && $T[1] > $T[2])

This expression builds a binary mask that is white only for pixels where the green component is greater than both red and blue, then multiplies it by 0.4 to generate a translucent mask. The mask must be activated inverted.
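For those who want to play with the same idea outside PixInsight, a rough NumPy equivalent could look like this (a sketch only; the img array is a hypothetical stand-in for the linear RGB working image):

Code:
import numpy as np

# Hypothetical stand-in for the linear RGB working image, shape (H, W, 3).
img = np.random.rand(512, 512, 3).astype(np.float32)
R, G, B = img[..., 0], img[..., 1], img[..., 2]

# 0.4 where green exceeds both red and blue; 0 elsewhere.
mask = 0.4 * ((G > R) & (G > B)).astype(np.float32)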

As shown in the screenshot above, there are indeed a few green stars in this image. However, the above expression is purely qualitative. To quantify the green excess, we can use a slightly modified version:


The above example uses this expression:

Code:
0.4 * ($T[1] - $T[0] > t && $T[1] - $T[2] > t)

with the symbol t = 0.025. The mask generated this way is nonblack only for pixels where the green component is at least 2.5% larger than both red and blue. With this constraint, we see that only a few peripheral areas of stars are represented with a green dominance of 2.5% or greater.
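Continuing the NumPy sketch above, the quantitative variant translates directly:

Code:
# Require green to exceed both red and blue by at least t = 2.5%.
t = 0.025
mask = 0.4 * ((G - R > t) & (G - B > t)).astype(np.float32)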

Just to add a bit of analysis :)
 
Juan Conejero said:
if our image shows green stars, why do we "change" them in our image to something else?

Ideally, if we acquire RGB images with a filter set well correlated with human vision, then there should be no green stars. For example, it is very difficult (impossible?) to get any green stars with a DSLR camera. If our filters don't have good crossovers and/or their transmittance peaks don't correspond well to the sensitivity peaks of human vision for each color, then we can consider green stars as an instrumental defect (on the basis that our initial goal is reproducing red, green and blue as we perceive them under normal conditions) and try to fix them. This is how I see this problem, at least; others may have different interpretations.

I can see that. But if the problem is that the filters' transmittance peaks "don't correspond well to the sensitivity peaks of human vision" and "our initial goal is reproducing red, green and blue as we perceive them under normal conditions", while the principle being discussed is precisely that our vision system shouldn't be used as a reference, then shouldn't those who try to avoid "the human bias" leave the "green" stars alone?

Note I have no problem killing green stars, and I will continue doing it if I get them (most of the time I do an SCNR pass as part of my workflow anyway). I'm just curious about this whole thing...


 
while the principle being discussed is precisely that our vision system shouldn't be used as a reference, then shouldn't those who try to avoid "the human bias" leave the "green" stars alone?

We have two different items here. They may seem related but they are not.

We have two different items here. They may seem related, but they are not.

When we say that the human vision system is not suitable as a reference for color calibration of representations of the deep sky, we refer to using solar-type stars as white references with the purpose of achieving "real color". We say that such a thing is an illusion, for the reasons that we have explained in this thread and others. Actually, the problem is not with G2V. One could use any spectral type as white and our objections would be the same. We have described other methods, better in our opinion, that do not consider (or, to be more precise, try to avoid considering) any particular spectral type as a white reference.

The other item is how we represent different wavelengths in our images. This is a completely arbitrary decision. Mapping wavelengths to the usual red-green-blue sequence is just one possibility. For example, in narrowband imaging we often represent different emissions with arbitrary color palettes, often completely unrelated to their positions in the spectrum in terms of human vision. The Hubble palette is a prime example. As another example, one could perform a color calibration with Vicent's galaxy method and then remap colors by swapping red and green, or red and blue, etc. Why not, if for a given image that remapping achieves a better visual separation of structures, or any other improvement that could be desirable for documentary or communication purposes? Or just because the author thinks the image has more aesthetic value that way. As long as coherent and well-founded criteria are used, we have no problem with those things.
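As a trivial illustration of such a remapping, here is a sketch with hypothetical data (not a recommendation for any particular image):

Code:
import numpy as np

# Hypothetical RGB image as a float array of shape (H, W, 3).
img = np.random.rand(4, 4, 3).astype(np.float32)

# Swap the red and green channels by reindexing the channel axis.
remapped = img[..., [1, 0, 2]]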
 
Juan Conejero said:
Why not, if for a given image that remapping achieves a better visual separation of structures, or any other improvement that could be desirable for documentary or communication purposes? Or just because the author thinks the image has more aesthetic value that way. As long as coherent and well-founded criteria are used, we have no problem with those things.

Exactly... Why not?  But then, don't you depart from the unbiased philosophy of color?
I believe that changing the color of stars for documentary, communication, or aesthetic purposes is not unbiased, but please correct me if I'm wrong.


 
Credibility can be diminished not when someone cheats, but when they don't provide enough proof (burden of proof).
And that is what Vicent did this time with his first post: just three JPEGs and "have a go at it"... which is the post that caused me to warn him to be careful.

I disagree. I think Vicent provided a good amount of proof:

- He published the raw image, the image calibrated with eXcalibrator, and the image calibrated with PixInsight. These are not "just three JPEGs"; they are stretched JPEG versions of the working images. Obviously, since the working images are linear (because a color calibration procedure cannot be carried out with nonlinear images), the JPEG versions have been stretched and slightly processed in order to evaluate the results; otherwise they would be almost black! We figured that both facts (linearity of the data and the need to show stretched versions) are so obvious that it was unnecessary to comment on them. Of course, both resulting images received strictly the same treatment, except for the color calibration step, which is the target of the test, because otherwise the test would be invalid. That's completely out of the question and, again, so obvious that no explanations seemed necessary.

- Detailed numeric data about the eXcalibrator calibration: the number of stars used (12), the survey (Sloan), the dispersion in the results (0.02 in G and 0.03 in B), and the resulting weights (R=1, G=0.746 and B=0.682).

- Detailed data about the color calibration performed with ColorCalibration in PixInsight: the galaxy used as white reference (M66) and the resulting weights (R=1.055, G=1.1 and B=1).
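For context on what these numbers mean: such weights are simply per-channel scale factors applied to the linear RGB data. A minimal sketch in Python, using the eXcalibrator figures quoted above with a hypothetical image:

Code:
import numpy as np

img = np.random.rand(512, 512, 3).astype(np.float32)  # hypothetical linear RGB
weights = np.array([1.0, 0.746, 0.682], dtype=np.float32)  # R, G, B
balanced = img * weights  # broadcasts the scaling over the channel axis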

We are used to working (especially Vicent) in academic environments, so we really know something about how to make objective tests and comparisons. Many examples and tests published in professional scientific papers provide less data than what I've described above.

Of course, one can ask for more explanations, detailed descriptions and more quantitative data (although in this case there really are no more quantitative data beyond the numbers that have already been published). Those requests are reasonable; I know Vicent is working on a detailed, explanatory step-by-step example right now, including screenshots.

The quality and value of an objective test can be criticized or questioned based on objective technical criteria. It can also be discussed based on conceptual and philosophical considerations. However, what cannot happen (and should never happen) is that the credibility of the author gets diminished because the JPEG versions of the resulting images are not what somebody would expect, or because they just "look weird". That is an unscientific attitude.

The talking behind people's backs in this discipline (and so many others) is sometimes quite bad, unfortunately.

That's true, but I am not interested in that. So case closed for me too.
 
Regarding the question of why we usually don't see green casts in normal images: this might come from the fact that applying a HistogramTransformation reduces the strength of colors for bright objects. I did experiments on this in http://pixinsight.com/forum/index.php?topic=1689.msg10371#msg10371 (see the top row of the screenshot). You can recover some of the color by using a strong saturation boost, and color is then most prominent in the not-so-bright portions of the image. This might be the reason why, in Vicent's image, the green cast is mostly visible in the halos of stars or in rather dim stars.
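A quick numeric illustration of this idea (a sketch: mtf() below is the standard midtones transfer function used by HistogramTransformation, and the pixel values are made up):

Code:
def mtf(m, x):
    """Midtones transfer function with midtones balance m."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

m = 0.05  # a strong stretch, as typically needed for linear deep-sky data

# Two pixels with the same linear G/R ratio of 1.2: one faint, one bright.
for r, g in ((0.01, 0.012), (0.5, 0.6)):
    print(f"linear G/R = {g / r:.2f} -> stretched G/R = "
          f"{mtf(m, g) / mtf(m, r):.2f}")

The faint pixel keeps most of its color ratio after the stretch, while for the bright pixel the ratio collapses toward 1, that is, toward white.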

Just a theory.

Georg
 
Even if we aren't able to see them as green, are there any green stars in the universe?

With perfect filters, which colour would be registered on a CCD?
 
Hi Edif,

actually, the sun is a star that emits most of its energy in green light. Only our visual system is calibrated to see it as white.

All stars (as opposed to some nebulae, such as H-alpha emission regions) emit a continuous spectrum of light. The balance of light depends on their temperature; this is why we perceive hot stars as blue and cool stars as red. Depending on how we weight the different contributions of light, we will see different colors; it is easy to choose a white balance that will turn Arcturus into a white star...
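A quick check with Wien's displacement law (a Python sketch; the temperatures are round figures):

Code:
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

for name, T in (("cool red star", 3500.0),
                ("the Sun", 5778.0),
                ("hot blue star", 10000.0)):
    # Wavelength of peak blackbody emission, converted from m to nm.
    print(f"{name} (T = {T:.0f} K): peak emission at {WIEN_B / T * 1e9:.0f} nm")

The Sun's peak lands near 501 nm, in the green; the perceived color, of course, comes from the balance of light across the whole visible band, not from the peak alone.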

See http://pixinsight.com/forum/index.php?topic=2562.msg17275#msg17275 for additional information.

Georg
 
What do you think about the method used in Theli? The images are registered against a star catalog, and a color calibration is also performed using it.
 
georg.viehoever said:
Hi Edif,

actually, the sun is a star that emits most of its energy in green light. Only our visual system is calibrated to see it as white.
And our visual system can change its calibration very rapidly. Move to a planet orbiting a red dwarf and it will look white within a day. Another example I like is the experience of looking at a computer screen covered with a red filter while photographing. The screen looks quite normal once you have been looking at it for a while. Glance quickly at Jupiter after looking at the screen, and Jupiter is a nice green colour.
Geoff
 
What do you think about the method used in Theli? The images are registered against a star catalog, and a color calibration is also performed using it.

Another implementation of the G2V method. The problem is not the accuracy of star measurements (although there are significant errors in the photometric data of large catalogs). Color in deep-sky astrophotography is a purely conceptual matter. The accuracy of an implementation is much less important than the reasons and the understanding behind the decision to choose a particular type of object as a white reference.

I have already sufficiently explained my opinions about the ideas of "real color" and "natural color" in DS AP. I have also said what I think about the G2V method many times. Color in DS AP isn't a matter of "how", but a matter of "why".
 
Can somebody explain how this compares to what is done by PhotometricColorCalibration?

I understand that using ColorCalibration gives you more control over how the image is color calibrated by specifying the white reference. But does PhotometricColorCalibration give you the results described by Juan? Or is it something different?
 