New in PixInsight 1.8.5: PhotometricColorCalibration

Bob

I would like to apologize for the mean comment and others. I was taught better than this by my parents.

I just finished watching the video of the car rampage in Times Square and that brings things like software issues back into perspective.

Again sorry Bob.


Mike
 
Hi Bob,

bob_franke said:
However, what if there is intervening galactic extinction? How do you show a galaxy with its intrinsic color and still correctly display the foreground stars?

I think it's better to show the foreground stars correctly and let the color of the galaxy include the extinction. I call this the café doctrine... that is, Color As From Earth. Or probably more correctly, color from Earth orbit. :)

We are applying the white point to the picture by measuring the star fluxes. From those fluxes, we calculate the RGB weights that an unreddened, face-on spiral galaxy would have. This means that, if your spiral galaxy is reddened by galactic extinction, it will show up in the calibrated picture as redder than the spiral galaxy model that we are applying as the white reference.

Also, I am a bit fuzzy with your basic color theory. Can you give us a definition of a "documentary goal"?

Please, take a look at this document.

Best regards,
Vicent.
 
Color calibration is always an engaging topic that can easily bring out passionate views and opinions.
The way I see it (no pun intended), color is about spectral information within a certain bandwidth. So if one is able to piece together a procedure, from detection to final reproduction (say, through a display), that accurately reproduces such information, then that is the ideal case. Now, in astrophotography we like to go beyond the capabilities of human vision, so that frequent astrophysical phenomena are not left out (e.g., H-alpha emission). Then the question arises as to how to remap that part of the spectrum into the visible range. One possible approach is to compress the spectral information around some middle point in the greens, so that all information is preserved (and could be recovered by an inverse transformation). This, again, is the ideal case.
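To make the remapping idea concrete, here is a minimal sketch, assuming (purely for illustration) a 300-1000 nm detection range compressed linearly into a 400-700 nm visible band centered at 550 nm; the endpoints are arbitrary choices for the sketch, not anyone's actual method:

Code:
// Illustration only: a linear, invertible compression of an extended
// spectral range (assumed 300-1000 nm) into the visible band
// (assumed 400-700 nm), centered in the greens at 550 nm.
function compressToVisible( lambda )
{
   return 550 + (lambda - 550)*(700 - 400)/(1000 - 300);
}

function expandFromVisible( lambdaVis )
{
   return 550 + (lambdaVis - 550)*(1000 - 300)/(700 - 400);
}

// H-alpha at 656.3 nm maps to about 595.6 nm; a 900 nm IR feature
// maps to 700 nm, and the inverse transform recovers the original.
console.writeln( compressToVisible( 656.3 ) );
console.writeln( expandFromVisible( compressToVisible( 900 ) ) );

Since the map is linear and monotonic it loses no spectral information, which is the invertibility property described above.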

In practice, we detect color through three components (typically RGB) that result from passing the signal through three different filters before it hits a detector. Such filters and detectors are not standardized among amateurs, and they add their own idiosyncrasies to color registration. In fact, the problem becomes infinite-dimensional if different filter/detector profiles are to be taken into account (to match those used in photometric catalogs). On top of this, we like to filter out unwanted additive signals, such as light pollution and airglow, and the question of spectral information becomes even more intractable.

What I really like about color calibration using photometric data from stars of different spectral types is that it deals with many of these problems at the same time, if a rich enough model is fitted.

So, the question remains whether a model with three scaling parameters is good enough to recover accurate color balance, given the infinite-dimensional nature of the problem. I know for a fact this is not the case with DSLRs (whose filters have significant cross-talk), where at least a matrix transformation is required. But given the unstandardized nature of color filter transmittances and detector spectral QE profiles, I wonder... And now that we will have access to hundreds of data points via photometric catalogs, maybe it is time to get more ambitious!
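To illustrate the distinction, here is a sketch contrasting a three-parameter (diagonal) calibration with a full 3x3 matrix transformation; the matrix values below are made up for illustration, since a real matrix would have to be fitted against photometric reference data:

Code:
// A diagonal correction can only scale each channel independently;
// off-diagonal matrix terms are needed to undo cross-talk, e.g. the
// green signal leaking into the red channel of a DSLR.
function applyDiagonal( rgb, k )
{
   return [ k[0]*rgb[0], k[1]*rgb[1], k[2]*rgb[2] ];
}

function applyMatrix( rgb, M )
{
   return [ M[0][0]*rgb[0] + M[0][1]*rgb[1] + M[0][2]*rgb[2],
            M[1][0]*rgb[0] + M[1][1]*rgb[1] + M[1][2]*rgb[2],
            M[2][0]*rgb[0] + M[2][1]*rgb[1] + M[2][2]*rgb[2] ];
}

// Example (made-up) matrix: the negative off-diagonal terms subtract
// leaked signal, something no diagonal scaling can reproduce.
var M = [ [  1.20, -0.15, -0.05 ],
          [ -0.10,  1.25, -0.15 ],
          [ -0.05, -0.20,  1.25 ] ];
console.writeln( applyMatrix( [ 0.5, 0.4, 0.3 ], M ) );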

best,
Ignacio
 
1.8.5 will be the best, most powerful and most stable version of PixInsight ever

Not strictly true, Juan - the 'best' version, IMHO, was that original version of PixInsight LE - it was the pebble that you dropped in the ocean that started the tsunami that we all now know and enjoy today as PixInsight.

Yet, somehow, you constantly seem to be able to amaze us!
 
Ignacio said:
So, the question remains if a model with three scaling parameters is good enough to recover accurate color balance, given the infinite dimensional nature of the problem. I know for a fact this is not the case with DSLRs (whose filters have significant cross-talk), where at least a matrix transformation is required. But given the unstandardized nature of color filters transmittance, and detectors spectral QE profiles, I wonder... And now that we will have access to hundreds of data points via photometric catalogs, maybe is time to get more ambitious!

Hi,

This is a different problem. With this tool, or any other that calibrates a white point in your picture, you multiply the RGB channels by calculated ratios. This operation neutralizes the white reference, so you're sure that the white reference will be white in your picture. On the other hand, objects differing from that white point will have different colors in each image depending on several characteristics of the optical system and the acquisition conditions: the atmospheric extinction, the QE curve, or the filter transmission curves. This implies a higher-order correction that, at this moment, cannot be done because we would need standard stars measured with RGB filters. As of today, all the photometric catalogs are built with photometric filters. So, there is still a long way to go in this field...
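As a minimal sketch of the multiplication described above, assuming measured R, G and B fluxes for the white reference and normalization to the green channel (both assumptions of this example, not necessarily what PCC does internally):

Code:
// Given the measured RGB fluxes of the white reference, compute the
// per-channel ratios that make that reference neutral (R = G = B).
function whiteBalanceFactors( refFlux ) // refFlux = [R, G, B]
{
   return [ refFlux[1]/refFlux[0], 1, refFlux[1]/refFlux[2] ];
}

// Example: a reference measured as [0.82, 1.00, 1.31] yields factors
// [~1.22, 1, ~0.76]; apply as R' = k[0]*R, G' = k[1]*G, B' = k[2]*B.
var k = whiteBalanceFactors( [ 0.82, 1.00, 1.31 ] );
console.writeln( k );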

Anyway, the main issue is to have a solid white reference in your picture. Now, you have a complete solution in PixInsight, since it lets you choose between a white reference relative to your picture (by using ColorCalibration) or an absolute reference by using photometric models.

Our implementation has some very strong points:

- The calculated astrometry is extremely accurate thanks to the ImageSolver script by Andrés del Pozo. This is far superior to any other astrometry solution because it uses splines to correct geometrical distortions. Far superior to WCS or any other solution using polynomials (I could show you some examples that are *impossible* to solve by using polynomials).

- The AperturePhotometry script I designed with Andrés is also very powerful and flexible, allowing us to implement a great variety of new tools based on photometry. In the next version you'll be able to use PSF photometry as well, which can be very powerful for crowded fields or for images with heavy optical aberrations.

- Trust me, the linear fit algorithm in PixInsight is magic. It is far better than an ordinary linear regression, and this tool wouldn't work at all without a truly robust linear fit (see the sketch after this list). The Milky Way and the LMC cases are really difficult to calibrate. We really didn't expect any good result from these two images... And it worked! :)
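Since the robust fit does the heavy lifting here, a rough sketch of one robust strategy (iterated sigma clipping around an ordinary least-squares line) may help make the point. This is not the actual PixInsight implementation, only an illustration of why robustness matters:

Code:
// Ordinary least-squares line fit: y ~ intercept + slope*x.
function leastSquares( x, y )
{
   var n = x.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
   for ( var i = 0; i < n; ++i )
   {
      sx += x[i]; sy += y[i]; sxx += x[i]*x[i]; sxy += x[i]*y[i];
   }
   var b = (n*sxy - sx*sy)/(n*sxx - sx*sx);
   return { slope: b, intercept: (sy - b*sx)/n };
}

// Refit after rejecting points more than kSigma RMS residuals away,
// until no more points are rejected. Stars with bad photometry are
// thus discarded instead of dragging the fitted line.
function robustFit( x, y, kSigma, maxIter )
{
   var f = leastSquares( x, y );
   for ( var iter = 0; iter < maxIter; ++iter )
   {
      var s = 0;
      for ( var i = 0; i < x.length; ++i )
      {
         var r = y[i] - (f.intercept + f.slope*x[i]);
         s += r*r;
      }
      s = Math.sqrt( s/x.length );
      var xk = [], yk = [];
      for ( var i = 0; i < x.length; ++i )
         if ( Math.abs( y[i] - (f.intercept + f.slope*x[i]) ) <= kSigma*s )
         {
            xk.push( x[i] ); yk.push( y[i] );
         }
      if ( xk.length == x.length || xk.length < 2 )
         return f; // converged, or too few points left
      x = xk; y = yk;
      f = leastSquares( x, y );
   }
   return f;
}

An ordinary regression minimizes the sum of squared residuals, so a few saturated or blended stars can pull the slope arbitrarily far; the clipping loop rejects them iteratively until the fit stabilizes.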

Beyond this tool, Juan has evolved the development platform, so now you can build new modules that use JavaScript scripts. This will be very important in the future. A direct benefit is that you'll be able to use scripts with non-modal windows simply by developing the UI in C++.


Best regards,
Vicent.
 
Thanks for adding this feature!  Color is one of the tougher areas of PixInsight to get good results in.  Will the new DynamicBackground tool be in 1.8.5 too?
 
akulapanam said:
Thanks for adding this feature!  Color is one of the tougher areas of PixInsight to get good results in.  Will the new DynamicBackground tool be in 1.8.5 too?

DynamicBackground, which will replace DBE with a much more advanced and flexible interactive background modeling tool, will be available during the 1.8.5 cycle. The new tool is already designed but only partially implemented, so it still requires work and a lot of testing. I am not sure if I'll be able to include it in the initial 1.8.5 version. I won't include it if that means delaying the release too much; in that case, I'll try to release it as soon as possible as an update, hopefully during June/July.
 
That is a great new feature and will attract even more people from the scientific community. Please continue
with the great work, even when some replies come with strange and weird statements:

bob_franke: "It is unfortunate that the PixInsight developers are too lazy to write complete and easy to understand documentation. You should not be relying on others to write books and provide tutorials."

Please look around in the IT and scientific literature: why are there thousands of books on software such as Maple, Mathematica, Matlab, ..., Photoshop, Corel Draw, ..., OS X, etc.?
Are all those companies just too stupid to provide proper and easy-to-understand documentation? I am sorry, but your statement is pure nonsense.

bob_franke:"The general consensus is that if you don't understand the math... that's your problem. What little help there is, is written at a level only a mathematician can understand."

I personally enjoy the documentation, and I especially like seeing some formulas for a deeper understanding of the algorithm used in the tool I am applying. I am frequently
prototyping something in JavaScript, and the formulas help guide me.

There is the beautiful statement of Lewin: "Nothing is more practical than a good theory."

Cheers
Thomas
 
Hi Bob,

Several years ago you said:
The concepts of "true color" and "natural color" are illusions in deep-sky astrophotography. Such things don't exist. The main reason is that a deep-sky image represents objects far beyond the capabilities of the human vision system.

So which is it? You seem to have changed your mind.

Not at all. I think exactly the same today: color is purely conventional in astrophotography. The best example of this is narrowband imaging, where one has to use an arbitrary color mapping convention or palette in order to represent different wavelengths outside the RGB band as an RGB image. As long as your rendition is consistent throughout the whole image, any palette is valid, although some palettes will allow you to represent the data better than others. The same is true for RGB data. For example, nothing stops you from exchanging the red and blue components if you have a good reason to do so, either from a purely aesthetic perspective, or for the sake of information representation in a particular case.

For conventional RGB color representations, where the R, G and B components are to be represented as red, green and blue colors respectively, any white reference is applicable for the same reason. This is why we provide a large set of selectable white references in PCC, including most spectral types and a number of galaxy types, among other options. However, the choice of a white reference may have a strong impact on the documentary value of the image in our opinion, and this is a very important point for us. We think that no spectral type, including G2V, is suitable as a white reference because, in general, no particular star is representative of the objects being shown in a deep-sky image. On the contrary, the integrated light from a spiral galaxy may provide a combined source of all of the existing spectral types and deep-sky objects, which makes it an excellent neutral, unbiased white reference for RGB deep-sky data. An unbiased reference is essential to generate a rendition that can maximize information representation, which is a crucial goal for us. For this reason, the default white reference in PCC has been generated from the average fluxes of Sb, Sc and Sd galaxies, or what we call the average spiral galaxy reference.

Then you said in this announcement:
If you want to persist in making common conceptual mistakes, you will be able to use the G2V spectral type as a white reference, but PCC will allow you to select virtually any spectral type, along with several galaxy types, to calibrate the color of your images automatically and accurately in PixInsight.

I find this statement incredibly arrogant. Who are you to say that your color philosophy is better than anyone else's? Also, you are again stating that PCC provides "accurate" color, which you previously stated does not exist. At least one astrophysicist and many of the best astrophotographers on the planet accept the G2V and/or eXcalibrator methods. eXcalibrator's Linear Regression routine uses stars of multiple colors and gets the same result as the "white-star only" routines.

Sorry if that sounds arrogant to you, but it's just a concise description of what I think. If you prefer, I can prepend an IMHO token to say "IMHO, using the G2V spectral type as a white reference for deep-sky images is a common conceptual mistake", or even polish it to say "IMHO, using the G2V spectral type as a white reference for deep-sky images is not the best choice", but I am not a big fan of palliative formalisms. In part this is probably because my mother tongue is Spanish; we tend to say things more directly and less sweetened in Spanish. At any rate, my intention has not been to put myself above anybody.

As for the rest of your post, I prefer to not comment more on that.
 
Thank you, Thomas.

There is the beautiful statement of Lewin: "Nothing is more practical than a good theory."

A nice quote, and with many practical applications in PixInsight! ;)
 
For future PCC users, I strongly recommend using the SDSS-DR9 data instead of APASS whenever possible. The SDSS data are acquired with a 2.5m telescope, the APASS data with a 3.15cm one. Also, the SDSS folks are more experienced. Additionally, the APASS staff have freely admitted that some of their data are problematic.

I recently encountered a Southern Hemisphere field of view where the APASS data had (B-V) values that were obviously highly inaccurate. Also, the Sloan g' and r' filters produced poor RGB color correction. In the Northern Hemisphere, the APASS data have always agreed with the Sloan data for color correction.

BTW Juan, will the PCC user interface return the RGB values used for the pixel math? I would like to compare them with my results. I have no doubt that the PI routines are more sophisticated than eXcalibrator's white-star and linear-regression routines. Also, with direct access to plate solving, the PI process will be easier to use.
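For reference, applying a given set of RGB factors by hand is straightforward with PixelMath from a script, which makes such comparisons easy. A sketch with placeholder factor values (the scripting property names follow PixelMath's standard process serialization):

Code:
// Sketch: apply per-channel white-balance factors with PixelMath.
// The numeric factors below are placeholders for comparison tests.
var P = new PixelMath;
P.useSingleExpression = false;
P.expression  = "$T*1.173"; // red factor (placeholder)
P.expression1 = "$T";       // green channel used as reference
P.expression2 = "$T*0.842"; // blue factor (placeholder)
P.rescale = false;
P.createNewImage = false;
P.executeOn( ImageWindow.activeWindow.mainView );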

But will the results provide a significant difference in the final color? It can take a surprisingly large change in the RGB factors to produce a noticeable difference in the final image.

Like I said earlier, I am a PixInsight user and find the program exceptional and powerful. I'm looking forward to taking PCC for a spin.

Regards,
Bob
 
Hi Juan
Very exciting features for the next release!


If you need some "directly integrated" images (from a modded DSLR A7s) with gradients, to have more images to test new algorithms, let me know; it will be a pleasure to help.


Anyway, again, many thanks for your work.
 
Juan Conejero said:
I still don't dare to anticipate a release date. It should happen during the first half of June...
Looking forward to it  ;)

If you find time, I have a feature request: O:) ... many web service APIs use the JSON content type. It would be great if NetworkTransfer could support JSON as a content type as well.

Btw, what is the JavaScript member/method for the EnablePasswordMode method of the pcl::Edit class? I couldn't figure it out ...

Klaus
 
Hi Klaus,

Thank you :)

NetworkTransfer::ContentType() will return whatever the server has reported as content type for the latest download operation. Unfortunately, in many cases servers and/or network applications are not properly configured and report invalid content types such as text/plain for JSON data, instead of application/json. There is nothing we can do to solve this.

Edit::EnablePasswordMode() is what you are looking for.
 
By the way, if you are requesting passwords using PCL controls such as Edit, consider calling String::SecureFill() to wipe out the password securely when it is no longer needed. The corresponding string in the server-side control is always destroyed securely when the password mode is enabled.
 
Oh, sorry, I didn't read your post correctly (too many things in my head I guess). The PJSR Edit object also has a passwordMode property:

Code:
Boolean Edit.passwordMode

which works just as pcl::Edit::EnablePasswordMode().
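For completeness, a minimal usage sketch in a PJSR dialog; everything except the passwordMode assignment is standard dialog boilerplate:

Code:
// A password field in a PJSR dialog. The relevant line is the
// passwordMode assignment; the rest is the usual Dialog pattern.
function PasswordDialog()
{
   this.__base__ = Dialog;
   this.__base__();

   this.passwordEdit = new Edit( this );
   this.passwordEdit.passwordMode = true; // echo characters as bullets

   this.okButton = new PushButton( this );
   this.okButton.text = "OK";
   this.okButton.onClick = function()
   {
      this.dialog.ok();
   };

   this.sizer = new VerticalSizer;
   this.sizer.margin = 8;
   this.sizer.spacing = 6;
   this.sizer.add( this.passwordEdit );
   this.sizer.add( this.okButton );

   this.windowTitle = "Enter Password";
}

PasswordDialog.prototype = new Dialog;

var d = new PasswordDialog;
if ( d.execute() )
   console.writeln( "Got a password of length " + d.passwordEdit.text.length );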
 
Hi Philippe,

Thank you so much.

If you need some "directly integrated" images (from a modded DSLR A7s) with gradients, to have more images to test new algorithms, let me know; it will be a pleasure to help.

Having more images for testing is always a good thing, so yes, if you can upload some of them I'll appreciate it ;)
 
Hi Juan,
Juan Conejero said:
Oh, sorry, I didn't read your post correctly (too many things in my head I guess). The PJSR Edit object also has a passwordMode property:

Code:
Boolean Edit.passwordMode

which works just as pcl::Edit::EnablePasswordMode().
Ah, thanks, yes, this is what I was searching for ...

Juan Conejero said:
NetworkTransfer::ContentType() will return whatever the server has reported as content type for the latest download operation. Unfortunately, in many cases servers and/or network applications are not properly configured and report invalid content types such as text/plain for JSON data, instead of application/json. There is nothing we can do to solve this.
My problem is that I want to send data in JSON form via a POST request. With curl I would express it as follows:

curl -H "Content-Type: application/json" -X POST -d '{"user":"xyz", "password":"XYZ"}' url

AFAIK the NetworkTransfer POST method only sends form data ...

 
Hi Klaus,

This is not possible with the current version. I am going to implement support for custom HTTP headers in version 1.8.5 (hopefully I'll upload a new development version for Linux today, along with a new version of PCL).
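Just to give an idea, something along these lines should become possible (a purely hypothetical sketch: the setCustomHTTPHeaders name and signature are assumptions about the future API, and the other calls are schematic uses of the NetworkTransfer interface, not documented behavior):

Code:
// Hypothetical: custom headers are NOT supported in the current
// version; setCustomHTTPHeaders is an assumed future method name.
var T = new NetworkTransfer;
T.setURL( "https://example.com/api/login" ); // example URL
T.setCustomHTTPHeaders( [ "Content-Type: application/json" ] ); // hypothetical
T.onDownloadDataAvailable = function( data )
{
   console.write( data ); // echo the server response
   return true;
};
T.post( "{\"user\":\"xyz\", \"password\":\"XYZ\"}" );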
 
Hi Juan
Juan Conejero said:
I am going to implement support for custom HTTP headers in version 1.8.5 (hopefully I'll upload a new development version for Linux today, along with a new version of PCL).
This is great news! Can I use/download such a development version? Then I can also continue on my telescope pointing stuff ...
 