New in PixInsight 1.8.5: PhotometricColorCalibration

Juan Conejero

PixInsight Staff
Hi everybody,

Further news on the next version of PixInsight. Today we have been working on an exciting new tool: PhotometricColorCalibration (PCC). We have a first working version with really outstanding results that I want to share with you all.

PCC performs automatic plate solving and photometry (both aperture and PSF photometry are selectable) to calculate RGB white balancing factors from measured star fluxes, with respect to a user-selectable white reference. Photometric data are retrieved online from the APASS survey through the VizieR server and its mirrors. Internally, the new PCC tool uses the latest versions of the ImageSolver and AperturePhotometry scripts, authored by Andrés del Pozo and Vicent Peris. Vicent is also the author of the internal PCC calculation algorithms. I am writing the C++ implementation, making the most of the new functionality in version 1.8.5 that allows us to execute JavaScript scripts from PCL-based C++ code to write hybrid PixInsight modules. This means that PCC is a regular PixInsight tool, not a script, with all of the inherent benefits.
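To make the general idea a bit more concrete, here is a rough sketch of how per-channel white balancing factors could be computed once the stars detected in the image have been matched to catalog photometry. This is only an illustration, not the actual PCC code; the function name, the input format and the use of a simple median are assumptions made for the example:

[code]
// Illustrative sketch only -- not the actual PCC implementation.
// Assumes each matched star carries its measured instrumental fluxes (from
// aperture or PSF photometry) and the fluxes expected for the selected white
// reference, derived from catalog photometry:
//    stars = [ { measured: {r, g, b}, reference: {r, g, b} }, ... ]
function whiteBalanceFactors( stars )
{
   // Per-star flux ratios between reference and measurement, per channel.
   var ratios = { r: [], g: [], b: [] };
   for ( var i = 0; i < stars.length; ++i )
   {
      var s = stars[i];
      if ( s.measured.r > 0 ) ratios.r.push( s.reference.r/s.measured.r );
      if ( s.measured.g > 0 ) ratios.g.push( s.reference.g/s.measured.g );
      if ( s.measured.b > 0 ) ratios.b.push( s.reference.b/s.measured.b );
   }

   // The median is a robust estimate against outliers (variable stars,
   // blended or saturated sources, catalog mismatches, etc.).
   function median( a )
   {
      a = a.slice().sort( function( x, y ) { return x - y; } );
      var n = a.length;
      return (n & 1) ? a[n >> 1] : 0.5*(a[n/2 - 1] + a[n/2]);
   }

   var kR = median( ratios.r ), kG = median( ratios.g ), kB = median( ratios.b );
   // Normalize so the green channel keeps a unit scaling factor.
   return { r: kR/kG, g: 1, b: kB/kG };
}
[/code]

The real tool of course derives the factors from the astrometric and photometric analysis described above, with its own robust statistics and the selected white reference; the sketch only shows the general shape of the calculation.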

Here are a few examples that Vicent and I have been preparing to show you the kind of results you may expect from PCC.

First a wide-field DSLR image of the Milky Way, courtesy of Georg Viehoever:


On the left, the original RGB debayered image. On the right, the result after applying PhotometricColorCalibration. The good news is that PCC is really easy to use. If your image has approximate center coordinates and image scale metadata (as FITS header keywords), PCC is basically a one-button tool in most cases (or, speaking in pure PixInsight terms, a one-blue-triangle tool). The next example is a two-frame mosaic of the region around NGC 2080, in the Large Magellanic Cloud (data by Vicent Peris):


This is an image of the LBN 552 region acquired with the 1.2 m Zeiss telescope of Calar Alto Observatory:


The original combined RGB image, shown on the top-left, had already been white balanced using the average color of the stars in the frame as the white reference. The newly calibrated image with PCC is on the bottom-right of the screenshot. The difference speaks for itself.

The last example is an image of M51, also acquired through the 1.2 m telescope of Calar Alto Observatory:


After applying PCC, the image shown on the right has been stretched nonlinearly with HistogramTransformation, processed with HDRMultiscaleTransform, and its color saturation has been increased with CurvesTransformation.

Besides its underlying high-accuracy astrometric and photometric analysis implementations, the most innovative and powerful feature of our new PCC tool is, in my opinion, the fact that it allows you to select one among numerous predefined, carefully generated white references. By default, PCC applies a white reference based on the average of the characteristic fluxes of Sb, Sc and Sd galaxies. This reference, which has been used in all of the examples shown above, is in our opinion truly representative of deep space, and hence an unbiased, neutral white reference quite close to our documentary color philosophy. If you want to persist in making common conceptual mistakes, you will be able to use the G2V spectral type as a white reference, but PCC will allow you to select virtually any spectral type, along with several galaxy types, to calibrate the color of your images automatically and accurately in PixInsight.

Huge kudos to Andrés del Pozo and Vicent Peris, who are the authors of the excellent implementations and algorithms behind the new PhotometricColorCalibration tool. Thank you for your continued support and contributions, which make PixInsight an exciting platform in constant evolution.
 
I'm really looking forward to this... and also not spending an hour per image using PixelMath to overwrite satellite trails  :)
 
Hi Juan (and, as always, Hi to all those who have made these significant contributions),

Yet again PixInsight raises the bar in the field of astroimage processing, with yet another totally scientific and statistical approach to the solution of a problem that has, so often, previously been attacked 'artistically' or 'subjectively'.

As Harry says, whilst half the fun of PixInsight is spending hours fiddling with sliders and buttons whilst trying to figure out what combination best suits the image, if this truly is a 'one-blue-triangle' approach, then at least it will give us more time to fiddle with those pesky settings in all the other processes  :)

Thanks again.
 
This is amazing! I was just working on something along these lines, using the AperturePhotometry script!

Now, my motivation was driven by a DSLR color calibration issue that stems from the fact that the CFA has no clean frequency cut-offs, so there is significant cross-talk between color channels. Proprietary digital development software inside the camera processors, and desktop software like Adobe's Camera Raw and dcraw, partly correct this by means of a color calibration matrix that maps the camera raw data onto some standard color space (i.e., XYZ). This matrix is camera and illumination dependent. Although one can find sources of such matrix data for different camera models and illuminations, they are not useful with modded DSLRs in astroimaging situations. Hence, I was working on the idea of using catalog photometric data to determine such a matrix via a multilinear regression approach, including pedestals in the model that could take care of background neutralization at the same time:

[RGB]_xyz = M [RGB]_raw + [RGB]_bias

where the elements of the 3x3 matrix M, and 3x1 vector [RGB]_bias, are obtained by fitting the detected data to the catalog photometric data.
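Just to illustrate what I mean (a minimal, untested sketch; a robust fit would probably be preferable in practice, and all function names are made up for the example): since each output channel depends linearly on the three raw channels plus a pedestal, each row of M and its bias term can be obtained with an independent ordinary least-squares fit over the matched stars, e.g. via the normal equations.

[code]
// Sketch of the proposed multilinear fit. Each catalog channel is modeled as
//    X_cat = m1*R_raw + m2*G_raw + m3*B_raw + bias
// so [m1, m2, m3, bias] is one row of M plus its pedestal.

// Solve A*x = b by Gaussian elimination with partial pivoting.
function solveLinearSystem( A, b )
{
   var n = b.length;
   for ( var k = 0; k < n; ++k )
   {
      var p = k;
      for ( var i = k + 1; i < n; ++i )
         if ( Math.abs( A[i][k] ) > Math.abs( A[p][k] ) ) p = i;
      var t = A[k]; A[k] = A[p]; A[p] = t;
      t = b[k]; b[k] = b[p]; b[p] = t;
      for ( var i = k + 1; i < n; ++i )
      {
         var f = A[i][k]/A[k][k];
         for ( var j = k; j < n; ++j ) A[i][j] -= f*A[k][j];
         b[i] -= f*b[k];
      }
   }
   var x = new Array( n );
   for ( var i = n - 1; i >= 0; --i )
   {
      var s = b[i];
      for ( var j = i + 1; j < n; ++j ) s -= A[i][j]*x[j];
      x[i] = s/A[i][i];
   }
   return x;
}

// raw: N x 3 array of instrumental (R,G,B) star fluxes.
// cat: N catalog fluxes for one target channel.
// Returns [m1, m2, m3, bias] from the normal equations (V'V)x = V'y.
function fitMatrixRow( raw, cat )
{
   var A = [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]];
   var b = [0, 0, 0, 0];
   for ( var i = 0; i < raw.length; ++i )
   {
      var v = [raw[i][0], raw[i][1], raw[i][2], 1]; // 1 = pedestal term
      for ( var r = 0; r < 4; ++r )
      {
         b[r] += v[r]*cat[i];
         for ( var c = 0; c < 4; ++c )
            A[r][c] += v[r]*v[c];
      }
   }
   return solveLinearSystem( A, b );
}
[/code]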

Is this something that could be accomplished by the new module, or will white balance be handled with the standard three (RGB) multiplicative factors?

best,
Ignacio
 
Hi Juan!

That's an exciting announcement.

Just a question - don't be angry:
"Photometric data are retrieved online from the APASS survey - do you also think of including the SDSS-DR9 catalog?

Best regards!

Herbert, Austria
 
Thank you so much to all.

But "One Button " where's the fun in that

Don't worry, Harry, the PCC tool will have lots of parameters to control the astrometric solution and the photometric analysis, along with the background reference, so you'll have plenty of numbers to play with :) However, the tool will work fine with default parameters in most cases, if the image has the required metadata.

...not spending an hour per image using PixelMath to overwrite satellite trails

Hi Rick. You'll *never* have to do this again in PixInsight. The new large-scale pixel rejection feature of ImageIntegration works flawlessly. It works so well and is so robust that you'll be able to integrate a data set simply ignoring all airplane and satellite trails; in virtually all cases it will be just as if they didn't exist.
 
Hi Ignacio. Thank you.

[RGB]_xyz = M [RGB]_raw + [RGB]_bias

The current (first) version of the PCC tool computes scaling factors for the individual RGB components. The mean background is sampled and evaluated through robust statistics from a user-defined region of interest, just as the BackgroundNeutralization tool does now. However, your idea looks very interesting, and I think it could be implemented without problems. I also think your approach is sound and should work well.
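For readers wondering what "evaluated through robust statistics" can look like in practice, here is a simple illustration (not the actual PCC or BackgroundNeutralization code): a sigma-clipped, MAD-based estimate of the background over the samples of the region of interest. Function names and default parameters are assumptions made for the example:

[code]
// Illustrative robust background estimate for one channel -- not the actual
// implementation. samples: pixel values taken from the user-defined ROI.
function robustBackground( samples, kSigma, maxIter )
{
   kSigma = kSigma || 3;
   maxIter = maxIter || 5;
   var s = samples.slice();
   for ( var it = 0; it < maxIter; ++it )
   {
      var m = median( s );
      // 1.4826*MAD approximates the standard deviation for normal data.
      var sigma = 1.4826*median( s.map( function( x ) { return Math.abs( x - m ); } ) );
      var clipped = s.filter( function( x ) { return Math.abs( x - m ) <= kSigma*sigma; } );
      if ( clipped.length == s.length || clipped.length < 3 )
         break;
      s = clipped; // iterate on the surviving samples
   }
   return median( s );
}

function median( a )
{
   a = a.slice().sort( function( x, y ) { return x - y; } );
   var n = a.length;
   return (n & 1) ? a[n >> 1] : 0.5*(a[n/2 - 1] + a[n/2]);
}
[/code]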

The PCC tool will be released as an open-source product (under PCL license), so it will be available at our open-source GitHub repositories, and hence open to collaborations. Our PCL development framework has all of the necessary resources to implement what you want (and in case it lacks something, we can implement it). This would be an excellent improvement.
 
Hi Herbert,

Thank you.

do you also think of including the SDSS-DR9 catalog?

The new PCC tool uses the latest release of the APASS survey as its default source of photometric data. However, PCC is very flexible, so you can use basically whatever you want: you can select any catalog currently supported by the AperturePhotometry script (which, by the way, should be renamed to Photometry IMO, since it now performs both aperture and PSF photometry).

As I have said before, PCC will be released as an open-source module, so it will be open to collaborations from external developers. With the new version 1.8.5 of PixInsight, we'll release almost all of the new tools as open-source products, including LocalNormalization (previously announced as FrameAdaptation), SubbandBlending, and of course the new features that will be available in ImageIntegration and DrizzleIntegration, among many others. I am betting heavily on open-source releases as the best way to reinforce the dynamism that the PixInsight platform requires.
 
so what's the timeframe like for release of 1.8.5? i have a boatload of my usual crappy LP data that i have been holding off processing because i'd like to try the new normalization stuff.

rob
 
The final name is still subject to discussion :) It will be a new tool to generate seamless mosaics using, among other techniques, the sub-band blending (SBB) algorithm. SBB was first described, AFAIK, by P.J. Burt and E.H. Adelson in A Multiresolution Spline With Application to Image Mosaics. My implementation introduces significant changes and uses powerful multiscale techniques that we already have implemented in PixInsight, instead of Gaussian pyramids, but it follows the same basic idea. SBB is also used with great success in well-known panorama generation applications, such as AutoStitch.
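To give an intuition of the basic idea (a toy 1-D sketch using naive box-blur bands, not the multiscale transforms the actual tool is based on, with all names made up for the example): each signal is split into frequency bands, and each band is blended across the seam with a transition whose width matches the band's scale, so low frequencies mix smoothly over a wide region while fine detail switches over sharply and stays crisp.

[code]
// Toy 1-D illustration of sub-band blending (Burt & Adelson's idea).
// a, b: overlapping 1-D signals; mask: 1 where a is valid, 0 where b is valid.

function boxBlur( a, radius )
{
   var out = new Array( a.length );
   for ( var i = 0; i < a.length; ++i )
   {
      var sum = 0, n = 0;
      for ( var j = Math.max( 0, i - radius ); j <= Math.min( a.length - 1, i + radius ); ++j )
      {
         sum += a[j];
         ++n;
      }
      out[i] = sum/n;
   }
   return out;
}

function subBandBlend( a, b, mask, nBands )
{
   var result = new Array( a.length ).fill( 0 );
   var prevA = a, prevB = b;
   for ( var band = 0; band < nBands; ++band )
   {
      var radius = 1 << band;                // scale doubles with each band
      var lowA = boxBlur( prevA, radius ), lowB = boxBlur( prevB, radius );
      var hiA = prevA.map( function( v, i ) { return v - lowA[i]; } );
      var hiB = prevB.map( function( v, i ) { return v - lowB[i]; } );
      var m = boxBlur( mask, radius );       // transition width matches the band scale
      for ( var i = 0; i < a.length; ++i )
         result[i] += m[i]*hiA[i] + (1 - m[i])*hiB[i];
      prevA = lowA; prevB = lowB;
   }
   // Residual lowest frequencies, blended over the widest transition.
   var mR = boxBlur( mask, 1 << nBands );
   for ( var i = 0; i < a.length; ++i )
      result[i] += mR[i]*prevA[i] + (1 - mR[i])*prevB[i];
   return result;
}
[/code]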
 
so what's the timeframe like for release of 1.8.5?

I still don't dare to anticipate a release date. It should happen during the first half of June. Version 1.8.5 is a very complex release, with very significant changes to the platform (as usual, many more and much deeper changes than what the user will see on the surface), many new tools, and many very important improvements and new features.

For example, DrizzleIntegration can now work with CFA monochrome data directly (direct Bayer drizzle), which involves a completely redesigned drizzle data format (the new XML-based XDRZ format, which includes LZ4-compressed rejection maps and local normalization data). Another example of complexity is the possibility of writing hybrid modules that mix C++ code with JavaScript scripts (PhotometricColorCalibration is a good example). There is also a new HTML5 painting engine available in the JavaScript and C++ runtimes, etc., etc.

The new version 5.8.0 of Qt is fantastic, but as usual it generates new problems and incompatibilities with our code base, which I have to fix. I already have a 1.8.5 version working extremely well on Linux, which I am using to concentrate all of my work on the new tools and improvements for now. The macOS and Windows versions still require a lot of work, so I ask for (even more!) patience; 1.8.5 will be the best, most powerful and most stable version of PixInsight ever.
 
Herbert_W said:
Just a question - don't be angry:
"Photometric data are retrieved online from the APASS survey - do you also think of including the SDSS-DR9 catalog?
Hi,
AperturePhotometry can already use SDSS-DR8. Supporting release 9 should be very easy.
 
Juan Conejero said:
so what's the timeframe like for release of 1.8.5?

I still don't dare to anticipate a release date. It should happen during the first half of June. Version 1.8.5 is a very complex release, with very significant changes to the platform (as usual, much more and much deeper changes than what the user will see on the surface), many new tools, and many and very important improvements and new features.

OK, that's enough precision - was just wondering if it was "soon" or late in the year, or what.

rob
 
Juan, you said...
This reference, which has been used in all of the examples shown above, is in our opinion truly representative of deep space, and hence an unbiased, neutral white reference quite close to our documentary color philosophy.

Several years ago you said:
The concepts of "true color" and "natural color" are illusions in deep-sky astrophotography. Such things don't exist. The main reason is that a deep-sky image represents objects far beyond the capabilities of the human vision system.

So which is it? You seem to have changed your mind.

Then you said in this announcement:
If you want to persist in making common conceptual mistakes, you will be able to use the G2V spectral type as a white reference, but PCC will allow you to select virtually any spectral type, along with several galaxy types, to calibrate the color of your images automatically and accurately in PixInsight.

I find this statement incredibly arrogant. Who are you to say that your color philosophy is better than anyone else's? Also, you are again stating that PCC provides "accurate" color, which you previously stated does not exist. At least one astrophysicist and many of the best astrophotographers on the planet accept the G2V and/or eXcalibrator methods. eXcalibrator's Linear Regression routine uses stars of multiple colors and gets the same result as the "white-star only" routines.

I am a PixInsight user and believe it is an exceptional and powerful program. But I find the arrogance of the developers and many in the user community to be astounding. The general consensus is that if you don't understand the math... that's your problem. What little help there is, is written at a level only a mathematician can understand. This is not the general astrophotography community.

It is unfortunate that the PixInsight developers are too lazy to write complete and easy to understand documentation. You should not be relying on others to write books and provide tutorials.

Regards,
Bob
 
Bob

I respect your opinion; now respect mine. You come across as sounding very mean to me. I suspect that to your friends and family you are a very nice man, though.

I bought PI almost 5 years ago with my eyes wide open as far as the documentation situation. I still purchased it and don't regret it. I understood when I bought it that the documentation would most likely always remain that way. I am still fine with that because I see great value in my purchase. The PixInsight team has a business model that seems to be working fine for them; you might do it differently and I might do it differently, but it is their business. The users who write tutorials do it because they want to. I believe you write tutorials for other software as well, so are those developers lazy too?

Also, when I purchase software I don't care if the developer 'sounds' arrogant or what their stated opinions are; I care whether the software does the job I expect of it. I inspect the goods before purchasing and I decide. If the answer is yes, then the developer can 'say' the moon is bright green; it doesn't change the fact that the software does what I need it to do. If it doesn't, I don't buy it and move on. It's very simple, but I am very simple.



Mike
 
Hello everybody,

Thank you very much for your comments. At a personal level, this tool represents an important milestone to me. I have been working on my own color theory for almost ten years now. As you may know, my work for the ALHAMBRA survey was key to developing this theory. The theory is based on these simple statements:

1. Color in astrophotography has a documentary goal.
2. The nature of the objects we are photographing is not limited to the human spectral sensitivity, so the latter is not the right choice to represent the scene.
3. Color representation is not relative to the human eye, but to the object, always following a documentary goal, thus establishing a direct link between the object's nature and its color display in the picture.
4. There isn't a unique valid color representation for a given picture; each one is relative to a different documentary goal.
5. A face-on spiral galaxy model should be a good white reference in most cases, since it represents very well the vast majority of objects we depict in deep-sky photography. This is based on the fact that this galaxy type contains a good representation of all the different stellar populations.

As you know, we implemented points 1 to 4 in PixInsight through the ColorCalibration tool. But, up to now, it was not possible to demonstrate point 5 since in most cases you are not able to reference the color to a face-on spiral galaxy. My hypothesis was that a spectral model of an intermediate spiral galaxy could serve as a kind of "universal" white reference in astrophotography.
Until now, PixInsight didn't have a way to test this hypothesis, so I do think this new tool is an important milestone for PixInsight as well. Right now you have all the freedom to choose your white balance in PixInsight: you can choose a source in your own picture, or you can choose an absolute reference.

This is the aesthetic point of view, which we feel contributes to the artistic development of the user community. It's never arrogant to state your point of view. We simply don't like a sun-like star as the white reference because we think it doesn't represent the nature of the photographed object. Moreover, we understand that we should give users the freedom to choose their own white reference, so we included the G2V spectrum, together with almost every stellar spectral type, so you can make your own choice. In the future we'll be adding more white reference models to the list.


Best regards,
Vicent.
 
Vicent, I agree that a face-on spiral galaxy makes a nice white reference.

However, what if there is intervening galactic extinction? How do you show a galaxy with its intrinsic color and still correctly display the foreground stars?

I think it's better to show the foreground stars correctly and let the color of the galaxy include the extinction. I call this the café doctrine... that is, Color As From Earth. Or probably more correctly, color from Earth orbit. :)

Also, I am a bit fuzzy with your basic color theory. Can you give us a definition of a "documentary goal"?

Regards,
Bob
 