Author Topic: Balancing RGB Prior to Combine -- M33  (Read 9732 times)

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Balancing RGB Prior to Combine -- M33
« on: 2008 September 29 18:10:05 »
Over the last two weekends, I collected 150 minutes of data on M33 -- 60 minutes of L and 30 minutes each of R, G, and B.  This was with an ST-10 and an NP-101.

I think the processing went fine until the color combine.  The first LRGB combination was very yellow.  There were also unequal color gradients in the background (the first set of images was taken with a quarter Moon in the sky).  I mostly solved the problem (I think some color issues remain), and I did it in two ways.

First, I stretched the blue image to brighten it up.  I had been matching the RGB frames by visual comparison, and the mismatch is the problem.  So here is my question: is there a better, perhaps more scientific, way of balancing the brightness of the three color frames prior to combining them?  Stretching the blue solved about 80% of the color problem.

The second method I used to fix the color was to pull the image into Photoshop and use GradientXTerminator and color balance to get the final fix.

For the differential gradients, I was thinking of combining the RGB before flattening the background with DynamicBackgroundExtraction and a PixelMath subtraction.  Is that a good idea?
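The model-subtraction step described here amounts to subtracting a smooth background model and restoring the original pedestal.  A minimal numpy sketch with hypothetical values -- not PixInsight's actual API:

```python
import numpy as np

def subtract_background(img, bg_model):
    """Subtract a smooth background model (e.g. one fitted by DBE)
    and add back the model's median so the overall level is kept."""
    return img - bg_model + np.median(bg_model)

# Hypothetical image with a left-to-right gradient in its background.
img = np.array([[1.5, 2.5],
                [1.6, 2.6]])
bg  = np.array([[1.0, 2.0],
                [1.0, 2.0]])
flat = subtract_background(img, bg)   # gradient removed, level preserved
```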

The second place I may have introduced a problem is with my galaxy/star mask.  I created a galaxy mask using wavelets and curves, and combined it with my star mask using PixelMath's max() function.  I used the mask when I applied wavelets to the galaxy.  I think the edges drop off too sharply, as the background very quickly fades to black.  But I'm not sure.
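PixelMath's max() combine is a pixelwise maximum.  A small numpy sketch with hypothetical mask values:

```python
import numpy as np

# Hypothetical galaxy and star masks with values in [0, 1].
galaxy_mask = np.array([[0.8, 0.2],
                        [0.0, 0.5]])
star_mask   = np.array([[0.1, 0.9],
                        [0.3, 0.4]])

# max(galaxy, stars): each pixel keeps the larger of the two mask
# values, so the combined mask protects both the galaxy and the stars.
combined = np.maximum(galaxy_mask, star_mask)
```

If the edges of such a mask drop off too sharply, a slight blur (convolution) of the combined mask softens the transition.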

The full image is on my web site.  And clicking the image below should bring it up too.

Help and comments greatly appreciated.



--Andy

P.S., I hope someone from the team will be at AIC in San Jose in November.  You've got a great tool, everyone needs to hear about it.
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
RGB Combine
« Reply #1 on: 2008 October 06 07:33:02 »
Andy, I do not stretch the R, G, and B prior to combining.  Some people use a G2V calibration on a G2V star for a particular camera.  You can Google it and find a reference, I suspect.  Or go here: http://www.starshadows.com/documentation/index.cfm?DocID=6

I do not believe in G2V myself; I use pretty much a 1:1:1 ratio and adjust the color in the final RGB.  I usually combine the L with the RGB after the RGB combine.  I suspect the Moon being up was a big issue.

Yes, I will be at AIC.  See the announcements section of the PixInsight Forum.
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
G2V
« Reply #2 on: 2008 October 06 07:36:26 »
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
AIC
« Reply #3 on: 2008 October 06 07:39:01 »
Sorry, it was not in the Announcements section.

The 2008 Advanced Imaging Conference (AIC) will be held in November in San Jose, California, USA.

http://www.aicccd.com/flash/index.html

There will be workshops on PixInsight on Friday November 14, the day prior to the General meeting.

http://www.aicccd.com/flash/2008_speakers.html#jharvey

Just an FYI for the group <G>
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Re: RGB Combine
« Reply #4 on: 2008 October 10 01:19:40 »
Jack,

Quote from: "Jharvey"
Andy, I do not stretch the R, G, and B prior to combining.  Some people use a G2V calibration on a G2V star for a particular camera.  You can Google it and find a reference, I suspect.  Or go here: http://www.starshadows.com/documentation/index.cfm?DocID=6

I do not believe in G2V myself; I use pretty much a 1:1:1 ratio and adjust the color in the final RGB.  I usually combine the L with the RGB after the RGB combine.  I suspect the Moon being up was a big issue.

Yes, I will be at AIC.  See the announcements section of the PixInsight Forum.


When I speak of stretching the RGB, I mean bringing the image up to visibility.  I calibrate the images in MaxIm DL, then align and combine the subframes in CCDStack.  The FITS files from either CCDStack or MaxIm require stretching to visibility in PixInsight.  The problem is: how does one bring different images to visibility with consistent results?

--Andy
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
RGB
« Reply #5 on: 2008 October 10 13:00:23 »
Sorry, I misunderstood.  Using CCDStack, for instance, you have a couple of options.  First, you can combine the R, G, and B in CCDStack and save the RGB as a scaled TIFF, which will be immediately visible in PixInsight.  Or you can save the combined RGB from CCDStack as either a raw TIFF or just a FITS.

You can then use the ScreenTransferFunction to bring the raw image to visibility WITHOUT changing the image (much like the screen view tool in Maxim) -- a nondestructive stretch.  Or you can use the Histogram tool sequentially to set the black and white points and stretch the image to visibility.
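Setting black and white points amounts to a linear remapping with clipping at both ends.  A sketch with hypothetical pixel values (not the actual tool's code):

```python
import numpy as np

def set_black_white(img, black, white):
    """Linear histogram stretch: map [black, white] onto [0, 1],
    clipping anything outside that range."""
    return np.clip((img - black) / (white - black), 0.0, 1.0)

pixels = np.array([0.01, 0.02, 0.46, 0.90, 0.95])
stretched = set_black_white(pixels, black=0.02, white=0.90)
```

Note that moving only the black and white points is still a linear operation; it is the midtones adjustment that makes a histogram stretch nonlinear.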

I am not sure if this is what you are asking?
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Re: RGB
« Reply #6 on: 2008 October 12 12:01:15 »
Quote from: "Jharvey"
Sorry, I misunderstood.  Using CCDStack, for instance, you have a couple of options.  First, you can combine the R, G, and B in CCDStack and save the RGB as a scaled TIFF, which will be immediately visible in PixInsight.  Or you can save the combined RGB from CCDStack as either a raw TIFF or just a FITS.

You can then use the ScreenTransferFunction to bring the raw image to visibility WITHOUT changing the image (much like the screen view tool in Maxim) -- a nondestructive stretch.  Or you can use the Histogram tool sequentially to set the black and white points and stretch the image to visibility.

I am not sure if this is what you are asking?


No, let me try to restate the question.

First, I'll explain my current processing technique, for there may be problems there.

I have found that I get the best results combining in CCDStack but doing all subsequent image processing in PixInsight.  I have not had good results saving from CCDStack as a scaled image, and only OK results with color combining in CCDStack.  So I want to start with the RGB frames and process and combine them in PixInsight.

Based on advice received here in a prior post, I save from CCDStack in 32-bit integer FITS format.  On opening in PixInsight, I use the rescale function to use the whole dynamic range.  I then use a histogram stretch to bring the images up to appropriate visibility.  I can usually do this in one pass: since I have used rescale, I don't touch the highlight end; I primarily use the midtones stretch, changing the scale on the histogram display so I can see it better; and finally I move the black point up, taking care not to clip more than 0.1% of the pixels.
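The midtones stretch described here is, in PixInsight, a midtones transfer function: a rational curve that fixes 0 and 1 and maps the midtones balance m to 0.5.  A sketch of that function in its standard form (not copied from the tool):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: fixes 0 -> 0 and 1 -> 1 and maps
    x = m to 0.5, so m < 0.5 brightens the midtones."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)
```

With m = 0.5 the function is the identity; as m moves below 0.5, dim pixels are lifted while the endpoints stay fixed.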

So this gets me visible images, using the full dynamic range, on which I can perform my other processing steps prior to the LRGB combine.  For the RGB subframes, I generally only address gradients.  All sharpening, wavelets, etc. are done only on the luminance frame.

So now the problem: how do I correctly balance the brightness of the RGB subframes so I get good color balance?  Each subframe is stretched as described above, but the only way I make them similar in brightness is by visual comparison.  Is there a more precise way of balancing the relative brightness of the RGB subframes so that I get correct (or just good) color?

I hope this more complete problem statement helps or reveals another error in my approach.

If it would help to post any data I'd be happy to, or to provide screenshots to describe the process.

Thanks,

--Andy
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
RGB
« Reply #7 on: 2008 October 12 13:12:20 »
Hmmm, let's see if I am closer.  I rely on CCDStack to combine my master R, G, and B frames.

CCDStack has the ratios 1.00:0.90:1.00 for the RGB color combine.  These are the ones that work for me, but others use G2V to get the ratios.  I then save the RGB-combined color image as a raw FITS and bring it into PixInsight as you have described.

I do NOT individually adjust the histograms to try to "optimize" the RGB ratios of the individual R, G, and B frames that CCDStack has set.

Any color adjustment after this is done in the last steps of processing and usually involves the combined RGB, not the R, G, or B frames individually.  It is rare that I have to tweak the midpoint of one of the colors in the final histogram, but on occasion I have done this -- only on the RGB at the end.

I hope I am helping here, but I may be making it worse?  Maybe Juan or someone else can jump in and clarify this.
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Balancing RGB Prior to Combine -- M33
« Reply #8 on: 2008 October 12 14:23:42 »
Jack,

So perhaps the answer is just do the color combine in CCDStack.  Do you then separate the channels in PixInsight to do the LRGB combination?

Although I'd be interested to know how one might approach this with a purely PixInsight solution.

Thanks,

--Andy
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Balancing RGB Prior to Combine -- M33
« Reply #9 on: 2008 October 12 15:21:56 »
Hi Andy,

Quote
On opening in PixInsight, I use the rescale function to use the whole dynamic range.


I guess this is the problem, as Jack has also pointed out. If you rescale individual RGB channels separately, then the three channels will no longer be referred to the same numerical range of coordinates, and hence any resemblance to the original color balance will be purely coincidental.

Think of an RGB color image not as three unrelated grayscale images put together, but as a single image where each pixel is a vector in a 3-D space (the RGB space in this case).  The three components of each vector must be referred to the same system of coordinates for the image to make sense as a whole.
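This can be seen with a single pixel.  Rescaling each channel with its own minimum and maximum (hypothetical numbers below) applies a different linear map to each channel and therefore changes the pixel's channel ratios, i.e. its color:

```python
# One pixel's linear RGB value; the R:G:B ratio is 4:2:1.
r, g, b = 0.20, 0.10, 0.05

# Per-channel rescale, each channel using its own image range:
r2 = (r - 0.0) / 0.40   # suppose the R channel spans [0.0, 0.40]
g2 = (g - 0.0) / 0.20   # suppose the G channel spans [0.0, 0.20]
b2 = (b - 0.0) / 0.10   # suppose the B channel spans [0.0, 0.10]

# The 4:2:1 color has collapsed to 1:1:1 -- a gray pixel.
ratios = (r2, g2, b2)
```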

So the correct procedure would be something like this:

1. Open the individual channel images produced by CCDStack. The 32-bit integer format is an excellent choice, since in this way you avoid all the problems related to undefined numerical ranges in floating point FITS images.

2. We assume that the channel images are linear at this point. Combine the R, G and B images into a single linear RGB color image with ChannelCombination. If you have to stretch the color data, use this combined image, but don't apply different nonlinear functions to each component. In general, there is no reason to clip the highlights of the histogram, but if you do so, do it equally for the three RGB components.

3. If you have a separate luminance image, which is also linear, you can process it before performing a LRGB combination. Keep in mind that most image restoration techniques should be applied to linear images. Deconvolution and RestorationFilter, in particular, must always be applied to linear images. ATrousWaveletTransform, DBE, and many noise reduction algorithms, also work a lot better when the image is linear. The ScreenTransferFunction (STF) tool will help you to see the image without needing to stretch it.

4. If you have a separate luminance, you can use the LRGBCombination tool to combine L with the combined RGB image from step 2. However, LRGBCombination requires stretched (hence nonlinear) RGB and L images. This is because this tool works in the CIE L*a*b* space, which is a strongly nonlinear (human vision adapted) color space.

If you want to preserve the linearity of the color and luminance data in a single combined image, an alternative to LRGBCombination is as follows:

- Open the RGBWorkingSpace tool and set the value of Gamma to one.  You must disable the "Use sRGB Gamma Function" option to change Gamma.  The luminance coefficients are not critical here, but for deep-sky images a good choice is usually setting all of them to one, so each component has the same weight in calculating luminance.  Apply this instance of RGBWorkingSpace to both the linear RGB image and the linear luminance image.

- Extract the X and Z components of the CIE XYZ space from the linear RGB image with the ChannelExtraction tool. Do not extract the Y component.

- Combine X and Z with the linear luminance, using it as the Y component of CIE XYZ, with ChannelCombination.

In this way you have a linear color image where the linear luminance has replaced the original Y component of the RGB data. Since the CIE XYZ space is linear and the working RGB space uses a linear Gamma function, the linearity of the data has not been altered at all. Now you have a linear color image that is a YRGB combination. Cool, isn't it? 8)
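The Y-replacement trick can be sketched numerically.  The sketch below assumes the standard linear-sRGB/D65 matrices for the RGB-to-XYZ conversion; Juan's recipe instead sets up a linear working space with unit luminance coefficients, but the structure of the operation is the same:

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65); the middle row produces Y (luminance).
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
XYZ2RGB = np.linalg.inv(RGB2XYZ)

def replace_luminance(rgb, lum):
    """Swap a separate linear luminance in as the Y component of a
    linear RGB image, keeping every step linear.
    rgb: (..., 3) array; lum: broadcastable to rgb[..., 0]."""
    xyz = rgb @ RGB2XYZ.T        # forward transform to CIE XYZ
    xyz[..., 1] = lum            # replace Y with the new luminance
    return xyz @ XYZ2RGB.T       # back to linear RGB
```

A handy sanity check: a neutral gray pixel [0.5, 0.5, 0.5] has Y = 0.5, so replacing its Y with 0.5 must return it unchanged.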

The basic ideas are:

- Don't stretch individual RGB components separately because this will destroy any existing color balance.

- If you work with previously calibrated RGB components -- as is the case with CCDStack -- achieving good color balance is usually a matter of background neutralization.  As Jack has said, doing this by adjusting the histograms (by setting different white points, or applying different midtones balance values) is in general a bad idea.

- Try to preserve the linearity of the data, as much as possible. Linear images are much easier to handle, and there are algorithms that cannot work properly with nonlinear images.

- Use ScreenTransferFunction to work with linear images comfortably.

- Perform a nonlinear stretch (HistogramTransformation, CurvesTransformation) only when you know that you no longer need linearity. Usually, this happens at the final stages of the entire processing work.
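The background neutralization mentioned in the second point can be sketched as a simple additive scheme: equalize the per-channel background medians measured on sky pixels.  PixInsight's own tools are the proper way to do this; the function below is only an illustration with a hypothetical name:

```python
import numpy as np

def neutralize_background(rgb, sky_mask):
    """Offset each channel so the median background level is the same
    in R, G, and B, leaving the sky neutral gray.
    rgb: (H, W, 3) array; sky_mask: (H, W) boolean array of sky pixels."""
    medians = np.array([np.median(rgb[..., c][sky_mask]) for c in range(3)])
    return rgb + (medians.mean() - medians)   # per-channel additive offset
```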

I hope this will help you. Let us know if you have more doubts, or if you disagree with anything.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Balancing RGB Prior to Combine -- M33
« Reply #10 on: 2008 October 12 18:06:57 »
Juan,

Wow, this is going to require some thinking through.  I think I get it, but it is certainly different, as I have always stretched the image to visibility before working on it.

To be clear, any stretch of the image makes it "nonlinear" because it applies a nonlinear transformation to the data, correct?  Would the rescale function do this as well?  As I understand it, it just maps the full range of values onto the available values of the format without clipping.  It clearly must be done to a combined image, because all colors would need to be re-mapped in the same manner.  But would it ruin the linearity and make the use of wavelets or other processes impossible (or produce poor results)?
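On the rescale question: a rescale of the form x -> (x - lo) / (hi - lo) is an affine map, so within one channel it preserves linearity; the relative spacing of pixel values is unchanged.  A quick check with hypothetical values:

```python
def rescale(x, lo, hi):
    """Map [lo, hi] onto [0, 1] with a single affine function."""
    return (x - lo) / (hi - lo)

a, b, c = 0.10, 0.20, 0.40
ra, rb, rc = (rescale(v, 0.10, 0.40) for v in (a, b, c))

# The ratio of differences is the same before and after the rescale,
# which is what "linear" means for subsequent processing.
before = (c - b) / (b - a)
after = (rc - rb) / (rb - ra)
```

Applied per channel with different lo and hi, however, it becomes exactly the separate rescale Juan warns about: each channel gets a different affine map, and the channel ratios change.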

I think I'm going to go back and start reading Gonzalez and Woods again...   :!:

Thank you very much for the help.

--Andy
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
RGB
« Reply #11 on: 2008 October 12 18:19:08 »
I knew Juan could explain it <G>
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline andyschlei

  • PixInsight Addict
  • ***
  • Posts: 157
    • http://www.obsballona.org
Balancing RGB Prior to Combine -- M33
« Reply #12 on: 2008 October 12 19:09:31 »
In a very quick take at reprocessing, I think I have some data problems that I need to investigate.  In the 32-bit image, the blue has 6 pixels at the maximum value, and then none over 80% of the dynamic range.  There are also various color anomalies (noise) that should have been cleaned up by the sigma data rejection.

The first stab I took at combining the original RGB frames had a serious color problem -- with the blue.  So I've got to find out what that problem is.

Nonetheless, Juan's description on processing is very helpful, even if it makes my head spin a bit.  That's why this is fun.

Clear skies,

--Andy
Observatorio de la Ballona
CDK 12.5, NP-101, C-11
AP-1200, AP-900
ST-10 XME, CFW-8, Astrodon v2 filters
Pyxis Rotator, TCF Focuser