RGB Balance

raphi

Hello,

I've almost used up my trial period, and although I'm (for now) only doing planetary imaging, I really like PI and think I'm going to keep it. I can do most of what I do in other software with PI, but I'm missing one feature for which I still have to open the old and buggy RegiStax: RGB balance. I'm not talking about stretching, which I know how to do, but about automatically aligning all RGB channels with a single click, as in RegiStax.

I think it should be possible to do this with PixelMath. Gerald Wechselberger has a tutorial which goes roughly in the right direction (it aligns to blue):
R: $T - (med($T[0])-med($T[2]))
G: $T - (med($T[1])-med($T[2]))
B: $T
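For what it's worth, here is a small numpy sketch (placeholder random data and an assumed (H, W, 3) array layout, not PixInsight code) of what that expression does: it only shifts the red and green channels so their medians line up with blue, without touching each channel's spread.

import numpy as np

# Stand-in image data; a real image would be loaded from file instead.
rgb = np.random.rand(512, 512, 3).astype(np.float32)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

r_shifted = r - (np.median(r) - np.median(b))   # R: $T - (med($T[0]) - med($T[2]))
g_shifted = g - (np.median(g) - np.median(b))   # G: $T - (med($T[1]) - med($T[2]))

# The medians now coincide, but the standard deviations are unchanged,
# so the histogram peaks still have different widths.
print(np.median(r_shifted), np.median(g_shifted), np.median(b))
print(r.std(), g.std(), b.std())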

This changes the histogram from this:
hist_pre.PNG

To this - it only flattens the curves but they are not aligned:
hist_post.PNG

What I want to achieve is this:
hist_reg.PNG

Can anyone tell me how to do it?

Clear skies!
raphael
 

Okay, with some "cheating" I can achieve what I was looking for. I took the values RegiStax reported for each channel after balancing (apparently it aligns to green), multiplied the channels in PI by the same values, and presto: the RGB channels are balanced and the picture looks the same:
rgb_balanced.PNG

The question remains, though: how can I do this dynamically, with a generic expression and without looking up the numbers in RegiStax, so that it works with every image?
 

How does the result look if you unlink the channels in STF and then recompute the STF for the image? You would have to drag the STF triangle to the HistogramTransformation bottom bar to see the new histogram (top), or apply that HT to the image, and the bottom window would show the transformation.
 
STF has never worked for my planetary images; it completely blows them out. On the left is the original and on the right the result after applying the STF (it looks similar with linked or unlinked channels):

stf.PNG


The histogram does look balanced, but the resulting image after the histogram transformation is not what I'm looking for :D
stf_balanced.PNG


This is my desired result after applying the following pixelmath:
R: $T*0.97
G: $T
B: $T*1.12

rgb_good.PNG


But the values are fixed, which is fine for pictures taken in one session, yet I had to get them from RegiStax, and I'd like a generic formula without fixed values. I tried to reverse-engineer how RegiStax does it but failed; I'm still a beginner with PixelMath.

[EDIT] I'm getting very close when I subtract the difference of the channels' max values from 1 and multiply by that. Visually it looks almost right with the following formula, but the histogram shows it is not perfectly balanced:
R: $T*(1-(max($T[0])-max($T[1])))
G: $T
B: $T*(1-(max($T[2])-max($T[1])))
rgb_max3.PNG


And with resample on:
rgb_max2.PNG


So close... ;)
 
I figured it out after all!

Because with my camera the blue channel is lower than both green and red, I need to add the differences between red/blue and green/blue to 1, like this:
R: $T*(1-(max($T[0])-max($T[1])))
G: $T
B: $T*(1+(max($T[0])-max($T[2]))+(max($T[1])-max($T[2])))

The result looks almost the same as with absolute numbers, good enough for me ;)
rgb_generic.PNG
 
I'm slightly uncomfortable about this solution for two reasons.
It is entirely dependent on three single-channel samples: max($T[0]), max($T[1]) and max($T[2]), which I'll call Rm, Gm and Bm. This has problems.
Firstly, the "max" operator is a huge noise amplifier: if there is a single pixel with an atypically high value in the specified channel, that is the pixel you will be using.
Secondly, the max values will (virtually always) be at different pixels. If you are only going to sample one pixel, you really want to sample a white pixel.
Let's just suppose that the samples were all at the same white pixel. Then this calibration would set:
Rm => Rm - Rm*Rm + Rm*Gm = R'm
Bm => Bm + Bm*(Rm+Gm) - 2*Bm*Bm = B'm
For white, you need R'm=B'm=Gm, which would only happen by chance.
The obvious calibration using only these samples is R => R*Gm/Rm; B => B*Gm/Bm. Have you tried this? My guess is that if this works, then the same linear scaling using "med" samples will also work, and be much more robust.
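To be concrete, something like this rough numpy sketch (assumed (H, W, 3) array layout and [0,1] data range, not PI code) is what I mean by a simple linear scaling to green:

import numpy as np

def balance_to_green(rgb):
    # Scale R and B so their medians match the green median.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    r_scale = np.median(g) / np.median(r)         # med(G)/med(R)
    b_scale = np.median(g) / np.median(b)         # med(G)/med(B)
    out = rgb.copy()
    out[..., 0] = np.clip(r * r_scale, 0.0, 1.0)  # R => R*Gm/Rm, with med samples
    out[..., 2] = np.clip(b * b_scale, 0.0, 1.0)  # B => B*Gm/Bm, with med samples
    return out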
 
I don't have any planetary images, but I tried it on a deep space image with a strong colour imbalance. Using the "max" operator did not work, but using the "med" operator gave a result that looks about right (images attached).
 

Attachments

  • pixelmath.JPG
  • before.jpg
  • after.jpg
If you want to batch process lots of images then your options are limited. If you are aiming at manually correcting a few images, I would suggest:
  • use a preview window with "statistics" to find the median values over an "averagely white" region;
  • calculate med(G)/med(R) and med(G)/med(B) for that region;
  • enter these as the "red" and "blue" values in the "manual white balance" section of the ColorCalibration process (leaving "green" = 1), and apply it to your image (this does exactly the same thing as the PixelMath above; a short sketch of the arithmetic follows below).
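A hedged numpy sketch of that arithmetic (the region coordinates and array layout are assumptions for illustration, not values from a real image):

import numpy as np

def manual_wb_factors(rgb, region):
    # Return the "red" and "blue" factors for ColorCalibration's manual mode.
    x0, y0, x1, y1 = region                 # preview-style rectangle
    patch = rgb[y0:y1, x0:x1, :]
    med_r = np.median(patch[..., 0])
    med_g = np.median(patch[..., 1])
    med_b = np.median(patch[..., 2])
    return med_g / med_r, med_g / med_b     # med(G)/med(R), med(G)/med(B)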
 
Thank you for your input and for taking the time to look into this. I tried your method, but as with every formula where I used the median, I cannot get the channels aligned the way I can with the max value. My formula comes from a trial-and-error approach rather than a deep understanding of how PixelMath works, but the result is very close to the perfect alignment I get with the numbers taken from RegiStax.

Maybe there's a difference between balancing planetary and deep-sky images? Planetary images are stacks of thousands of single frames, each exposed for ~10 ms or less, with no flats or darks. If you PM me your email address I can send you the Jupiter file I'm using, if you like; it's 2.2 MB and therefore can't be attached.

This is how the histogram looks for Jupiter using median values; blue is slightly mismatched and red is way off:
rgb_med.PNG


About batch processing: I don't know if you've seen my video in the tutorial section, but ImageContainer works very well for that purpose; I can load up many pictures and process them in one go.
 
I'm puzzled by your results. Could you possibly post a representative image somewhere - I'd like to work out what's going on!
 
Okay I've uploaded the tiff I've used here: https://filebin.net/96hafx0lyz5voat3

I stacked the image with AutoStakkert 3. The settings I used during stacking might have altered the picture, although they are pretty standard: RGB Align, Drizzle x1.5 and Debayer. The .ser video file I used for stacking was recorded with a color camera (ASI290mc), and I used an ADC.

[EDIT] btw for reference, the values according to RegiStax for this picture are:
R: $T*0.95
G: $T
B: $T*1.15

This produces a histogram in PixInsight identical to the one in RegiStax.
 
OK. The problem with the PixelMath solution is that med (or mean) is calculated over the whole image, so the dark background dominates the result. A better method is to use ColorCalibration with a preview window constraining the calculation to the Jupiter disc. The result is:
1597597711502.png

I guess RegiStax is doing something similar.
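To illustrate why whole-image statistics go wrong here, a small numpy sketch (assumed names, and a crude brightness threshold standing in for the preview; not what PI computes internally):

import numpy as np

def channel_means(rgb, threshold=0.05):
    # Compare per-channel means over the full frame vs. the bright disc only.
    lum = rgb.mean(axis=2)
    on_disc = lum > threshold               # stand-in for a preview/mask
    full = rgb.reshape(-1, 3).mean(axis=0)  # pulled towards the black background
    disc = rgb[on_disc].mean(axis=0)        # statistics of the planet itself
    return full, disc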
 
I think you are onto something, and the idea of your formula was basically right. After looking at the statistics I changed the formula to use the mean instead of the median, and now the RegiStax and PixInsight histograms are almost identical; only the calculated histogram is a tiny bit less tall (half a millimetre or so) than the one made with the fixed numbers.

Formula:
R: $T*mean($T[1])/mean($T[0])
G: $T
B: $T*mean($T[1])/mean($T[2])
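As a quick sanity check outside PI (a numpy sketch with placeholder random data, not my Jupiter file), scaling by these mean ratios makes all three channel means coincide with the green mean:

import numpy as np

rgb = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in image data
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

r_balanced = r * g.mean() / r.mean()   # R: $T*mean($T[1])/mean($T[0])
b_balanced = b * g.mean() / b.mean()   # B: $T*mean($T[1])/mean($T[2])

print(r_balanced.mean(), g.mean(), b_balanced.mean())  # all three should match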

rgb_mean.PNG


With RegiStax numbers:
rgb_num.PNG


I will do some more tests but I think I can use this formula for my future processing. Great, thank you very much for the help (y)
 
Hey Guys,

I am *not* terribly mathematically inclined, so I really could be wrong here. That is the problem with knowing a little: it allows you to fall into the trap of thinking you understand something while actually getting it all wrong.

So with that preface: your machinations above basically look like finding the normalizing factor between the filters. This looks like LinearFit to me. If you perform a LinearFit on the red and blue channels with green as the reference, I think you get exactly your result (with the additional benefit of taking care of any pedestal bias in the channels, though that is not important here because Jupiter is so much brighter than the sky).

I want to quickly add that this is NOT the way to properly determine the correct weights for each channel (photometrically, with color indices or the like). However, since Jupiter is mostly reflected sunlight (white), the math works in your favor and is likely fairly close. (This is explained in other threads and commented on by Juan as well.)

-adam
 

Attachments

  • Capture.JPG
OK, it turns out RegiStax does more than just align to green. I tried with a picture of Saturn, and the numbers RGB Balance gave me are:
R: 1.02
G: 0.95
B: 1.26

Obviously the formula doesn't work here, because we leave green as it is and I cannot tell what RegiStax is basing the green number on. The lines look the same with the formula or the numbers, but the histogram with the numbers taken from RS is slightly compressed to the left. I still prefer the formula, though, because the mean value of all channels is the same (which is not the case with the RS numbers):

RS numbers:
rgb_sat_num.PNG


mean formula:
rgb_sat.PNG


@Adam: I'm trying to follow what you did. I've extracted the RGB channels and loaded the R, G and B images into the ChannelCombination process. But when I drag the triangle onto the source image nothing happens, and Apply Global creates an image that looks the same as the original. Am I missing a step?

As for the proper way to do it, I completely agree. My naive thinking is that if I can mimic in PI what I can do in RegiStax, then it's good enough. Visually I can't tell the difference between RS and PI with the "mean" formula; only the histogram betrays a minor difference. The initial motivation for doing it in PI was that I can batch-process many images with ImageContainer, whereas in RS I have to do it image by image. But in the end I seem to get the same result.
 
If I have enough stars in a solvable image I always use PhotometricColorCalibration. That way it doesn't matter what the background colour balance is; photometrically calibrated stars are the reference (you can then pick whichever white balance reference you want, depending on the image type).
You are quite right that the method proposed above is a simple linear scaling (like all white balance adjustments). The problem is what to choose as the white reference. In the absence of anything else, it is basically assuming that the Jupiter image should "on average" (with a bit of multi-scale filtering) be white. You could not use the same method on Mars!
 
You could not use the same method on Mars!
This is true for both RegiStax and PixelMath; both turn Mars into a bluish planet, although the histogram looks right. Which, come to think of it, is the reason I never RGB-balanced Mars in RS in the first place:

rgb_mars.PNG


You are right about the white balance; this is exactly how it is described in the RS manual:
Pressing the Auto-balance function will start a procedure that estimates the best colour mix to get a good white balance. This is however not always a good way to set the colours. If the estimate is wrong it is probably wise to reset to the original values. The 3 sliders will move each colour channel left/right in the histogram; this is comparable to changing the brightness of a channel. The 3 colour weights will compress/expand the histogram; this is comparable to using contrast on a colour channel.
 

Sorry... I did not see that you did the LinearFit. Here are the steps (a rough sketch of the underlying fit follows below):
1. Apply RGBWorkingSpace with weights of 1,1,1 to the color image.
2. Break your image up into R, G and B panes.
3. Open LinearFit.
4. Choose the green pane as the reference.
5. Drag the triangle onto the red image and then onto the blue image.
6. Load these images into ChannelCombination and apply globally.
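A rough numpy sketch of the underlying idea (a deliberate simplification with an assumed (H, W, 3) layout; the real LinearFit process is more robust than a plain least-squares fit):

import numpy as np

def linear_fit_to_green(rgb):
    # Fit green ~ offset + slope * channel, then remap the channel with that
    # fit so it tracks green. The offset term is what absorbs a constant pedestal.
    g = rgb[..., 1].ravel()
    out = rgb.copy()
    for ch in (0, 2):                            # red and blue channels
        x = rgb[..., ch].ravel()
        slope, offset = np.polyfit(x, g, 1)      # least-squares line g ≈ offset + slope*x
        out[..., ch] = np.clip(offset + slope * rgb[..., ch], 0.0, 1.0)
    return out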

You will get a different result than the original.
-adam
 
"...both turn Mars into a blueish planet although the histogram looks right. "
The histogram doesn't look right, it looks white (that's what you get when all three channels are the same). Mars is reddish, so the histogram should look unbalanced!
 