Well, I have this nice data from SRO of the Bubble Nebula, and what I ended up with is a nice LRGB image that I now have a good amount of Ha data to add to. I was looking at Kayron's Light Vortex Astronomy tutorial on color calibration, and it's a good deal different from what I'm used to. One thing I can't seem to wrap my head around is that in the ScreenTransferFunction process you can link the RGB channels for what it perceives the RGB image will look like, or you can deselect that icon (it looks like a chain) and get a reasonably well balanced image. See the screen capture below showing both previews: the one on the left is the unlinked version, while the right shows the linked. Clearly there is a color bias in the linked image. I opened the HistogramTransformation process with Track View enabled, and moving the midpoint to the left you can clearly see that red and green are shifted to the right of blue. So what is being used to determine the unlinked color combination, and how can I emulate that setting? That would at least get me very close, and then it could be tweaked per color channel as necessary with HistogramTransformation, right?
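From poking around, my rough understanding is that the unlinked option simply computes the auto-stretch statistics per channel instead of once from the combined image, which is why each channel's background lands in the same place. Here's a sketch in Python/NumPy of what I think it does. The -2.8 MAD shadows clip and 0.25 target background are the defaults I believe PixInsight's AutoStretch uses, and the function names are mine, so treat the details as assumptions:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function, as used by PixInsight's histogram tools."""
    x = np.asarray(x, dtype=float)
    return np.where(
        (x == 0.0) | (x == 1.0), x,
        ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m),
    )

def auto_stf(channel, shadows_clip=-2.8, target_bkg=0.25):
    """Per-channel auto-stretch, roughly what an UNLINKED STF computes.

    channel: float array with values in [0, 1].
    shadows_clip / target_bkg mirror what I believe are the AutoStretch
    defaults (shadows a few MADs below the median, background at 0.25).
    """
    med = float(np.median(channel))
    # MAD scaled to be comparable to a standard deviation
    mad = 1.4826 * float(np.median(np.abs(channel - med)))
    # Shadows clipping point: a few MADs below the median, kept in [0, 1]
    c0 = min(max(med + shadows_clip * mad, 0.0), 1.0)
    # Midtones balance: pick m so the clipped median maps to target_bkg
    # (solving MTF(m, x) = y conveniently gives m = MTF(y, x))
    m = float(mtf(target_bkg, (med - c0) / (1.0 - c0)))
    return mtf(m, np.clip((channel - c0) / (1.0 - c0), 0.0, 1.0))
```

Running auto_stf on R, G and B separately would mimic the unlinked preview; computing one clip point and midtones value from the combined pixels and applying it to all three channels would correspond to the linked version, which is why a channel with a different pedestal shows a cast there.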
Following the normal process, or at least I think it's normal, of ChannelCombination, DynamicCrop (the images were dither guided), AutomaticBackgroundExtractor, and BackgroundNeutralization, I then try to run ColorCalibration on the resulting image. Normally this works just fine: if I have a galaxy picture I use the core of the galaxy and the starless background. But with this image there is no clear background area that I can find; it's all dark or light nebula or stars, and using either a light or a dark area leaves me with very funky colors, mostly very prominent red with little to no blue/green component that I can see. The tutorial I'm following does all of these things in the linear state. As you can see below, I was able to balance the colors, but only in the stretched state.
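Part of why the sample choice seems to matter so much: as I understand it, BackgroundNeutralization essentially rescales the channels so the sampled region comes out neutral, so a sample that isn't really neutral sky tints the whole image. A toy version of that idea in NumPy (the function and its single "rescale to the mean" behavior are my simplification; the real process has several working modes):

```python
import numpy as np

def neutralize_background(rgb, bkg_mask):
    """Scale R, G and B so the masked 'background' pixels end up with
    equal per-channel means. This is only the gist of what a
    background-neutralization step does; if the masked pixels are
    actually nebula rather than sky, the whole image gets skewed.

    rgb      : float array of shape (H, W, 3), values in [0, 1]
    bkg_mask : boolean array of shape (H, W) selecting the sample
    """
    means = np.array([rgb[..., k][bkg_mask].mean() for k in range(3)])
    target = means.mean()
    return np.clip(rgb * (target / means), 0.0, 1.0)
```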
So I guess the point of all this is to figure out a good way to balance the color in the linear state on images that clearly offer no good sample of neutral background. What do you do? The image itself is 32.5 hours of data, with an additional 15 hours of Ha I'm trying to tastefully blend in.
So far that process is muddier than I had thought. I should be able to retain the good star colors thanks to the LRGB data, while using the Ha data to enhance the fainter Ha regions.
The ColorCalibration process really confuses me when an image like this is being processed. And another thought: when is it best to use the SCNR process to remove green, if present? Before or after color calibration? Wouldn't running it afterward alter the color balance?
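For reference, here's my understanding of why the ordering worries me. SCNR's average neutral protection, as I believe it works, caps green at the mean of red and blue; the blend-by-amount detail is my assumption, so take this as a sketch rather than the actual implementation:

```python
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """Green reduction in the spirit of SCNR's 'average neutral'
    protection: cap G at the mean of R and B, blended by `amount`.
    Since it lowers G wherever green exceeds that mean, running it
    after color calibration can shift a balance you have already set.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_capped = np.minimum(g, 0.5 * (r + b))  # only ever reduces green
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_capped
    return out
```

Pixels whose green is already at or below the red/blue average are untouched, which is why star cores and neutral background survive while a green cast gets pulled down.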
And, yes, I get halos in the blue channel on bright stars.
Thanks for bearing with me on this.
Steve