I recently imaged the Bubble Nebula and the nearby open cluster in Ha, L, and RGB. I have 90 minutes Ha, 30 minutes L, and 20 minutes each RGB (RGB binned 2x2).
My processing approach was to combine the luminance and Ha (in PixelMath; L + 3*Ha turned out best) and enhance that image (Curves, HDR wavelets, and à trous wavelets) to make a master luminance frame. I then blended the Ha into the red channel and finished with an LRGB combination using the data from each color.
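In numpy terms, the two blends above look roughly like this (a sketch only; the array names, the clipping, and the 50/50 Ha-to-red weight are my assumptions, not exact PixelMath settings):

```python
import numpy as np

# Hypothetical linear stacked frames, normalized to [0, 1].
rng = np.random.default_rng(0)
L = rng.random((100, 100)) * 0.5   # luminance stack
Ha = rng.random((100, 100)) * 0.2  # H-alpha stack
R = rng.random((100, 100)) * 0.4   # red stack

# Master luminance: the PixelMath expression L + 3*Ha, clipped back to [0, 1].
master_L = np.clip(L + 3 * Ha, 0.0, 1.0)

# Ha-enhanced red: one common form is a simple weighted blend
# (the weight w is illustrative, not the value I actually used).
w = 0.5
red_enhanced = np.clip((1 - w) * R + w * Ha, 0.0, 1.0)
```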
The result had the typical washed-out red look. I have done my best to fix it with color-balance adjustments, but the red still really isn't red. A copy is attached, and you can see the full-sized image in my web gallery.
I started to try the "A New Approach to Combination of Broadband and Narrowband Data" method from the tutorial, intending to make a better red frame to drive the red color. But when I try to create the continuum map, I get a very strange result: the centers of bright areas are dark. If I use an integer upsample for the red, some pixels are exactly zero; with a smoother upsample, they are close to zero but not quite.
To create the continuum map, I used PixelMath (Red / Ha) and rescaled the result. I have attached a small image showing the artifacts.
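A toy 1-D example of what I think is happening (the profile values are made up, just to illustrate the artifact): if the upsampled binned red has a zero where the Ha peak is, the ratio is zero at the peak, and rescaling pins that pixel to black.

```python
import numpy as np

# Toy profiles across a bright region.
ha = np.array([0.05, 0.30, 0.90, 0.30, 0.05])
# Integer (nearest-neighbour) upsampling of 2x2-binned red can leave a
# zero right at the peak:
red = np.array([0.04, 0.25, 0.00, 0.25, 0.04])

eps = 1e-6                        # guard against division by zero
cont = red / np.maximum(ha, eps)  # PixelMath-style Red / Ha

# Rescale to [0, 1], as the rescale step does.
cont = (cont - cont.min()) / (cont.max() - cont.min())
# The peak pixel, where red is zero, maps to 0 -> the dark-center artifact.
```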
Is this due to starting from 2x2-binned data and trying to build the map from an upscaled image? Are there any normalizations I should try before building the map? (I am starting from the raw combined data to create it.)
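One normalization I have been considering (this is my guess at what the tutorial intends, not something I have confirmed): fit the Ha frame linearly to the red frame before dividing, in the spirit of a LinearFit step, so the two are on a common scale.

```python
import numpy as np

# Hypothetical linear frames; the fainter, offset Ha simulates a
# narrowband frame on a different scale from the red.
rng = np.random.default_rng(1)
red = rng.random((64, 64))
ha = 0.3 * red + 0.02 + 0.01 * rng.random((64, 64))

# Least-squares fit red ≈ a*ha + b, then put Ha on red's scale.
a, b = np.polyfit(ha.ravel(), red.ravel(), 1)
ha_matched = a * ha + b

# Continuum map from the scale-matched frames.
continuum = red / np.maximum(ha_matched, 1e-6)
```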
Any suggestions would be appreciated.