This is what I did, though there would normally be other steps before going non-linear, such as deconvolution, and a lot more time spent. You didn't include the image you obtained, so I don't know how my example compares.
The process I did was...
Integrate the three images with no rejection to make a luminance image.
Manually stretch with HistogramTransformation, starting from the STF settings, but before applying I moved the shadow and midtone sliders apart to reduce contrast, which reduces apparent noise. I think this is the crucial stage; I have only just come to the conclusion that perhaps I was over-stretching my images.
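To see why pulling the sliders apart reduces contrast, here is a rough numpy sketch of a midtones-style stretch. The function and parameter names are mine, not a PixInsight API; this is just an illustration of the idea:

```python
import numpy as np

def mtf(m, x):
    # Midtones transfer function: identity at m = 0.5,
    # maps x = m to 0.5, and fixes 0 -> 0 and 1 -> 1.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(img, shadows=0.0, midtones=0.25, highlights=1.0):
    # Clip to the shadows/highlights range, then apply the MTF.
    # Moving the shadows and midtones points apart flattens the
    # curve, which lowers contrast and amplifies faint noise less.
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, x)
```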
LinearFit with Ha as reference.
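LinearFit rescales each frame so its signal levels match the reference. A least-squares sketch of the idea (the helper name is mine, not the PixInsight API):

```python
import numpy as np

def linear_fit(target, reference):
    # Solve for a, b minimising |a*target + b - reference|^2,
    # then return the target rescaled to the reference's levels.
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b
```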
PixelMath to combine the colour images:
R: 0.5*SII_DBE + 0.5*Ha_DBE
G: 0.15*Ha_DBE + 0.85*OIII_DBE
B: OIII_DBE
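As a sanity check, the same channel mix in numpy (assuming the three DBE-corrected frames are arrays scaled to [0, 1]):

```python
import numpy as np

def combine_sho(ha, oiii, sii):
    # R = 0.5*SII + 0.5*Ha, G = 0.15*Ha + 0.85*OIII, B = OIII
    r = 0.5 * sii + 0.5 * ha
    g = 0.15 * ha + 0.85 * oiii
    b = oiii
    return np.stack([r, g, b], axis=-1)
```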
BackgroundNeutralization with the reference image set to a small preview of the background, and Upper limit set to 0.003.
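Roughly, BackgroundNeutralization measures each channel's level in the reference preview, ignoring pixels above the upper limit, and adjusts the channels so the background comes out neutral. A simplified additive sketch (the real tool offers several working modes, so treat this as an illustration only):

```python
import numpy as np

def neutralize_background(rgb, preview, upper=0.003):
    # Measure each channel's background in the preview region,
    # skipping pixels brighter than `upper` (likely stars/nebula).
    means = []
    for c in range(3):
        ch = preview[..., c]
        sel = ch[ch <= upper]
        means.append(sel.mean() if sel.size else ch.mean())
    target = float(np.mean(means))
    # Offset each channel so all three share the same background level.
    out = rgb.copy()
    for c in range(3):
        out[..., c] += target - means[c]
    return np.clip(out, 0.0, 1.0)
```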
Manual HistogramTransformation stretch, as with the luminance.
Noise reduction with ACDNR on the colour image as below:
![](http://www.mikeoates.org/pi/acdnr_joelshort.jpg)
Make a copy of the luminance and smooth it with a Convolution (StdDev 2). Then increase its contrast with Curves to produce an SN mask, which is then applied to the colour image.
With this mask in place, use Curves to increase the saturation in the highlights; then invert the mask and decrease the colour saturation in the background, though not by as much.
Apply the same SN mask to the luminance. At this point one would normally perform sharpening etc., but for now I just increased the contrast and applied HDRMultiscaleTransform.
Now do an LRGBCombination and make the final contrast tweaks. I know this could be improved a lot, possibly with a little noise reduction on the luminance and/or on the final image.
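A crude picture of what LRGBCombination does: replace the colour image's lightness with the processed luminance. PixInsight actually works in CIE L*a*b*; this mean-based version is only illustrative:

```python
import numpy as np

def lrgb_combine(lum, rgb, eps=1e-6):
    # Rescale each pixel's RGB so its mean-based brightness matches
    # the processed luminance, preserving the colour ratios.
    cur = rgb.mean(axis=-1, keepdims=True)
    return np.clip(rgb * (lum[..., None] / (cur + eps)), 0.0, 1.0)
```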
The full image is here: http://www.mikeoates.org/pi/final_joelshort.jpg

I hope this helps.
Mike