Author Topic: DSLR RGB Ha Combination  (Read 677 times)

Offline monkeybird747

  • Newcomer
  • Posts: 27
DSLR RGB Ha Combination
« on: 2018 July 02 10:27:13 »
Hello all, I've been reading lots of PixInsight forum posts and they have helped considerably over the last year of learning this platform. Lately I've been reading posts about combining DSLR Ha and RGB data (in particular this one by MikeOates). Mostly I'm looking for advice on getting the two data sets ready for combination.

My RGB data manual workflow (full spectrum T3i with CLS-CCD clip filter-FITS file format):
-Calibration masters following Vincent's PI tutorial
-Calibrate lights-calibrate and optimize master dark selected-detect CFA selected
-Cosmetic Correction using master dark
-Debayer with VNG and RGGB
-Subframe Selector process module (not the script) generating FITS weighting keyword
-Registration using best image from previous step-generate drizzle data selected
-Local Normalization using same reference image as above-default settings
-Image Integration-registered lights+local normal+drizzle files-Generate drizzle, Evaluate Noise-Average, local normalization, FITS keywords-Linear Fit, local normalization, clip low and high pixels
-Drizzle Integration-add drizzle and local normal files-Enable CFA Drizzle-default settings
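For my own reference, the workflow above reduces to a linear pipeline. The sketch below just names the stages in order (these are placeholder names for the GUI steps, not PixInsight's actual scripting API):

```python
# Placeholder stage names for the manual workflow -- PixInsight's real
# PJSR scripting interface is different; this is only an ordering sketch.
PIPELINE = [
    "image_calibration",     # master bias/dark/flat, optimize dark, detect CFA
    "cosmetic_correction",   # using the master dark
    "debayer_vng_rggb",      # VNG demosaic, RGGB pattern
    "subframe_selector",     # writes a FITS weighting keyword
    "star_alignment",        # best frame as reference, generate drizzle data
    "local_normalization",   # same reference frame, default settings
    "image_integration",     # average combine, linear-fit clipping
    "drizzle_integration",   # CFA drizzle from the drizzle + normalization files
]

for step in PIPELINE:
    print(step)
```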

The questions come in at the Ha preprocessing phase. Some posts suggest using the same workflow as above, registering the Ha master with the RGB master, and then using ChannelExtraction to extract the red channel for use with the NBRGBCombination script (200% RGB and Ha scale 4). This is a pretty simple approach, but my calibrated Ha frames look like the master flat has been under- or over-applied (dark corners, dark circle in center of frame). However, the extracted red channel looks pretty clean, so maybe it's not an issue. I've also had some issues registering the Ha master to the RGB master, where some of the Bayer pattern appears to be visible in parts of the image (and oddly the pattern changes based on the level of zoom).
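For anyone unfamiliar with the channel-extraction step, it amounts to pulling one plane out of the debayered image. A minimal numpy sketch (assuming a simple height x width x 3 array in R, G, B order, which is an illustration, not PixInsight's internal layout):

```python
import numpy as np

# A stand-in for a debayered RGB master: (height, width, 3) floats in [0, 1].
rgb = np.random.rand(100, 150, 3)

# Extract the red channel as a standalone mono image,
# analogous to ChannelExtraction producing the R plane for NBRGBCombination.
red = rgb[:, :, 0].copy()

print(red.shape)  # (100, 150)
```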

Another option offered by Mike Oates would be to start with the Debayer and ChannelExtraction steps, then proceed with calibration as in the above workflow, minus the subsequent debayer and channel-extraction steps. Mike uses the Superpixel debayer, which I have not used before, and doesn't mention drizzle integration. This is where my first question comes in: if using the Superpixel debayer option, can I still do a drizzle integration? Perhaps a bigger question would be: if one has a sufficient number of sufficiently dithered images, are there any conflicting processes or processing steps that would prevent someone from using drizzle integration?
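As I understand the Superpixel method, each 2x2 RGGB cell is binned into a single RGB pixel, so there is no interpolation (and no Bayer-pattern artifacts), at the cost of halving the image scale. A rough numpy sketch of the idea, not PixInsight's implementation:

```python
import numpy as np

def superpixel_debayer(cfa):
    """Bin each 2x2 RGGB cell into one RGB pixel (half-resolution output).

    Sketch of the Superpixel method: no interpolation between cells,
    so no demosaicing artifacts, but the output is half the linear size.
    """
    h, w = cfa.shape
    cfa = cfa[:(h // 2) * 2, :(w // 2) * 2]   # trim any odd row/column
    r  = cfa[0::2, 0::2]                      # R  at (0, 0) of each cell
    g1 = cfa[0::2, 1::2]                      # G1 at (0, 1)
    g2 = cfa[1::2, 0::2]                      # G2 at (1, 0)
    b  = cfa[1::2, 1::2]                      # B  at (1, 1)
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

cfa = np.arange(16, dtype=np.float64).reshape(4, 4)
rgb = superpixel_debayer(cfa)
print(rgb.shape)  # (2, 2, 3)
```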

Mike's method also mentions rescaling, resampling, oversampling, and downsampling. These are terms I haven't yet come across in my basic image processing steps. Are some of them interchangeable? How do I know when I need to change an image's scale to match the RGB image scale? This appears to be necessary after using the Superpixel debayer option, correct? When I register the master images, is the scaling problem taken care of by the Auto setting of Pixel Interpolation in StarAlignment?
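If I've got the geometry right, the resampling question comes from Superpixel output being half the linear resolution of the VNG-debayered RGB, so the Ha master needs a 2x upsample before (or during) registration. A toy nearest-neighbour sketch of that size change (PixInsight's Resample or StarAlignment would use a smoother interpolation such as bicubic spline or Lanczos):

```python
import numpy as np

# Superpixel Ha master at half the RGB master's linear resolution.
ha_half = np.random.rand(50, 75)

# Nearest-neighbour 2x upsample: duplicate every row and column so the
# Ha frame matches the RGB frame's pixel grid before registration.
ha_full = np.repeat(np.repeat(ha_half, 2, axis=0), 2, axis=1)

print(ha_full.shape)  # (100, 150)
```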

I've seen some reference to adding SplitCFA in the workflow somewhere. Should I be using this instead of Channel Extraction?

Does it matter if I register the master Ha to the master RGB (or RGB reference frame), or should I register the individual calibrated Ha frames to the RGB reference image before integrating them?

Due to some poor use of the project save feature on my part, I lost all my work on my latest HaRGB DSLR image when the system crashed during Deconvolution :(. So I thought I would ask these questions before I begin the next run. What say you, PixInsight gurus? What method will allow me to make the best use of the Ha data without adding a bunch of extra noise to my RGB image?

Attached are some low-res JPG screen grabs of the RGB and Ha images I want to combine (couldn't figure out the insert image button). The RGB has been processed and is non-linear, while the Ha image has not. I show three zoom levels to show the artifact that looks like it might be the Bayer pattern; it changes based on the level of zoom. I can add a Dropbox link if anyone wants some raw data.


Offline aworonow

  • PixInsight Addict
  • Posts: 247
    • Faint Light Photography
Re: DSLR RGB Ha Combination
« Reply #1 on: 2018 July 12 06:46:07 »
I'd like to address just one thing to simplify your tasks...don't drizzle.
For drizzle processing to be of any use, you have to meet two criteria: 1) If your seeing is, say, 2 arcseconds, then your pixel resolution must be at least 3 times that (i.e. 6 arcseconds per pixel). Anything less will not improve your resolution, and even 3x will probably not show any obvious difference. 2) If you drizzle and double your image edge dimensions, then you are increasing the noise per pixel by a factor of 2. So, to drizzle and get the same noise level as without drizzling, you need 4x as many images. And even then the noise becomes correlated across adjoining pixels, which is generally a problem.
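The noise side of that argument is just arithmetic, which this little check spells out (using Alex's 2x figure as the premise):

```python
# Premise (from the post): drizzling to 2x the image edge dimensions
# raises the noise per pixel by roughly the same linear factor.
drizzle_factor = 2
noise_increase = drizzle_factor

# Stacking N frames averages noise down by sqrt(N), so cancelling a 2x
# per-pixel noise penalty requires sqrt(extra) = 2, i.e. 4x the frames.
extra_frames_factor = noise_increase ** 2
print(extra_frames_factor)  # 4
```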
I have not dealt with CFA images, but the array of colored pixels is always blended into each pixel in some manner or another. In the process, adjoining pixels share information (and noise). I do not know if or how that might affect drizzle, but I suspect it is antithetical to the assumptions behind drizzling.
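That pixel-sharing effect is easy to demonstrate in one dimension: interpolate pure noise with a 2-tap average and adjacent outputs become correlated because they share a raw sample. (A toy illustration, not any particular demosaicing algorithm.)

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.standard_normal(100_000)   # uncorrelated "sensor" noise

# 2-tap interpolation: each output averages two raw neighbours, so
# consecutive outputs share one raw sample and are no longer independent.
interp = (raw[:-1] + raw[1:]) / 2.0

corr = np.corrcoef(interp[:-1], interp[1:])[0, 1]
print(round(corr, 2))   # close to the theoretical value of 0.5
```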

good luck...Alex

Offline monkeybird747

  • Newcomer
  • Posts: 27
Re: DSLR RGB Ha Combination
« Reply #2 on: 2018 July 12 19:33:11 »
Thanks for the reply, Alex. That's an interesting take on drizzling. In my limited time in this field I've been working under the assumption that drizzling was well justified for DSLR images, provided sufficient dithering was used during capture.

For the below image I used Mike Oates' technique for the Ha image, and then combined using the NBRGBCombination script while the data was still linear. I was not able to use CFA drizzle following Mike's steps, but I did do a normal drizzle, despite not having enough images to justify its use for the Ha data (more of an experiment). The results were OK, but I don't know if I really gained anything by debayering and splitting channels first, as opposed to processing normally and then splitting channels after integration. My instrument may not be fine enough to detect the difference.

Here is a side-by-side, from left to right, of the fully processed RGB (I had to fix some star cores later), the integrated Ha, and the fully processed combination of the two. Feedback is welcome, although it's a low-res screen shot. I think the RGB can stand on its own, but it did gain significant detail from the Ha channel.