Author Topic: Help with pre-process workflow for Ha subs with DSLR

Offline wildbillinMT

  • Newcomer
  • Posts: 7
Greetings Everyone:  I am relatively new to PI and this is my first post in this forum.  I have pre-processed and post-processed conventional RGB color images from my CentralDS cooled Canon 600D DSLR in PI in the past, but I need some expert help with the proper workflow for correctly processing some Ha subs captured with this same DSLR.  My telescope is a Takahashi Epsilon E130-D f/3.3 Astrograph.

Last night I captured a series of light frames using my DSLR.  Along with the light frames, I also captured appropriate darks and bias frames, and I have a set of flats that can be used in the pre-processing workflow.

Each of the raw light frames shows up as a red image.  The end result I want to achieve is a B&W image of the kind customarily seen for Ha images on most astronomy websites and imaging forums.

I have attached one of the 180-second raw subframes in .jpg format so you can see an example of the typical light subframes that were captured.  The target is the NGC 6820/6823 region in Vulpecula, which contains the Elephant's Trunk Nebula.  I captured both 90-second and 180-second lights in CR2 format that can be used for stacking to get a final image to post-process.

So, what step-by-step workflow would you recommend?  Please be specific, avoiding expert "jargon" that may not be clear to a virtual newbie like myself.  An outline of the specific PROCESS steps to use in PI would be helpful.  Alternatively, if there is a clear video tutorial out there that covers this exact topic, that would be helpful as well.

Many thanks in advance for your kind assistance with this.

Bill B in Montana

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 4729
Re: Help with pre-process workflow for Ha subs with DSLR
« Reply #1 on: 2016 July 14 14:39:02 »
there are a few ways to do this. the simplest is to follow the regular flow, being careful to select "SuperPixel" as the debayering method. other debayering methods will work, but they of course interpolate the data for the missing pixels in the bayer matrix. there are some debayering methods that look at, say, the G channel while processing the R channel, which would be a disaster here (there's no real signal in G, so it would only corrupt R). however i don't think PI implements any of those methods.
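to make the "SuperPixel" idea concrete, here is a small python sketch of what the method does, assuming an RGGB bayer layout (treat the pattern as an assumption and check your camera's actual CFA order). this is just an illustration, not PI code -- in PI you simply pick "SuperPixel" in the Debayer process:

# conceptual sketch only: what SuperPixel debayering does to an RGGB mosaic.
import numpy as np

def superpixel_debayer_rggb(cfa):
    """Collapse each 2x2 RGGB cell into one RGB pixel (half resolution)."""
    cfa = np.asarray(cfa, dtype=np.float64)   # avoid integer overflow when averaging
    r  = cfa[0::2, 0::2]                      # red photosites
    g1 = cfa[0::2, 1::2]                      # first green photosite
    g2 = cfa[1::2, 0::2]                      # second green photosite
    b  = cfa[1::2, 1::2]                      # blue photosites
    g  = (g1 + g2) / 2.0                      # average of the two greens
    return np.stack([r, g, b], axis=-1)       # (H/2, W/2, 3) RGB image

# for an Ha sub, only the red plane carries real signal:
# ha = superpixel_debayer_rggb(cfa)[..., 0]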

if you want to get fancy, you can try to split out the red channel early in the flow so you're not carrying around the G and B channels, which just cost extra disk space and processing time.

one method is to calibrate (and debayer) only, then use the BatchChannelExtraction script to pull out the red channel of each calibrated light.
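if it helps to see what that step amounts to, here is a rough python stand-in for the batch extraction (the folder names and the position of the channel axis are assumptions, not something PI prescribes -- in PI you would just run the BatchChannelExtraction script):

# keep only the red channel of each calibrated, debayered light.
from pathlib import Path
from astropy.io import fits

out_dir = Path("calibrated_R")           # hypothetical output folder
out_dir.mkdir(exist_ok=True)

for path in sorted(Path("calibrated").glob("*.fit")):
    with fits.open(path) as hdul:
        rgb = hdul[0].data                               # (3, H, W) or (H, W, 3)
        red = rgb[0] if rgb.shape[0] == 3 else rgb[..., 0]
        fits.writeto(out_dir / path.name, red,
                     header=hdul[0].header, overwrite=True)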

a more advanced method is to use the SplitCFA process to extract the individual R, G, G, B images from all your input files (lights, flats, darks, bias), throw away the G, G, B images, and proceed with calibration, registration and integration of the R images as though they came from a mono camera. however, this requires the use of ImageContainer, which lets SplitCFA run as a batch process over all the files.
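for what it's worth, here is a conceptual python sketch of that split step, assuming an RGGB pattern and that the raw frames are already available as monochrome CFA FITS files (the folder names are placeholders). in PI itself you would do this with SplitCFA inside an ImageContainer rather than with a script like this:

# slice every CFA frame (lights, flats, darks, bias) into its four
# sub-images and keep only the red one.
from pathlib import Path
from astropy.io import fits

for group in ("lights", "flats", "darks", "bias"):
    out_dir = Path(group + "_R")
    out_dir.mkdir(exist_ok=True)
    for path in sorted(Path(group).glob("*.fit")):
        with fits.open(path) as hdul:
            cfa = hdul[0].data               # full-resolution bayer mosaic
            red = cfa[0::2, 0::2]            # red photosites only (RGGB assumed)
            fits.writeto(out_dir / path.name, red,
                         header=hdul[0].header, overwrite=True)

# the *_R folders can then be calibrated, registered and integrated
# exactly as if they came from a mono camera.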

rob