
Topics - monkeybird747

So somewhere a few updates back, it seems that transferring the STF to the HistogramTransformation tool in order to apply the STF stretch permanently no longer produces results that look like the original STF. Primarily, the stars are much, much dimmer. Did I bump a setting somewhere along the way? It does this on all my PI machines (Windows, Linux, and Mac).

Thoughts are appreciated. The image on the left is the STF view, and the image on the right was permanently stretched by dragging the STF triangle to the HT tool, removing the STF, and then applying HT to the image. It looks completely different from the STF version.
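For anyone who wants to compare numbers: the stretch that STF computes and that HT is supposed to bake in is the standard midtones transfer function (MTF). A minimal sketch, with `m` being the midtones balance you see in the HT dialog (shadows/highlights clipping omitted):

```python
def mtf(m, x):
    """Midtones transfer function used by STF/HistogramTransformation.
    m is the midtones balance, x the input pixel value; both in [0, 1]."""
    if x == 0.0 or x == 1.0:
        return x  # MTF fixes black and white points
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

# A small midtones balance (aggressive stretch) lifts faint star pixels a lot:
faint = mtf(0.01, 0.05)   # a dim star pixel after the stretch
mid = mtf(0.01, 0.5)      # a mid-level pixel
```

If the permanently stretched image looks much dimmer in the stars, something other than this same `m` must be getting applied.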

Did something change in the program, or am I just more observant now  ???

In reading release notes from Juan I noticed this statement:

“We cannot expect any robust color representation when using narrowband filters, or filters located in the UV or IR wavelength ranges.”

Would this apply to using a clip-in CLS-CCD light pollution filter? I believe it has a UV filter component.

Someone else asked if adjusting saturation evenly across all channels effectively negates the PCC process, but that thread is a year old and unanswered.

Finally, is SCNR necessary if you use PCC? I’m currently applying it prior to PCC.
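For context on why I'm unsure about the order: PCC calibrates overall color, while SCNR just suppresses excess green. As I understand the documented "average neutral" protection method, it clips green to the mean of red and blue; a sketch for a single pixel (the `amount` blending is my reading of the tool's parameter):

```python
def scnr_average_neutral(r, g, b, amount=1.0):
    """SCNR 'average neutral' green removal for one pixel, values in [0, 1].
    Green is limited to the mean of red and blue, blended by 'amount'."""
    g_limit = min(g, 0.5 * (r + b))
    return r, g + amount * (g_limit - g), b

# A green-dominant pixel gets its green pulled down to (r + b) / 2:
r2, g2, b2 = scnr_average_neutral(0.2, 0.6, 0.3)
```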



General / DSLR RGB Ha Combination
« on: 2018 July 02 10:27:13 »
Hello all, I've been reading lots of PixInsight forum posts, and they have helped considerably over the last year of learning this platform. Lately I've been reading posts about combining DSLR Ha and RGB data (in particular this one by MikeOates). Mostly I'm looking for advice on getting the two data sets ready for combination.

My manual RGB data workflow (full-spectrum T3i with CLS-CCD clip-in filter, FITS file format):
-Calibration masters following Vincent's PI tutorial
-Calibrate lights ("calibrate" and "optimize" selected for the master dark, "detect CFA" selected)
-CosmeticCorrection using the master dark
-Debayer with VNG and RGGB
-SubframeSelector process module (not the script), generating the FITS weighting keyword
-Registration using the best image from the previous step, with "generate drizzle data" selected
-LocalNormalization using the same reference image as above, default settings
-ImageIntegration of registered lights + local normalization + drizzle files; "generate drizzle data" and "evaluate noise" enabled; average combination with local normalization and FITS keyword weights; linear fit clipping rejection with local normalization, clipping low and high pixels
-DrizzleIntegration: add drizzle and local normalization files, enable CFA drizzle, default settings
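As an aside, my mental model of what the noise-evaluation weighting in ImageIntegration is doing conceptually is inverse-variance weighting of the frames. A toy numpy sketch (per-channel noise estimation and pixel rejection omitted; the array names and numbers are just for illustration):

```python
import numpy as np

def weighted_integrate(frames, noise_estimates):
    """Average-combine registered frames with inverse-variance weights,
    so lower-noise frames contribute more (rejection omitted)."""
    frames = np.asarray(frames, dtype=float)
    w = 1.0 / np.square(np.asarray(noise_estimates, dtype=float))
    w /= w.sum()                            # normalize weights to sum to 1
    return np.tensordot(w, frames, axes=1)  # weighted sum over the frame axis

# Three tiny 2x2 'frames'; the cleanest frame dominates the result.
frames = [np.full((2, 2), 1.0), np.full((2, 2), 1.2), np.full((2, 2), 0.8)]
result = weighted_integrate(frames, noise_estimates=[0.01, 0.05, 0.05])
```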

The questions come in at the Ha preprocessing phase. Some posts suggest using the same workflow as above, registering the Ha master with the RGB master, and then using ChannelExtraction to pull out the red channel for use with the NBRGBCombination script (200% RGB and Ha scale 4). This is a pretty simple approach, but my calibrated Ha frames look like the master flat has been under- or over-applied (dark corners, dark circle in the center of the frame). However, the extracted red channel looks pretty clean, so maybe it's not an issue. I've also had some issues registering the Ha master to the RGB master where some of the Bayer pattern appears to be visible in parts of the image (and, oddly, the pattern changes based on the zoom level).

Another option, offered by Mike Oates, is to start with the Debayer and ChannelExtraction steps, then proceed with calibration as in the above workflow, minus the subsequent debayer and channel extraction steps. Mike uses the SuperPixel debayer, which I have not used before, and doesn't mention drizzle integration. This is where my first question comes in: if using the SuperPixel debayer option, can I still do a drizzle integration? A bigger question, perhaps: given a sufficient number of sufficiently dithered images, are there any processes or processing steps that would conflict with or rule out drizzle integration?
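For what it's worth, SuperPixel debayering is simple to picture: each 2x2 RGGB cell collapses into one RGB pixel at half the linear resolution, with no interpolation at all. A minimal numpy sketch, assuming an RGGB mosaic:

```python
import numpy as np

def superpixel_debayer_rggb(cfa):
    """Collapse an RGGB CFA mosaic (H x W, even dims) into an RGB image
    of shape (H/2, W/2, 3): R as-is, G averaged from both greens, B as-is."""
    r = cfa[0::2, 0::2]
    g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])
    b = cfa[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

mosaic = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 CFA frame
rgb = superpixel_debayer_rggb(mosaic)              # shape (2, 2, 3)
```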

Mike's method also mentions rescaling, resampling, oversampling, and downsampling. These are terms I haven't yet come across in my basic image-processing steps. Are some of them interchangeable? How do I know when I need to change an image's scale to match the RGB image's scale? This appears to be necessary after using the SuperPixel debayer option, correct? When I register the master images, is the scaling problem taken care of by the Auto setting of Pixel Interpolation in StarAlignment?
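As I understand the terminology: resampling is the general operation of changing an image's pixel dimensions, up/down/oversampling are directions of it, and "rescaling" here just means matching one image's scale to another's. A SuperPixel-debayered Ha master has half the linear dimensions of a VNG-debayered RGB master, so it needs a 2x resample somewhere. A toy sketch of the crudest possible 2x resample, nearest-neighbor replication (real tools use proper interpolation):

```python
import numpy as np

def upsample_2x_nearest(img):
    """Double the linear dimensions by pixel replication (nearest neighbor)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = upsample_2x_nearest(small)  # shape (4, 4)
```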

I've seen some reference to adding SplitCFA in the workflow somewhere. Should I be using this instead of Channel Extraction?
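From what I've read, SplitCFA doesn't debayer at all: it slices the mosaic into four quarter-size monochrome sub-images, one per CFA position (for RGGB that would be R, G1, G2, B), which can then be processed as mono frames. Roughly:

```python
import numpy as np

def split_cfa_rggb(cfa):
    """Split an RGGB mosaic into four (H/2, W/2) monochrome sub-images,
    one per CFA position (my assumed ordering: R, G1, G2, B)."""
    return (cfa[0::2, 0::2],  # R
            cfa[0::2, 1::2],  # G1
            cfa[1::2, 0::2],  # G2
            cfa[1::2, 1::2])  # B

mosaic = np.arange(16, dtype=float).reshape(4, 4)
r, g1, g2, b = split_cfa_rggb(mosaic)  # each has shape (2, 2)
```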

Does it matter if I register the master Ha to the master RGB (or RGB reference frame), or should I register the individual calibrated Ha frames to the RGB reference image before integrating them?

Due to some poor use of the project save feature on my part, I lost all my work on my latest HaRGB DSLR image when the system crashed during Deconvolution :(. So I thought I would ask these questions before I begin the next run. What say you, PixInsight gurus? What method will make the best use of the Ha data without adding a bunch of extra noise to my RGB image?

Attached are some low-res JPG screen grabs of the RGB and Ha images I want to combine (I couldn't figure out the insert image button). The RGB has been processed and is non-linear, while the Ha image has not. I show three zoom levels to illustrate the artifact that looks like it might be the Bayer pattern; it changes based on the zoom level. I can add a Dropbox link if anyone wants some raw data.

