Show Posts



Messages - monkeybird747

16
Thanks, Rob. Incidentally, in reading the specs on this filter, it also boasts "near perfect color balance" for a modified DSLR. For me there is still a significant blue-green color cast when using this filter. It's easily fixed during PCC, but I'm still not sure what their idea of near-perfect balance is, or whether it's my understanding of what they mean that is flawed.

Here is a link to my latest M31 image using PCC. There were no color tweaks other than increased saturation on the total image. I was hoping to get close to what Juan refers to as "documentary" color.

https://astrob.in/366999/0/

17
General / Re: Image Stacking after a meridian flip
« on: 2018 September 17 11:41:13 »
I use auto meridian flips and never move/rotate the camera after a flip. I take flats after the imaging session is complete and feed it all into PI without even thinking about it. Works great! But yeah, if you rotated your camera independently of the OTA after the flip, you'd have to do as above with your flats. I think star alignment takes a little longer with images that are 180° out from each other, but PI powers through with no problem. (And yeah, it's wicked cool!)
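For what it's worth, the reason the flip doesn't matter for the flats is just the order of operations. Here's a toy Python sketch of my understanding (not PI's actual code): every light is divided by the same flat while still in sensor orientation, and only afterwards does registration rotate the flipped frames.

Code:
import numpy as np

def calibrate_then_register(lights, master_dark, master_flat, flipped_flags):
    # Toy pipeline order: calibration happens in sensor orientation, so one
    # flat serves both pre- and post-flip frames; the 180 degree rotation
    # only enters at the registration step afterwards.
    registered = []
    for light, flipped in zip(lights, flipped_flags):
        cal = (light - master_dark) / master_flat   # calibration (sensor frame)
        if flipped:
            cal = np.rot90(cal, 2)                  # 180 degree flip, stand-in for StarAlignment
        registered.append(cal)
    return registered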

18
General / Re: HDR - Individual sets against fully stacked version
« on: 2018 September 17 11:34:07 »
I just finished an HDR composition of Andromeda a few days ago. The tutorial I read (LVA) had you calibrate, integrate, and DBE the individual exposure sets, then use the HDRComposition routine on the resulting integrated images. In my case I only had 300s and 60s frames, so I used HDRComposition on the two fully preprocessed and integrated images. I suppose in this fashion you would lose the benefit of stacking more images and the associated gains in SNR; I'm not 100% sure on that part, though. For a target like this that may not be much of a consideration.

For Andromeda I had to significantly reduce the Binarizing setting to get the amount of masking of the core I wanted. The default setting only masked a tiny dot at the center of the core, and I wanted to replace more of the saturated core with the 60s exposures. I also bumped the smoothness up to 20 or so. All this had the effect of including a handful of the most saturated stars in the mask, replacing them with the less saturated versions from the 60s exposures. This was my first attempt at this process, so please take it with a grain of salt. After that came photometric color calibration using the Sb galaxy profile. I saved HDRMultiscaleTransform for near the end of processing. Final version linked below.

https://astrob.in/366999/0/
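For anyone curious what those settings are actually doing, here is a rough Python sketch of my understanding (an illustration of the idea only, not the real HDRComposition algorithm, which fits its own scaling factors): binarize the saturated core of the long exposure into a mask, smooth the mask, and blend in the linearly scaled short exposure.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_combine(long_img, short_img, exposure_ratio=5.0,
                saturation=0.95, smoothness=3.0):
    # Illustrative HDR blend for two linear masters (e.g. 300s and 60s,
    # so exposure_ratio = 5). Not PixInsight's implementation.
    # 1. Binarize: mask pixels where the long exposure is (nearly) saturated.
    #    Lowering 'saturation' masks more of the core, much like lowering
    #    the Binarizing setting did for me.
    mask = (long_img >= saturation).astype(float)
    # 2. Smooth the mask so the transition between exposures is not a hard edge.
    mask = gaussian_filter(mask, sigma=smoothness)
    # 3. Scale the short exposure to the same linear flux as the long one.
    short_scaled = short_img * exposure_ratio
    # 4. Blend: saturated regions come from the scaled short exposure.
    return long_img * (1.0 - mask) + short_scaled * mask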

19
I’m not sure where I got the UV part. It’s not mentioned specifically on the Astronomik website. For some reason I thought this was part of the filter, and needed for a fully modified DSLR. What you say about the statement referring to pass filters makes sense though.

20
In reading release notes from Juan I noticed this statement:

“We cannot expect any robust color representation when using narrowband filters, or filters located in the UV or IR wavelength ranges. ”

Would this apply to using a clip-in CLS-CCD light pollution filter? I believe it has a UV filter component.

Someone else asked if adjusting saturation evenly across all channels effectively negates the PCC process, but that thread is a year old and unanswered.

Finally, is SCNR necessary if you use PCC? I’m currently applying it prior to PCC.
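For reference, my understanding of what SCNR's average-neutral option does is roughly the following (an illustrative sketch only, not PixInsight's actual code): green is clamped to the mean of red and blue, blended by the amount parameter.

Code:
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    # Illustrative SCNR green removal with average neutral protection.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_protected = np.minimum(g, 0.5 * (r + b))   # clamp green to mean of R and B
    out = rgb.copy()
    out[..., 1] = g * (1.0 - amount) + g_protected * amount
    return out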

Thanks,

MB747

21
General / Re: Inserting Ha into an RGB Image
« on: 2018 July 14 09:48:07 »
I've run into this where my Ha data is not drizzled because of a lower sub count than my OSC master, which is drizzled. I've only done it a few times, but I did resample the Ha to match the RGB before star alignment. Plus there was a superpixel debayer involved with the Ha. I'll try just the star alignment next time to see what happens.

22
General / Re: RGB star color preservation with NB
« on: 2018 July 13 06:59:40 »
There is this tutorial that uses a good star mask and MorphologicalTransformation to erode/remove the unwanted stars from the NB image.

http://www.lightvortexastronomy.com/tutorial-reducing-star-sizes.html#Section3

You could try that before combining using one of the above-mentioned methods. I'm going to try this on my next processing attempt with some OSC + Ha data. My first attempt shows some of what you are talking about, i.e. some star color loss that is hard to get back. Here is a screenshot showing, from left to right, the RGB fully processed, the Ha integration, and the RGB+Ha fully processed.
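If it helps, the core of the idea can be sketched in a few lines of Python (my own illustration under simple assumptions, not the tutorial's exact MorphologicalTransformation settings): erode the image, but only blend the eroded result in where the star mask is bright, so nebulosity is left alone.

Code:
import numpy as np
from scipy.ndimage import grey_erosion

def shrink_stars(nb_img, star_mask, iterations=2, size=3):
    # star_mask is assumed to be 0..1, bright over stars only.
    out = nb_img.copy()
    for _ in range(iterations):
        eroded = grey_erosion(out, size=(size, size))       # morphological erosion
        out = out * (1.0 - star_mask) + eroded * star_mask  # apply through the mask
    return out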




23
General / Re: Inserting Ha into an RGB Image
« on: 2018 July 13 06:32:28 »


Quote from: rob
"if the Ha requires upsizing then the two images have not been registered to one another."

This touches on a similar question I asked in a similar thread. So registration does indeed take care of differences in scale between the two images to be registered? I've seen other workflows that mention both resampling and star alignment, but it seems one would negate the other in terms of image scale.
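The way I picture it (a rough sketch of my understanding, not what StarAlignment literally does internally): the transform fitted between matched star positions already contains a scale term along with rotation and translation, so a separate Resample before registration is redundant as far as image scale goes.

Code:
import numpy as np

def fit_affine(src_stars, dst_stars):
    # Least-squares affine transform mapping matched star positions in the
    # Ha frame (src) to the RGB frame (dst). The fitted matrix encodes
    # rotation, translation AND scale.
    src = np.asarray(src_stars, dtype=float)
    dst = np.asarray(dst_stars, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])         # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)     # 3x2 affine coefficients
    scale = np.sqrt(abs(np.linalg.det(coeffs[:2, :2])))  # recovered scale factor
    return coeffs, scale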

24
General / Re: Inserting Ha into an RGB Image
« on: 2018 July 13 06:05:09 »
There is also the NBRGBCombination script. I use it for OSC and Ha combination.

25
General / Re: DSLR RGB Ha Combination
« on: 2018 July 12 19:33:11 »
Thanks for the reply, Alex. That's an interesting take on drizzling. In my limited time in this field I've been working under the assumption that drizzling is well justified for DSLR images, provided sufficient dithering was used during capture.

For the image below I used Mike Oates' technique for the Ha image, and then combined using the NBRGBCombination script while the data was still linear. I was not able to use CFA drizzle following Mike's steps, but I did do a normal drizzle, despite the fact that I didn't have enough images to justify its use for the Ha data (more of an experiment). The results were OK, but I don't know if I really gained anything by debayering and splitting channels first, as opposed to processing normally and then splitting channels after integration. My instrument may not be fine enough to detect the difference.
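To show what I mean by the combination step, here is the general idea in Python (a simplified illustration only; the NBRGBCombination script has its own scale parameters and formula): the Ha signal above its own background is added into the red channel with some weight.

Code:
import numpy as np

def blend_ha_into_red(red, ha, weight=0.5):
    # Illustrative blend only: inject Ha signal above its background into R.
    ha_excess = np.clip(ha - np.median(ha), 0.0, None)
    return np.clip(red + weight * ha_excess, 0.0, 1.0)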

Here is a side-by-side, from left to right, of the fully processed RGB (I had to fix some star cores later), the integrated Ha, and the fully processed combination of the two. Feedback welcome, although it's a low-res screenshot. I think the RGB can stand on its own, but it did gain significant detail from the Ha channel.


26
General / Re: Mapping HA into R Channel
« on: 2018 July 04 11:13:13 »
Farzad, I've been using the NBRGBCombination script. Have you tried that for getting Ha into Red Channel?

27
General / DSLR RGB Ha Combination
« on: 2018 July 02 10:27:13 »
Hello all, I've been reading lots of PixInsight forum posts and they have helped considerably over the last year of learning this platform. Lately I've been reading posts about combining DSLR Ha and RGB data (in particular this one by Mike Oates: https://pixinsight.com/forum/index.php?topic=5748.0). Mostly I'm looking for advice on getting the two data sets ready for combination.

My manual RGB workflow (full-spectrum T3i with CLS-CCD clip filter, FITS file format):
-Calibration masters following Vincent's PI tutorial: https://www.pixinsight.com/tutorials/master-frames/index.html
-Superbias
-Calibrate lights (calibrate and optimize master dark selected, detect CFA selected)
-Cosmetic Correction using the master dark
-Debayer with VNG and RGGB
-Subframe Selector process module (not the script), generating the FITS weighting keyword
-Registration using the best image from the previous step (generate drizzle data selected)
-Local Normalization using the same reference image as above (default settings)
-Image Integration of registered lights + local normalization + drizzle files (generate drizzle data, evaluate noise; average combination, local normalization, FITS keyword weights; linear fit rejection with local normalization, clipping low and high pixels)
-Drizzle Integration (add drizzle and local normalization files, enable CFA drizzle, default settings)

The questions come in at the Ha preprocessing phase. Some posts suggest using the same workflow as above, registering the Ha master to the RGB master, and then using Channel Extraction to extract the red channel for use with the NBRGBCombination script (200% RGB and Ha scale 4). This is a pretty simple approach, but my calibrated Ha frames look like the master flat has been under- or over-applied (dark corners, dark circle in the center of the frame). However, the extracted red channel looks pretty clean, so maybe it's not an issue. I've also had some issues registering the Ha master to the RGB master, where some of the Bayer pattern appears to be visible in parts of the image (and oddly the pattern changes based on the level of zoom).

Another option, offered by Mike Oates, would be to start with the Debayer and Channel Extraction steps, then proceed with calibration as in the above workflow, minus the subsequent debayer and channel extraction steps. Mike uses the Superpixel debayer, which I have not used before, and doesn't mention drizzle integration. This is where my first question comes in: if using the Superpixel debayer option, can I still do a drizzle integration? Perhaps a bigger question: if one has a sufficient number of sufficiently dithered images, are there any conflicting processes or processing steps that would prevent someone from using drizzle integration?

Mike's method also mentions rescaling, resampling, oversampling, and downsampling. These are terms I haven't yet come across in my basic image processing steps. Are some of them interchangeable? How do I know when I need to change an image's scale to match the RGB image scale? This appears to be necessary after using the Superpixel debayer option, correct? When I register the master images, is the scaling taken care of by the Auto setting of Pixel Interpolation in StarAlignment?
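To make the scale question concrete, here is how I picture it, assuming an RGGB pattern (a rough sketch, not Mike's actual steps): Superpixel collapses each 2x2 CFA cell into one pixel, so the Ha master comes out at half the linear resolution of the VNG-debayered RGB master and needs a 2x resample, or the registration's own scaling, to match.

Code:
import numpy as np

def superpixel_red(cfa):
    # Illustrative Superpixel red channel for an RGGB sensor: keep the R pixel
    # of each 2x2 cell, giving a half-width, half-height image.
    return cfa[0::2, 0::2]

def upsample_2x(img):
    # Crude 2x nearest-neighbour resample so the superpixel Ha master matches
    # the scale of a normally debayered RGB master (PI's Resample at 200% with
    # proper interpolation would be the real-world equivalent).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)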

I've seen some reference to adding SplitCFA in the workflow somewhere. Should I be using this instead of Channel Extraction?

Does it matter if I register the master Ha to the master RGB (or RGB reference frame), or should I register the individual calibrated Ha frames to the RGB reference image before integrating them?

Due to some poor use of the project save feature on my part, I lost all my work on my latest HaRGB DSLR image when the system crashed during Deconvolution :(. So I thought I would ask these questions before I begin the next run. So what say you, PixInsight gurus? What method will allow me to make the best use of the Ha data without adding a bunch of extra noise to my RGB image?

Attached are some low-res JPG screen grabs of the RGB and Ha images I want to combine (couldn't figure out the insert image button). The RGB has been processed and is non-linear, while the Ha image has not. I show three zoom levels to show the artifact that looks like it might be the Bayer pattern; it changes based on the level of zoom. I can add a Dropbox link if anyone wants some raw data.


MB
