Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - mmnb

Pages: [1] 2 3 4
PCC = Photometric Color Calibration

Did you also LinearFit channels before combining into RGB?
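For anyone unfamiliar with it, LinearFit matches the signal scale of one channel to a reference by fitting a linear function between them and rescaling. A minimal numpy sketch of the idea (the function name and toy data are mine, not PixInsight's):

```python
import numpy as np

def linear_fit_channel(target, reference):
    """Fit reference ≈ a*target + b by least squares and return the rescaled
    target, so its signal level matches the reference channel.  A rough
    sketch of the idea behind PixInsight's LinearFit, not its implementation."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

# toy example: a green channel that is exactly half as bright as red
rng = np.random.default_rng(0)
red = rng.uniform(0.1, 0.9, size=(64, 64))
green = 0.5 * red
matched = linear_fit_channel(green, red)
print(np.allclose(matched, red))
```

Doing this before channel combination avoids a strong cast (like the blue cast mentioned elsewhere in this thread) dominating the combined RGB image.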

Some basic workflow on lightvortex:

EDIT: Maybe just to make sure, it might be worth detailing the steps you took to get the image, just in case there is a mistake in there. E.g. did you calibrate with flats?

Just a quick sanity check, can you post the RGB image (not LRGB) after PCC? Is there any color contrast in it?

Best I could do.  Deconvolution (really not sure if it did a whole lot), denoising, masked stretch, HDRMT (seemed to work like magic), and saturation adjustments (tried to really punch red selectively for the red hydrogen filaments).  Is this way off in any way?  The weak signal in the background has some shapes that line up with the IFN in better data; I tried to be honest in keeping a little contrast, but it is very faint.

Would love to see if someone else can do much better.

Ah, one important thing that I missed (and was clearly stated in the article) was that you need to check the deconvolution result with stretching.  After applying MaskedStretch to the deconvolved image I can see some effects that aren't visible with the default STF.  Nonetheless, the enhancement to the image seems relatively small to me in this case.
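The reason a stronger stretch reveals more: STF-style stretches apply a midtones transfer function that maps a chosen midtones balance m to 0.5, so a smaller m lifts faint differences much more aggressively. A small sketch (the toy pixel values are mine; this is the standard MTF formula, not the exact MaskedStretch algorithm):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps pixel value m to 0.5 while keeping
    0 -> 0 and 1 -> 1.  Smaller m means a stronger stretch."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

faint = np.array([0.001, 0.002, 0.004])   # faint linear pixel values
default = mtf(faint, 0.25)   # mild stretch, like a gentle default STF
strong = mtf(faint, 0.01)    # aggressive stretch for inspection
print(default)
print(strong)   # the same pixel differences become far easier to see
```

Inspecting the deconvolution result under the aggressive setting is what makes small residual-ringing or enhancement effects visible.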

Took some images with my C14 Hyperstar + EOS 60Da (32x30s IIRC) and am trying to take the drizzled and PCC'd linear image through deconvolution, denoising and stretching. I think I am reasonably proficient with star masks now (I do miss a few double stars... it seems like a lot of trouble to refine the mask to get those last bits; is this typical?) and tried to create an appropriate deringing support as advised here:

No matter what I do, I can't seem to improve the image with deconvolution.  I can see "too much", where it looks like the image is being stretched around noise or next to nothing, and I can't see the same sorts of visual contrasts that are in the tutorial.  One important issue:

When I create the external PSF, the star profile is not round; a brutally honest reminder that the collimation is off (I've been trying to fix it, but having a hard time).  Does the poor collimation affect how deconvolution will perform? E.g. will it try to model the collimation error in the underlying model that is based on the atmosphere?
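On the PSF question: an external (measured) PSF does capture the elongation from collimation error, whereas a purely atmospheric parametric model assumes a round profile. A toy numpy sketch (all names, sizes and parameters are illustrative, not PixInsight's implementation) of Richardson–Lucy deconvolution, one of the algorithms PixInsight's Deconvolution offers, using an elongated Gaussian PSF:

```python
import numpy as np

def elliptical_gaussian_psf(radius, sx, sy):
    """Elongated Gaussian PSF; unequal sx/sy mimics collimation error."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = np.exp(-(x**2 / (2 * sx**2) + y**2 / (2 * sy**2)))
    return psf / psf.sum()

def convolve_fft(img, psf):
    # circular convolution via FFT, with the PSF centered at the origin
    k = psf.shape[0]
    pad = np.zeros_like(img)
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, iters=30):
    """Minimal Richardson–Lucy iteration (no regularization, no deringing)."""
    est = np.full_like(blurred, 0.5)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / np.maximum(convolve_fft(est, psf), 1e-12)
        est *= convolve_fft(ratio, psf_flip)
    return est

# toy scene: two point "stars" blurred by the elongated PSF
scene = np.zeros((64, 64)); scene[30, 30] = 1.0; scene[34, 40] = 0.7
psf = elliptical_gaussian_psf(7, sx=3.0, sy=1.5)
blurred = np.maximum(convolve_fft(scene, psf), 0.0)
restored = richardson_lucy(blurred, psf)
```

The point of the sketch: RL sharpens with respect to whatever PSF you hand it, so an elongated measured PSF will deconvolve the elongation too, while a round synthetic PSF would leave the collimation error untouched.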

Are there any other image issues that stand out? Note I only correct with bias and flat due to the way the Canon does darks.

I've included masks in the zip file (1G):

Edit: Added a screenshot of a blind masked stretch and HDRMT.  Nice to see that there is some comparable detail in the data... I wouldn't know to do much more than pre/post denoising in the dark areas and a little bit of saturation and brightness adjustment.  Should I be able to do much better with this data? I swear I see a bit of the IFN when I compare to other pictures, but it is quite faint and I'm not sure if I can successfully pull it out.

Gallery / Re: coma cluster - C14 Hyperstar EOS 60Da
« on: 2019 July 17 09:03:53 »
Gah.  That collimation error is really bugging me too, good to know that it is so clear to others (sometimes the PI eccentricity readings don't look too bad).

Really tried to fix (using circular baffle on bright star etc.) but couldn't get anywhere, looks like I'll have to try again.

Gallery / coma cluster - C14 Hyperstar EOS 60Da
« on: 2019 July 16 17:14:12 »
This is what I did with 32x30s exposures of the Coma Cluster.  Standard preprocessing (no darks, because darks are hard to measure with the Canon camera) up to PCC.  Just did two gentle passes of denoising with MultiscaleLinearTransform; I tried using a blurry luminance mask, but I could always see a "noise seam" in the galaxies, so I denoised without masking.  Adaptive stretch and some curve bumping in saturation and lightness.  What I focused on was the face-on spiral, trying to be sure I wasn't losing the "S" in my processing.  Does anyone see any obvious problems in the image?

Great info. I did notice that the lines in the PCC graphs were somewhat uncompelling fits.  Thank you for taking the time to look at the image and articulate this important advice.

Too much green in the first one. Nudged G and B curves, just a bit brighter too.

Finally have my hyperstar setup working. My stars suffer from some eccentricity, hopefully will be resolved after tuning collimation.  I also had some backlash in my focusing that must've affected things.

Not sure how I did here.  After calibration and PCC, I mostly did some very gentle denoising, deconvolution, stretching, and really just bumped the saturation curve.  Other images have better hue contrast between the core and arms, but somehow this looks OK to me.  I noticed a bad blue cast after integration, which I fixed with LinearFit.  Not sure how that happened (blue was weaker in the flats?). I don't think I lost any color contrast, but I'm a little worried, since MaximDL's RAW Color images (automatically debayered and balanced) seem to suggest I would see more.

Would love any constructive crit.  500MB linear xisf after drizzle integration and PCC:

General / Overscan newb questions
« on: 2018 October 15 18:11:08 »
I just got a camera (QHY 16200A) with overscan capabilities, wanted to check my understanding to see if I have this right:

Previously I created a master bias according to the Light Vortex tutorials with a camera that did not have overscan.  I should be able to use the overscan capability to do better bias correction for an image.  Because the bias depends on the state of the (non-cooled) components in the camera, overscan provides better correction than a master bias frame averaged from a bunch of rapidly acquired bias images.

The bias for a given image is determined by taking the overscan region, averaging the columns, and fitting a smooth representation of the bias *along* the columns.  This is done for *all* images (lights, darks, flats).  The master bias (now the zero frame) is also corrected this way.  The zero frame is still subtracted from lights, darks and flats the way the original master bias would have been.  All of this is done simply by specifying the overscan regions in BatchPreProcessing.

Is that right?  I am not sure what the 'target' region is in the overscan region specification.  Does using the overscan region provide materially better accuracy than simply making a master bias out of a quickly acquired set of bias frames?
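The column-average-and-fit procedure described above can be sketched as follows (the function name, polynomial order, and toy frame are my assumptions, not PixInsight's actual implementation):

```python
import numpy as np

def overscan_correct(frame, oscan_cols, deg=3):
    """Average the overscan columns row by row, fit a low-order polynomial
    to that profile along the rows, and subtract the smooth bias estimate
    from every column of the frame."""
    per_row = frame[:, oscan_cols].mean(axis=1)   # one bias value per row
    rows = np.arange(frame.shape[0])
    smooth = np.polyval(np.polyfit(rows, per_row, deg), rows)
    return frame - smooth[:, None]

# toy frame: 100x100 image area at level 100, plus 10 overscan columns
# holding a bias that drifts linearly with row number
bias = 10.0 + 0.05 * np.arange(100)
frame = np.zeros((100, 110))
frame[:, :100] = 100.0 + bias[:, None]
frame[:, 100:] = bias[:, None]
corrected = overscan_correct(frame, oscan_cols=slice(100, 110))
print(np.allclose(corrected[:, :100], 100.0))   # signal level recovered
```

Because the per-frame overscan tracks the drift (temperature, electronics state) at the moment each frame was taken, it corrects variation that a single averaged master bias cannot.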

Here is how I did by the end.  Some problems during acquisition:

-Temp was high (the SBIG 402ME could only get to -10C for most of it)
-Moonlight was bad giving a high background
-Focuser screws came loose, causing a change in camera angle

I just started autofocusing between subs, and my stars look a lot better than previous attempts.

This is the image using just RGB and excluding L.  There is a lot of data in L (19 exposures, 5 min each), but no amount of curve adjustment etc. would make the central galactic bulge 'fade out' to the background... the change was always abrupt.  Simply taking the PCC'd RGB image and doing a bit of curve adjustment got a much better result (where the bulge looks more correct).



If anyone can do much better with the data, I'd love to know how you did it.  I presume the thin profile makes it difficult to do a whole lot.

I was able to replicate success by running ImageSolve first with the parameters in the earlier screenshot, and then PCC works (I didn't realize that ImageSolve's coordinates stick to the image and that PCC will use them).
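On "coordinates stick to the image": ImageSolve writes the astrometric solution into the image header as standard WCS keywords (CRVAL1/CRVAL2, CRPIX1/CRPIX2, etc.), which PCC then reads. A minimal stdlib sketch of checking a raw FITS header block for them (real code should use a proper FITS library; the toy header below is fabricated):

```python
def fits_header_cards(header_bytes):
    """Split a raw FITS header block into 80-character cards and return a
    dict of keyword -> value string."""
    cards = {}
    for i in range(0, len(header_bytes), 80):
        card = header_bytes[i:i + 80].decode("ascii")
        key = card[:8].strip()
        if key == "END":
            break
        if card[8:10] == "= ":
            cards[key] = card[10:].split("/")[0].strip()
    return cards

def has_astrometric_solution(cards):
    """PCC needs the plate solution that ImageSolve stores in the header."""
    return all(k in cards for k in ("CRVAL1", "CRVAL2", "CRPIX1", "CRPIX2"))

# toy header with a (hypothetical) solution, built as 80-byte FITS cards
raw = b"".join(f"{k:<8}= {v:<20}{'':50}".encode() for k, v in
               [("CRVAL1", "194.95"), ("CRVAL2", "27.98"),
                ("CRPIX1", "512"), ("CRPIX2", "512")]) + b"END" + b" " * 77
print(has_astrometric_solution(fits_header_cards(raw)))
```

This also explains the earlier failure: a screenshot or a processed image whose header lacks these keywords gives PCC nothing to work with.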

(thanks to everyone for looking at this)

Odd. I think I've copied the PCC settings faithfully, and it still fails.

I should have made the xisf link clearer; apologies.  I didn't realize that the PinPoint routine is robust enough to work at 8-bit depth with the screenshot.

There is no header, as the RGB image is the output of several steps; I didn't think the header "carries" as you process. A link to an original raw image (that has coordinates etc.):
