Show Posts


Messages - joelkuiper

Pages: [1]
Wish List / Sensible presets for subframe selector
« on: 2019 January 31 13:03:10 »
Now this might be somewhat outside the philosophy of PixInsight, but I often find myself sorting and copy-pasting values from the (amazing!) new SubframeSelector into the tool provided here to get some reasonable expressions. It would save me a ton of time if it were possible to use variables in the formulas (like the min/max FWHM from the measured output) so I can save a generic formula, or better yet, have some of these expressions preloaded as selectable defaults.
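To illustrate the idea, here is a minimal Python sketch of how a generic formula template could be turned into a concrete approval expression from the measured statistics. The slack factors and the way thresholds are derived are my own assumptions, not anything SubframeSelector actually provides:

```python
# Sketch: derive a concrete SubframeSelector-style approval expression
# from measured subframe statistics, so only the generic formula
# template needs saving. Slack factors are illustrative assumptions.

def approval_expression(fwhm_values, ecc_values, fwhm_slack=1.2, ecc_slack=1.15):
    """Build an expression like 'FWHM < minFWHM*slack && Eccentricity < ...'."""
    fwhm_limit = min(fwhm_values) * fwhm_slack
    ecc_limit = min(ecc_values) * ecc_slack
    return f"FWHM < {fwhm_limit:.2f} && Eccentricity < {ecc_limit:.2f}"

# Hypothetical measured values from a session:
expr = approval_expression([2.1, 2.4, 3.0, 2.2], [0.42, 0.55, 0.48, 0.44])
print(expr)  # FWHM < 2.52 && Eccentricity < 0.48
```

The point is that only the slack factors would need to be stored as a preset; the min/max values would come from the measurement run itself.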

Gallery / Re: NGC 691 & Friends (a not often seen pair)
« on: 2017 November 11 01:09:52 »
Joel, that's impressive sharpness and small stars! Large aperture? What's your imaging resolution?

Thanks! It's the QHY23 on a 12" Newtonian with an ASA reducer, which gives a pixel scale of about 0.628 arcsec/pixel. The M110 image was drizzle-upsampled by 2x, so that's roughly half (0.314 arcsec/pixel). The Newtonian is slightly out of collimation, though, so we'll have to fix that. The stars were reduced a bit using MorphologicalTransformation with a star mask to hide some of the miscollimation (in general I don't really like doing that, but in this case it worked out).
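For reference, the pixel scale follows directly from pixel size and effective focal length. A quick check in Python, assuming the QHY23's 3.69 µm pixels and an effective focal length of about 1212 mm with the reducer (both values back-computed from the quoted scale, so treat them as assumptions):

```python
# Pixel scale in arcsec/pixel: 206.265 * pixel_size_um / focal_length_mm.
# Pixel size and effective focal length below are assumptions.
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

native = pixel_scale(3.69, 1212)   # ~0.628 arcsec/pixel
drizzled = native / 2              # 2x drizzle upsampling halves the scale
print(round(native, 3), round(drizzled, 3))  # 0.628 0.314
```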

Gallery / NGC 691 & Friends (a not often seen pair)
« on: 2017 November 10 04:57:31 »


Using PhotometricColorCalibration. The framing is a bit off, I think, but there seem to be some nice redshifted galaxies in the background that might be worth exploring with a NIR filter.

Bonus, M110


Gallery / Re: MBM54 and NGC7497
« on: 2017 November 10 03:24:59 »
Wow, that's just amazing! Great job bringing out the dust lanes.

Gallery / Re: Pelican Nebula - HA 120min
« on: 2017 November 10 03:23:33 »
Amazing! I love the subtle contrasts, well done  :D

Adding quite a substantial pedestal (0.43) helped. I think I know what's going on: the QHY23 has two read modes, one for shorter exposures and one for longer exposures. This could possibly have changed the calibration. After adding the pedestal, both the gradient and the blot were effectively corrected. Still a strange thing!
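A small numpy sketch of the mechanism, under the assumption that the read-mode mismatch left a residual additive offset in the master flat: dividing by such a flat over-corrects, and a pedestal compensating the residual restores a uniform field. All numbers here are illustrative, not taken from the actual frames:

```python
import numpy as np

# Sketch: why an input pedestal can fix over-correcting flats.
# If the master bias subtracted from the flat doesn't match the flat's
# true offset (e.g. a different camera read mode), a residual additive
# offset remains in the master flat, and flat division over-corrects.

vignette = np.linspace(1.0, 0.7, 100)        # true multiplicative field
sky = 500.0 * vignette                       # ideal vignetted light signal
residual_offset = -200.0                     # offset error left in the flat
flat = 30000.0 * vignette + residual_offset  # miscalibrated master flat

bad = sky / (flat / flat.mean())             # over-corrects toward the edges
pedestal = -residual_offset                  # compensate the residual
fixed_flat = flat + pedestal
good = sky / (fixed_flat / fixed_flat.mean())

print(np.ptp(bad))   # clearly non-flat result
print(np.ptp(good))  # ~0: uniform field after the pedestal
```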

I have an odd problem: my flats seem to be overcorrecting and I can't seem to fix it. They are sky flats (morning twilight) taken on the same day as the lights.

However, when I apply them, they seem to somehow overcorrect the data.
I aimed for an ADU of 30k, but they're closer to 40k. They might be slightly overexposed, but not tremendously, I think?

Flat (XISF, dropbox)
Light (XISF, dropbox)

Any thoughts on what's wrong with my flats, and how to avoid this problem in the future (or ideally, how to somehow "fix" the flat)? I tried subtracting a fixed value from the flats (e.g. a pedestal), but I couldn't seem to get it right.

Note that this problem is /much/ worse with narrowband data, even though the flats were also shot with that filter.

SII Flat
SII Light

Full stack of flats (~700MB, very large!). And also the master bias, for good measure. For NB data it seems that working without flats is preferable to using them in this setup, but I'm stumped as to why.
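One hedged guess at why narrowband is hit harder: if the master flat carries a residual additive offset, the same absolute offset is a larger fraction of a dimmer narrowband flat, so the relative field error grows. A tiny Python sketch with made-up numbers (not measured from the posted data):

```python
# Sketch: the same residual additive offset in a master flat produces
# a much larger relative flat-fielding error for a dim narrowband flat
# than for a bright broadband flat. All numbers are illustrative.

def field_error(flat_level, offset, vignette_edge=0.7):
    """Relative flat-fielding error at the edge of the field."""
    center = flat_level + offset
    edge = flat_level * vignette_edge + offset
    measured_ratio = edge / center          # what the bad flat encodes
    return abs(measured_ratio / vignette_edge - 1.0)

broadband = field_error(30000.0, -200.0)    # bright twilight sky flat
narrowband = field_error(3000.0, -200.0)    # much dimmer SII flat, same offset
print(broadband, narrowband)                # NB error ~10x larger
```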


I've given this a little bit of thought, but as an initial pass it could be possible to:
- Extract stars and fit PSFs
- Measure the PSFs and apply a k-means clustering algorithm
- Fit a k-dimensional surface to indicate which PSF belongs to which part of the image; each dimension in k maps to the average PSF of that cluster
- Iteratively run deconvolution for each dimension (e.g. "layer"), using the fitted surface as a mask

So for a dimensionality of k=5, the PSFs extracted from the image would be clustered into 5 different buckets, and the average PSF of each bucket computed. Since you know which PSFs belong to a bucket, you know the coordinates of the stars that correspond to that PSF, so you can fit a surface (e.g. a 2D surface spline) indicating where that PSF belongs in the image. Then it's only a matter of applying deconvolution for each layer, masking out the areas that don't belong. To make sure each part of the image only gets deconvolved once, it probably requires unit normalization on the columns. Instead of fitting a surface for each layer, it might also be possible to segment the image, but I think that might introduce other artifacts.
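The clustering and masking steps above could be sketched as follows; this is a rough numpy-only illustration (all function names are hypothetical), clustering 1-D PSF widths and building per-cluster weight maps by inverse-distance weighting, with the actual per-layer deconvolution left out:

```python
import numpy as np

# Sketch of the clustering + layer-mask idea (names are hypothetical):
# 1) cluster measured PSF widths with a tiny 1-D k-means;
# 2) build one smooth weight map per cluster, via inverse-distance
#    weighting from that cluster's star positions;
# 3) normalize per pixel so every pixel is deconvolved exactly once.

def kmeans_1d(values, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def layer_masks(star_xy, labels, k, shape, eps=1e-3):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    masks = np.zeros((k,) + shape)
    for j in range(k):
        pts = star_xy[labels == j]
        d2 = ((xx[None] - pts[:, 0, None, None]) ** 2 +
              (yy[None] - pts[:, 1, None, None]) ** 2)
        masks[j] = (1.0 / (d2 + eps)).sum(axis=0)  # inverse-distance weight
    masks /= masks.sum(axis=0, keepdims=True)       # unit sum per pixel
    return masks

# Toy data: two PSF populations at opposite corners of a 64x64 frame.
fwhm = np.array([2.1, 2.2, 3.5, 3.6, 2.0, 3.4])
xy = np.array([[5, 5], [10, 8], [50, 55], [55, 60], [8, 12], [60, 50]])
labels, centers = kmeans_1d(fwhm, k=2)
masks = layer_masks(xy, labels, k=2, shape=(64, 64))
# Each layer would then be deconvolved with its cluster's average PSF,
# using masks[j] as the blend weight.
print(np.allclose(masks.sum(axis=0), 1.0))  # True: unit normalization holds
```

The per-pixel normalization in `layer_masks` plays the role of the "unit normalization on the columns" mentioned above; a surface spline would give smoother maps than inverse-distance weighting, but the bookkeeping is the same.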

... now, I'm not much of a mathematician, so this might make absolutely /no/ sense … but I'm curious about the idea.

EDIT: Of course this is a poor man's substitute for multichannel/multiframe blind deconvolution as described in Zhulina, Y. V. (2006). "Multiframe blind deconvolution of heavily blurred astronomical images." Applied Optics, 45, 7342–52. doi:10.1364/AO.45.007342 (and its references to similar techniques) … but for the method above I might have some idea of how to implement it :P

Gallery / Re: LDN 673
« on: 2017 October 06 08:35:15 »
Very nice! And not often photographed, so that's refreshing as well!

Gallery / Some of my exclusively PI edited frames
« on: 2017 October 03 04:58:54 »

Deer lick group
Bubble Nebula HaRGB
Bubble Nebula in SHO
NGC 6951 with IFN
Pelican Nebula (IC 5070) in SHO

I've only been doing this for a year and a half now … and boy, it's tricky. But I love PixInsight and would love to learn more of the features available!


Drizzle integration requires well-dithered, undersampled images, but methods exist for recovering (perceptual) sharpness using blind (multi-frame) deconvolution. This would be an absolutely amazing feature for PI, especially as lucky-imaging setups with CMOS cameras (moving beyond the classical dozen or so frames into the realm of thousands) become more common.

Looking forward to this! One thing I noticed is that sometimes the clipping parameters were decent for normal ImageIntegration but not when subsequently upsampling with DrizzleIntegration. Will this new setting integrate with the drizzle workflow?
