Show Posts


Messages - ChoJin

To close this topic, here are the results with better masking.

Thanks a lot for the help.

I'm only doing 10 iterations because I don't really like how the artefacts look with more iterations.

I'm attaching a crop: without deconvolution, with deconvolution using my old masking/settings, and with deconvolution using the new masking/settings.

I'm now indeed masking the background with a stretched/black-clipped version of the image, and I slightly adjusted the global dark parameter (going from 0.005 to 0.0065).
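For anyone wanting to experiment with this outside PixInsight, the stretched/black-clipped background mask can be sketched with NumPy. The function name and the default black point/midtones values below are illustrative, not the actual settings used:

```python
import numpy as np

def background_mask(lum, black_point=0.1, midtones=0.25):
    """Build a background-protection mask from a luminance image in
    [0, 1]: clip the shadows to black, then apply a midtones stretch so
    faint background stays dark while bright structure goes white."""
    x = np.clip((lum - black_point) / (1.0 - black_point), 0.0, 1.0)
    m = midtones
    # Midtones transfer function: fixes 0 and 1, maps m to 0.5.
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)
```

Applied to the luminance and used inverted, a mask like this keeps deconvolution away from the noisy background.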

Thanks again.

If I may, pushing this to a GitHub repository would make browsing the code easier (and it would be easier for you to push fixes/updates).

Gallery / Re: LRGB Images with Atik Horizon Mono Camera
« on: 2018 February 14 02:00:13 »
they look quite nice indeed :)

(do you have a photo of that giant telescope?  O:) )

Me again; I need your expertise/advice  O:)

Here are two larger fields of view, one without deconvolution and one with the 10 iterations (which seems to be close to the sweet spot for preventing too many spaghetti artefacts).

Given these two screenshots, would you keep the deconvolution and work from there, or would you skip/drop it? (And more generally, what do you think of the results I'm getting with my deconvolution?)

Deconvolution seems to slightly improve the results, but maybe I would get the same result with denoising and contrast adjustments later on, without the risk of added artefacts? What do you think?

I know I could fork the processing and try both with and without deconvolution, but my computer is really, really slow, so I'd like to avoid taking useless paths if possible ;-)

Wish List / connected component filtering
« on: 2018 February 12 01:17:40 »

One (morphology) feature which IMHO would be very useful for filtering star masks easily is a connected-component filter, i.e. one that removes connected components which are {smaller than, bigger than, within a range of} X pixels.

Playing with the wavelet layers to filter structures based on their size doesn't always give the results we'd hope for (e.g. leftovers of bigger structures), whereas if we generate a binary star mask, we could afterwards very easily filter it with a connected-component filter (size-based: smaller/bigger/range).
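The requested filter is straightforward to sketch with SciPy's labelling tools (function name and parameters are hypothetical, just to illustrate the idea):

```python
import numpy as np
from scipy import ndimage

def filter_components(binary_mask, min_size=None, max_size=None):
    """Keep only connected components whose pixel count lies within
    [min_size, max_size]; either bound may be None (unbounded)."""
    labels, n = ndimage.label(binary_mask)
    sizes = ndimage.sum(binary_mask, labels, index=np.arange(1, n + 1))
    keep = np.ones(n, dtype=bool)
    if min_size is not None:
        keep &= sizes >= min_size
    if max_size is not None:
        keep &= sizes <= max_size
    # Look-up table over labels; label 0 (the background) is always dropped.
    lut = np.concatenate(([False], keep))
    return lut[labels]
```

Removing specks below 5 pixels from a binary star mask would then be `filter_components(mask, min_size=5)`, while an upper bound drops the big leftover structures that wavelet-layer tweaks tend to miss.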

Yeah, maybe I shouldn't use deconvolution at all on M31 (big object, drizzle...)...
I'm not sure anymore, and I don't know how to objectively decide that from the data. Is there any metric I could use?
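One crude option (not an established metric, just a NumPy sketch) is to compare a sharpness proxy against a noise proxy before and after deconvolution:

```python
import numpy as np

def sharpness_and_noise(img):
    """Two crude figures of merit for a linear image: the mean absolute
    Laplacian as a sharpness proxy, and a robust (MAD-based) estimate
    of pixel-to-pixel variation as a noise proxy."""
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
           - 4.0 * img)
    sharpness = np.abs(lap).mean()
    diffs = np.diff(img, axis=1)
    noise = 1.4826 * np.median(np.abs(diffs - np.median(diffs)))
    return sharpness, noise
```

If the deconvolved image's sharpness rises substantially while the noise proxy barely moves, the deconvolution is arguably paying for itself; if both rise together, it's mostly amplifying artefacts.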

Thanks, I'll try playing with the global deringing parameter, though I'm afraid of lessening the deconvolution improvements (which are already quite small). I'll give it a try.

I'm already using a global mask to protect the cores of the very bright/saturated/big stars (the ones which tend to end up with donuts/dark cores). For general background protection I'm just using the wavelet regularization parameters, as advised in many tutorials.

As a side note, it might not be obvious from the screenshots, but the area I'm showing here is quite bright; it's very close to the main dark lanes of M31. A background mask therefore wouldn't help, I think.

General / Re: Pixel size / Photometric Color Calibration
« on: 2018 February 11 23:11:00 »
yes :)


I'm trying to apply deconvolution to my (synthetic, linear) L channel, right after DBE and color calibration, and I'm getting those famous spaghetti artefacts.
Please find attached three very close-up screenshots (the original resolution is w=10850, h=7210): one without deconvolution, one with 10 iterations, and one with 50 iterations.

As you can see with 50 iterations, there are spaghetti "donuts" around stars. With 10 iterations it's not that bad, I guess.

What I don't understand is why I'm getting those spaghetti artefacts and how to prevent them. Even with 20 iterations I'm already seeing them, whereas in most tutorials people run 40-50 iterations without issues. (I'm also far from getting the drastic improvements I see in most tutorials, but that's another matter. Maybe deconvolution doesn't like drizzle?)

I fine-tuned my parameters on small previews with 10 iterations: global deringing set to 0.005, local deringing enabled (the spaghetti are not coming from local deringing; I double-checked that I still get them with this feature disabled), and wavelet regularization enabled.
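The way ringing grows with the iteration count can be reproduced with a plain Richardson-Lucy implementation. This is a textbook sketch (assuming a float image and a PSF that sums to a positive value), not PixInsight's regularized algorithm:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10):
    """Plain (unregularized) Richardson-Lucy deconvolution. Ringing
    around bright stars grows with n_iter, which is why stopping early
    (e.g. ~10 iterations) can look cleaner than running 40-50."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    # Start from a flat estimate at the image's mean level.
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

Regularized variants (like the wavelet regularization mentioned above) damp exactly the noise amplification that makes the plain version ring.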

Maybe I'm too picky... After all, given the resolution of the image, when I don't zoom in that much I can't see those artefacts at all. But I'm afraid they will get amplified later on during the non-linear steps, and maybe they would show if I print the image at some point?

Anyway, any help/hints/thoughts on the matter would be appreciated.

General / Re: Pixel size / Photometric Color Calibration
« on: 2018 February 11 08:56:50 »
I think you should, and with Drizzle it's the opposite.

General / Re: color cast with Photometry Color Calibration and M31
« on: 2018 February 10 13:01:30 »
Somehow, disabling BN within PCC and doing it separately just afterwards yields proper results.

General / color cast with Photometry Color Calibration and M31
« on: 2018 February 10 09:17:27 »

I've been trying for hours to understand why I'm getting a huge color cast with the new PCC feature on my M31 data, to no avail.

I'm attaching a screenshot of the same data processed with different settings: PCC compared to a normal color-correction flow (BN+CC using the whole frame as white reference). PCC uses the same preview bounding box for BN.

I start from data integrated with LocalNormalization, and I only applied DBE afterwards.

Once color corrected (either with BN+CC or PCC), I applied HT using the auto-STF (linked) settings, followed by SCNR (100%, average neutral), and finally I (over)boosted the saturation using LRGBCombination with the image's own L channel.

For some reason, when I do the color correction with PCC, applying auto-STF (linked mode) yields a huge color cast (it looks way better if I unlink the channels).

With the classical BN+CC I get a green cast which is killed once SCNR is applied, but with the data processed with PCC I don't get similar results at all, as you can see.
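For reference, SCNR's "average neutral" mode at 100% amounts to capping green at the mean of red and blue. A NumPy sketch (the function name is illustrative, and this ignores the masking options the real process offers):

```python
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """SCNR with 'average neutral' protection: cap the green channel at
    the mean of red and blue, blended in by `amount` (1.0 = 100%)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    capped = np.minimum(g, (r + b) / 2.0)
    out = rgb.copy()
    out[..., 1] = g * (1.0 - amount) + capped * amount
    return out
```

Because it only ever lowers green, it can kill a green cast but cannot fix a cast in the other channels, which may be why the PCC result behaves differently.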

I tried lowering the saturation threshold in PCC to 0.25 (from my data it seems it should ideally be 0.21, but the minimum allowed in the UI is 0.25), but I got the same results.
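A rough way to estimate from the data itself where a channel saturates (just a high-quantile sketch in NumPy, not how PCC actually computes it):

```python
import numpy as np

def estimate_saturation_level(channel, top_fraction=1e-4):
    """Very rough saturation estimate: the level below which all but
    the brightest `top_fraction` of pixels fall. A PCC-style saturation
    threshold would then be set just under this value."""
    return float(np.quantile(channel, 1.0 - top_fraction))
```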

Am I doing something wrong, or is this a corner case where BN+CC is better?

Any help/expertise would be greatly appreciated.

IMHO, having implemented a bunch of GPGPU algorithms, CUDA is a bad choice.

OpenCL would be more appropriate: it works with any graphics card and you'd get comparable performance.

General / Re: deconvolution and masking
« on: 2017 June 25 03:14:11 »
Thank you very much, that's very helpful as usual  :)

Unfortunately my data has quite a low SNR (only 12 min of exposure due to guiding nightmares, but I'm still trying to get what I can out of it). I'll try anyway to use the wavelet parameters to avoid needing a mask for the background.

I have a question about your star mask tutorial:
why are you setting the small-scale factor to 1 and the compensation to 2 instead of 0? You don't want to apply the deconvolution to the small stars either (and reduce them slightly)?

General / Re: deconvolution and masking
« on: 2017 June 24 02:16:57 »
Alejandro, just to make sure we're on the same page.

In most tutorials, an L mask is used to protect the background and apply the deconvolution only where there's data (usually the bright parts).

If I understand correctly, you're saying I should generate a star mask for the big stars (especially the ones with spikes) and subtract it from the L mask before applying it (hence excluding the big stars from the deconvolution as well)?

Intuitively that's what I would do, but I want to make sure you're saying the same thing.
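That combination can be sketched in NumPy (a sketch, assuming both masks are images in [0, 1]):

```python
import numpy as np

def deconvolution_mask(lum_mask, star_mask):
    """Subtract the big-star mask from the luminance mask, so that
    deconvolution is applied to bright structure but neither to the
    background nor to the problematic stars."""
    return np.clip(lum_mask - star_mask, 0.0, 1.0)
```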
