Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - ChoJin

Pages: [1] 2
1
Gallery / M31 DSLR
« on: 2018 March 12 04:52:00 »
Yet another M31 photo. My childhood dream.

This is the first light of my Skywatcher Quattro 250mm/1000mm f/4 with my unmodded Canon 6D.
It is also my first astrophoto taken with my own gear.

Taken in August 2016 from the Aubrac region in France, which has a lovely dark sky. No light-pollution filter.

It took me forever to process; I started over from scratch three times. I've spent so many hours on it that I only see the flaws... I guess it's time to stop and let it go ;-)

1h59m of total exposure, with 2- and 5-minute subs.

All the details are in the description on my Flickr page:

https://www.flickr.com/photos/-chojin-/38946513840/

2
PCL and PJSR Development / parallel processing
« on: 2018 February 23 11:35:45 »
Hello,

I'd like to implement a new process module, and I'd like to know which existing process module would be the best example of parallel processing, and how it is best done (best practice, coding-wise, within the PCL framework).

NB: I know how to write parallel algorithms; my question is specifically about how to do it within PCL for a process module.
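
To make the question concrete, here is the generic pattern I have in mind, sketched with std::thread rather than PCL's own thread classes (the row-partitioning scheme and function names are just my illustration, not the actual PCL API):

[code]
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

// Sketch only: PCL provides its own Thread class and dispatch helpers;
// this just illustrates the generic row-partitioning pattern.
void processImageParallel( std::vector<float>& img, int width, int height )
{
   int numThreads = int( std::max( 1u, std::thread::hardware_concurrency() ) );
   int rowsPerThread = (height + numThreads - 1)/numThreads;

   std::vector<std::thread> workers;
   for ( int t = 0; t < numThreads; ++t )
   {
      int y0 = t*rowsPerThread;
      int y1 = std::min( height, y0 + rowsPerThread );
      workers.emplace_back( [&img, width, y0, y1]
      {
         // Each worker owns a disjoint band of rows: no locking required.
         for ( int y = y0; y < y1; ++y )
            for ( int x = 0; x < width; ++x )
            {
               std::size_t p = std::size_t( y )*width + x;
               img[p] = std::sqrt( img[p] ); // placeholder per-pixel operation
            }
      } );
   }
   for ( auto& w : workers )
      w.join();
}
[/code]

What I'd like to know is which shipped module does this the "PCL way" (thread objects, status monitoring, abort support, etc.) so I can mimic it.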

3
General / How/when to combine synthetic lum
« on: 2018 February 23 11:24:33 »
Hello,

I'm trying to figure out the best way to combine a synthetic lum.

My synthetic lum is computed using ImageIntegration (noise weighting) from my RGB data (DSLR) after DBE+PhotometricColorCalibration+BackgroundNeutralization+SCNR.

I did some processing on this synthetic lum (deconvolution + noise reduction), and now I'm trying to figure out how and when to merge this processed synthetic lum back into my RGB data without messing up my color calibration.

I stumbled upon https://pixinsight.com/forum/index.php?topic=2485.0

and I'm not sure which method is better/less hazardous. Should I stretch both my Lsyn and RGB, LinearFit my Lsyn, and LRGBCombine them, or should I go with the linear CIE XYZ space method?
Which one is better (for SNR, color balance, etc.)? What are the advantages and drawbacks of each method?

Hopefully someone can shed some light on this matter, because I'm having a hard time deciding :o
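
For reference, my understanding of the linear CIE XYZ method (assuming linear RGB with sRGB/Rec.709 primaries and D65 white point, which is my assumption for DSLR data) is:

[code]
Y = 0.2126*R + 0.7152*G + 0.0722*B    (the luminance row of the RGB->XYZ matrix)

1. convert the linear RGB image to XYZ
2. replace Y with Lsyn (rescaled/linear-fitted to match the original Y)
3. convert back to RGB
[/code]

i.e. only the luminance plane changes and the x,y chromaticities stay untouched, which is why I would naively expect it to be gentler on the color calibration. But maybe I'm misreading the thread.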

4
Wish List / connected component filtering
« on: 2018 February 12 01:17:40 »
Hello,

One (morphology) feature that IMHO would make filtering star masks very easy is a connected-component filter: one that keeps or removes connected components that are {smaller than, bigger than, within a range of} X pixels.

Playing with wavelet layers to filter structures by size doesn't always give the results we'd hope for (e.g. leftovers of bigger structures), whereas if we generate a binary star mask, we could afterwards very easily filter the result with a size-based connected-component filter (smaller/bigger/range), as sketched below.
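
To illustrate, here is a minimal sketch of what I mean, in plain C++ on a binary mask (BFS flood-fill labeling; the function name and parameters are just for illustration):

[code]
#include <cstddef>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Label the 4-connected components of a binary mask and erase those whose
// pixel count falls outside [minSize, maxSize].
void filterComponentsBySize( std::vector<std::uint8_t>& mask, int w, int h,
                             std::size_t minSize, std::size_t maxSize )
{
   std::vector<char> visited( mask.size(), 0 );
   for ( int y = 0; y < h; ++y )
      for ( int x = 0; x < w; ++x )
      {
         std::size_t p0 = std::size_t( y )*w + x;
         if ( mask[p0] == 0 || visited[p0] )
            continue;

         // Collect one connected component with a BFS flood fill.
         std::vector<std::size_t> component;
         std::queue<std::pair<int,int>> q;
         q.push( { x, y } );
         visited[p0] = 1;
         while ( !q.empty() )
         {
            auto [cx, cy] = q.front(); q.pop();
            component.push_back( std::size_t( cy )*w + cx );
            const int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
            for ( int k = 0; k < 4; ++k )
            {
               int nx = cx + dx[k], ny = cy + dy[k];
               if ( nx < 0 || nx >= w || ny < 0 || ny >= h )
                  continue;
               std::size_t np = std::size_t( ny )*w + nx;
               if ( mask[np] != 0 && !visited[np] )
               {
                  visited[np] = 1;
                  q.push( { nx, ny } );
               }
            }
         }

         // Size-based filtering: remove components outside the wanted range.
         if ( component.size() < minSize || component.size() > maxSize )
            for ( std::size_t p : component )
               mask[p] = 0;
      }
}
[/code]

Applied to a binarized star mask, minSize/maxSize directly select the star sizes to keep, which is exactly the smaller/bigger/range filter I'm wishing for.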

5
Hello,

I'm trying to apply deconvolution to my (synthetic, linear) L channel, right after DBE and color calibration, and I'm getting those famous spaghetti artefacts.
Please find attached three very close-up screenshots (the original resolution is 10850x7210): one without deconvolution, one with 10 iterations, and one with 50 iterations.

As you can see with 50 iterations, there are spaghetti "donuts" around the stars. With 10 iterations it's not that bad, I guess.

What I don't understand is why I'm getting these spaghetti artefacts and how to prevent them. Even with 20 iterations I already see them, whereas in most tutorials people run 40-50 iterations without issues. (I'm also far from getting the drastic improvements I see in most tutorials, but that's another matter. Maybe deconvolution doesn't like drizzled data?)
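
For context, my (possibly naive) understanding is that PixInsight's Deconvolution is a regularized Richardson-Lucy, whose basic update is

[code]
f[k+1] = f[k] * ( ( g / (f[k] (*) h) ) (*) h' )
[/code]

where g is the observed image, h the PSF, (*) denotes convolution, and h' the mirrored PSF. Each iteration sharpens, but also amplifies high-frequency error around bright point sources, which would explain why the ringing grows with the iteration count. (Correct me if that reading is wrong.)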

I fine-tuned my parameters on small previews with 10 iterations: global deringing set to 0.005, local deringing enabled (the spaghetti don't come from local deringing; I double-checked that I still get them with this feature disabled), and wavelet regularization enabled.

Maybe I'm too picky... After all, given the resolution of the image, I can't see those artefacts at all unless I zoom in a lot, but I'm afraid they will get amplified later during the non-linear stages. And maybe they will show if I print the image at some point?

Anyway, any help/hints/thoughts on this matter would be appreciated.

6
General / color cast with PhotometricColorCalibration and M31
« on: 2018 February 10 09:17:27 »
Hello,

I've been trying for hours, to no avail, to understand why I'm getting a huge color cast when trying the new PCC feature on my M31 data.

I attach a screenshot of the same data processed with different settings: PCC compared to a normal color-correction flow (BN+CC using the whole frame as white reference). PCC uses the same preview bounding box for the BN.

I start from data integrated with LocalNormalization, and I only applied DBE afterwards.

Once color-corrected (either with BN+CC or PCC), I apply HT using the auto-STF (linked) settings, followed by SCNR (100% average neutral), and finally I (over)boost the saturation using LRGBCombination with the image's own L channel.

For some reason, when I do the color correction with PCC, applying auto-STF (linked mode) yields a huge color cast (it looks way better if I unlink the channels).

With the classical BN+CC, I get a green cast that is killed once SCNR is applied, but with the data processed with PCC I don't get similar results at all, as you can see.
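
(For reference, my understanding of SCNR's "average neutral" protection at 100% is simply, per pixel:

[code]
G' = min( G, (R + B)/2 )
[/code]

so it can only ever reduce green. That would explain why it kills the BN+CC green cast but can't help with whatever PCC does to the other channels.)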

I tried changing the saturation threshold in PCC down to 0.25 (from my data it seems it should ideally be set to 0.21, but I can't: the minimum is 0.25 in the UI), but I got the same results.

Am I doing something wrong or is it a corner case where BN+CC would be better?

Any help/expertise would be greatly appreciated.

7
General / deconvolution and masking
« on: 2017 June 23 02:59:05 »
A quick/stupid question, to find out whether I'm doing it wrong...

Do you remove/protect the big stars (the ones with diffraction spikes on Newtonian scopes) from the general mask when applying deconvolution, or not?

If I don't, the dark rings are harder to prevent on those stars because they are emphasized by the spikes surrounding them. And if I do prevent them, the deconvolution is far less effective (on what I really care about), because I have to reduce the effect too much while adjusting the global dark parameter.

So, should I remove the few big stars from my general mask and just ignore them during deconvolution? Should I try to include the spikes in that mask as well? Or am I doing something wrong when fine-tuning the deringing parameters, and I wouldn't have this issue if I did it better/differently?

(It seems to me that stronger deringing also reduces the deconvolution's effectiveness.)

8
General / 3dplot weirdness
« on: 2016 October 10 14:29:41 »
I'm trying to use 3dplot on a preview window to draw the star profiles.

It works very well with my HistogramTransform'ed image, but not with my MaskedStretch'ed image.

See the attached images.

Why do I get this weird result with the MaskedStretch'ed image? The image itself looks fine and very similar, except the peaks are around 0.8 instead of ~0.98.

I tried playing with the z scale, but it doesn't seem to change anything.

9
General / stretch and target background
« on: 2016 October 09 11:42:16 »
It's a simple question I suppose:

I want to stretch my image both with a HistogramTransform and a MaskedStretch and combine the results with a starmask.

If I use STF to generate my HT settings, should I use the same target background for MaskedStretch that I used with STF, or should I still use readout mode to choose MaskedStretch's target background, as tutorials often suggest?

I ask because although I chose a target background of 0.15 with STF, my typical background after HT is below 0.08 (which is normal, I suppose: STF's target background applies to the whole image, and faint stuff probably raises the statistics).

In MaskedStretch, should I therefore still use 0.15 as the target background (the same as STF's value), or should I go with the 0.08 I actually observe in readout mode after my HT stretch? Which one would match better or make more sense overall?
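
For context, my understanding is that STF and HT share the same midtones transfer function,

[code]
MTF(x; m) = ((m - 1)*x) / ((2*m - 1)*x - m)
[/code]

and that the auto-STF picks the midtones balance m so that the image's (shadow-clipped) median is mapped onto the chosen target background. MaskedStretch instead iterates until the background itself reaches its target value, which is why I'm unsure whether feeding it STF's nominal 0.15 or my measured 0.08 is the more consistent choice.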

10
General / bringing out faint stuff
« on: 2016 October 02 14:12:35 »
What is your strategy for bringing out faint stuff in PI?

For Photoshop, I've seen this one: http://www.astronomersdoitinthedark.com/Bringing-Out-the-FaintStuff.php

Would you do it differently with PI? How?

11
Hello,

I'm currently trying to do some DSLR processing using a synthetic L created from ImageIntegration.

But I'm a bit lost about what to do after deconvolution and linear noise reduction on the Lsyn data.

When should I combine my RGB data back with my Lsyn data?

Initially I thought I would do some non-linear processing (to bring out faint stuff and HDR/local contrast) on my Lsyn data before the ChannelCombination, but then I realized that if I did that, I would not be able to do a RepairedHSV before a MaskedStretch.
Or should I do a ChannelCombination with my linear Lsyn (after deconvolution + noise reduction, for instance), then apply my RepairedHSV/MaskedStretch to bootstrap the non-linear processing of the RGB data, while continuing to work on my Lsyn non-linear processing in parallel (and later do another ChannelCombination once both the RGB and Lsyn non-linear processing are done)?

In other words, I'm quite confused about how long I should keep my Lsyn processing separate and when I should really combine it back. Please help? :-)

PS: I initially went down the Lsyn road because it seemed a nice way to minimize noise, and also because I wanted to bring out some faint details and it seemed to make more sense to do that with L. Correct me if I'm on the wrong path.


12
Wish List / swap disk and ramdisk
« on: 2016 September 18 06:09:08 »
It would be nice, on modest systems, to be able to use a small ramdisk without limiting the total amount of swap space.

At the moment, if I create a 4 GB ramdisk, the total amount of swap is limited to 8 GB even if I also provide a path to an SSD, which is obviously too low. It's unfortunate, on laptops for instance, because a 2-4 GB ramdisk could really speed up PI if PI could use the ramdisk for the most recent histories or caches and use the SSD when the ramdisk runs out of space (or to move older data out of the ramdisk), as sketched below.
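
Just to illustrate the kind of fallback logic I mean (a purely hypothetical sketch; pickSwapDirectory and the required-space estimate are made up, and this has nothing to do with PI's actual swap-file code):

[code]
#include <cstdint>
#include <filesystem>
#include <string>
#include <system_error>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical: given swap directories in order of preference (ramdisk
// first, SSD second), return the first one with enough free space, so a
// small ramdisk accelerates recent data without capping the total swap.
std::string pickSwapDirectory( const std::vector<std::string>& dirs,
                               std::uintmax_t requiredBytes )
{
   for ( const auto& dir : dirs )
   {
      std::error_code ec;
      fs::space_info s = fs::space( dir, ec );
      if ( !ec && s.available >= requiredBytes )
         return dir;       // e.g. the 4 GB ramdisk, while it still has room
   }
   return dirs.back();     // fall back to the last (largest) location
}
[/code]

e.g. pickSwapDirectory( { "/mnt/ramdisk", "/home/me/pi-swap" }, required ) would use the ramdisk until it fills up, and the SSD afterwards.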

13
Bug Reports / storage size of project files (previews?)
« on: 2016 September 11 11:42:49 »
I'm not sure whether this is a bug or a wish-list item.

I've just noticed that the storage size of a preview, when saving a project with images, seems to be the same as that of the full image.

My current project has an image stored compressed at roughly 700 MB, and I have 13 (tiny) previews on that image.
Looking at the saved project data, I can see 14 files of roughly 700 MB each. So it seems each preview is somehow stored as a full-sized image, yielding a very big project.

I'm surprised; I would have expected a preview to take almost no space to store (just its coordinates). Am I missing something?
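
To illustrate what I would expect (a purely hypothetical structure, not the actual project format):

[code]
#include <string>

// Hypothetical: all a preview seems to need for persistence is its
// geometry plus a reference to the one stored full image.
struct PreviewRecord
{
   std::string parentImageId; // the single, shared stored image
   int x, y, width, height;   // preview rectangle in image coordinates
};
[/code]

i.e. a few bytes per preview, instead of another ~700 MB copy of the pixels.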

14
General / working with previews
« on: 2016 September 11 09:59:37 »
Hello,

I have two quick questions when working with previews:

- is there a way to "scroll" the left vertical tabs with the previews on the side of the image? I often have many of them, and although I know I can use the next/previous shortcut to jump to the next/previous preview, it would be more convenient if I could just scroll the tabs and click on the one I want. Is there a way to do that?

- is there a way to "save" my previews (in the history for instance, or however)?
I'm trying to preserve my history as much as I can to be able to "replay" each step later on if I need to make adjustments (or just review them).
For instance, early in the workflow I define a bunch of previews to aggregate them for my background neutralization and color calibration. I'd like to be able to retrieve them later on if I want to revisit my workflow, but it's annoying to keep all my previews throughout the whole workflow. I therefore would like to "save" them, and then remove them from my active image before continuing my process. Is there a way to do that without having to clone the image (or generate an empty image and just duplicate the previews, which seems like a hack/workaround to me)?

15
Hello,

Following
http://pixinsight.com/forum/index.php?topic=5764.msg39382#msg39382
and
http://pixinsight.com/forum/index.php?topic=8434
I'm trying to make sense of the noise evaluation functions (in both the standalone script and the SubframeSelector).

Obviously, I have a few questions:
- Why don't both scripts use BWMV() to scale the result and thereby improve robustness, so that the noise can be compared properly between frames/subframes? (The formula I mean is pasted below.)
- In SubframeSelector's noiseOfImage function, I can see that it computes the value for layer 4 first, then goes down to layer 2 until it finds a suitable value. Why? I don't understand why we don't take the overall RMS, or at least layer 1.

- And a side question: if I use the current noise evaluation in the SubframeSelector, does that mean I'm giving up the improved scaled estimators within the ImageIntegration process? Should I therefore just use it to reject some frames and rely on the ImageIntegration process for the weighting?
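
(For reference, the biweight midvariance I mean, as I understand it from the literature, is:

[code]
u_i = (x_i - median) / (9*MAD)

           sum over |u_i|<1 of (x_i - median)^2 * (1 - u_i^2)^4
BWMV = n * ----------------------------------------------------
           ( sum over |u_i|<1 of (1 - u_i^2)*(1 - 5*u_i^2) )^2
[/code]

i.e. a robust scale estimator that could normalize the per-frame noise figures before comparing them.)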

Sorry, I'm new to PixInsight, so my questions might seem very trivial to most of you.
