Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dpaul

Pages: 1 2 [3] 4 5 ... 9
31
General / Re: Debayering One-Shot Color data
« on: 2018 March 04 16:01:55 »
Hi Warren,

Thanks for this, I will only be debayering the integrated FITS frames.

The main problem was trying to debayer 2x2 binned files; I need to work with 1x1 binning only.
I haven't tried it yet - I've been having fun with LRGB imaging with a mono camera, and it's been snowing for a week in the UK.
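
Just to check I understand why the binning is the problem - my rough mental model as a Python-style sketch (illustration only, not how the Debayer process actually works; 'cfa' is a placeholder 2D array):

    import numpy as np

    def superpixel_debayer_rggb(cfa):
        # Collapse each 2x2 RGGB cell into one RGB pixel (half resolution).
        r  = cfa[0::2, 0::2]                       # top-left of each cell
        g1 = cfa[0::2, 1::2]                       # top-right
        g2 = cfa[1::2, 0::2]                       # bottom-left
        b  = cfa[1::2, 1::2]                       # bottom-right
        return np.dstack([r, (g1 + g2) / 2.0, b])  # (H/2, W/2, 3) RGB image

    # With 2x2 binning the camera has already averaged the R, G, G and B wells
    # of each cell into a single value, so the color identity of every pixel
    # is lost and there is nothing left to debayer.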

I will try again with OSC soon.

Thanks

David

32
General / Re: Processing Question: Where did the color go?
« on: 2018 March 04 01:39:37 »
Thanks again, Rick.

David

33
Rick

Thanks for this. I've never used such a mask, but I will try it.

David

34
General / Re: Processing Question: Where did the color go?
« on: 2018 March 03 17:40:16 »
Hi Cho,

How about doing it partially with ArcsinhStretch and then a Histogram Transformation stretch, to get both color and contrast?

Thanks

David

35
Wim

Thanks for the detailed reply, much appreciated!

I was already doing much of your methodology, except for the following:

1/. I never bothered with Linear Fit, but I will take note in future.
2/. DBE I've usually done 'after' background neutralisation and color calibration, but that is probably causing the 'splodgy' background, so I'll try later instead.
3/. Deconvolution I do on both RGB and Luminance, but I appreciate the importance of the Luminance for detail, so I'll try just the Luminance in future.
4/. HDR compression I haven't been using - it gives great contrast but seems to darken images a lot, so I'll experiment more!
5/. Morphological Transformation I've always done very early in the linear stage - I'll try it later, after going non-linear.
6/. Currently I'm combining the color-calibrated RGB image and the luminance image in the linear state and then doing ArcsinhStretch (or Histogram Transformation until very recently). However, I agree the RGB and Luminance are not well matched (the Lum dominates), so I'll try combining them after stretching.

Regarding ArcsinhStretch, I do like this tool - so far I've set the black point 'just before' it starts to crop pixels, then adjusted the stretch factor. Interestingly, some of my data shows much less cropping than other data, so this might be a useful indicator of how well the image calibration was done with the dark master (e.g. not setting optimisation too high).
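
What I mean by checking the cropping, roughly, as a Python-style sketch (just an illustration, not the actual tool; 'linear_rgb' and the numbers are placeholders):

    import numpy as np

    def clipped_fraction(linear_rgb, black_point):
        # Fraction of pixels whose darkest channel falls at or below the
        # chosen black point, i.e. pixels that would be cropped to zero.
        return float(np.mean(linear_rgb.min(axis=2) <= black_point))

    # e.g. raise black_point until clipped_fraction() starts to climb, then
    # back off slightly; a consistently higher fraction for the same data
    # might point at a calibration/dark-optimisation issue, as I said above.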

What I've also tried is a less aggressive ArcsinhStretch followed by a slight Histogram Transformation - the former gives better colors, the latter may help contrast (I think).

One question on using masks - there are two ways I've done this:

(a) Take a clone image and use range selection to get a mono image (of, say, a galaxy), then blur it a bit.
(b) Take a clone image, apply DBE and stretch it.

Option (a) probably protects the areas you want to protect better, and gives softer transitions, than option (b) - a rough sketch of what I mean by (a) is below. Any suggestions on this?
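
What I mean by option (a), as a rough Python-style sketch (illustration only - 'lum', the limits and the blur amount are placeholders, not RangeSelection's actual defaults):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def range_mask(lum, lower=0.05, upper=1.0, blur_sigma=3.0):
        # Select pixels between the lower and upper limits (like range selection),
        # then blur the binary selection so the mask has soft edges.
        mask = ((lum >= lower) & (lum <= upper)).astype(float)
        return gaussian_filter(mask, sigma=blur_sigma)

    # Option (b) - DBE then stretch a clone - gives a continuous mask instead,
    # but its transitions depend on the stretch rather than an explicit blur.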

Thanks a lot

David



36
Hi Wim,

That was 'very' useful, thank you - it explains why and not just how.

It clearly shows the order of priority with linear images:

Debanding if required
Background Neutralisation
Color Calibration
Noise reduction (SCNR then TGVDenoise)

Still have a few questions:

1/. When should Morphological Transformation and Deconvolution take place compared to the above (and which comes first, Deconvolution or MT)?
2/. Until now I've done noise reduction, MT and deconvolution on the separate R, G and B integrated frames (e.g. an integration of 20 Red frames), except of course color calibration, which is done after channel combination. So I'm wondering if I should channel combine 'first' and then do all of the above?

Thanks

David

37
General / Arcsinhstretch vs Histogram Transformation
« on: 2018 March 02 19:46:03 »
Just wanted to share my first attempt at reprocessing the same data of M99. The one with more 'blue' was processed using ArcsinhStretch, with MultiscaleLinearTransform for noise reduction; the other used Histogram Transformation and TGVDenoise.

In general, the ArcsinhStretch version was processed as follows:

1/. The integrated R, G, B and L frames were separately deconvolved and noise reduced, then MT was applied for star reduction - all using a galaxy mask (inverted and non-inverted) and a star mask.
2/. Combined R, G and B into an RGB image using ChannelCombination.
3/. Background-neutralised the RGB image, color calibrated it and applied DBE (with an inverted galaxy mask).
4/. Combined the RGB image and the L image by dropping the L onto the RGB using LRGBCombination.
5/. A final bit of tweaking of the saturation and background darkness using Curves (with a galaxy mask, inverted and non-inverted).

I'm still a novice, but this just shows the same data can be processed better. I found MLT easier to use for background noise reduction than TGVDenoise, with less chance of a bad result. Also, ArcsinhStretch seems to give better colors than a histogram stretch.

I don't know if I did things in the right order, but I'm reasonably happy with the result.

(All images taken with an Atik Horizon, 30" Lockwood optics Dobsonian, f/3.5.)

Thanks

David

38
Hi Carlos,

Did you get a chance to see my reply below from February 27th?

Hi Carlos,

Thanks for the reply -
Before I ask some questions related to your note, I have a few fundamental questions:

1/. Which of the following is best:
(a) Use TGVDenoise on the R, G, B and L separately?
(b) Channel combine into RGB, then use TGVDenoise on it and separately on the luminance?
(c) Complete the LRGB combination and then (whilst still linear) use TGVDenoise?

2/. What should be the order of priority out of the following (and why)?
(a) TGVDenoise
(b) MT to reduce star sizes
(c) Deconvolution

3/. When creating a mask to protect a galaxy (for example), I make a clone, stretch it, remove the background with DBE, then invert it. Should the mask be a clone of the color image (whether a single channel or RGB), or should I just extract the lightness?

4/. When creating a mask (as in question 3), currently I'm not blurring the edges in any way, but I probably should be. I think I should use 'range selection' on the mask and play around with the 'lower limit' and the smoothness (to blur the edges) - is this the best way? Also, what about star masks - do they need to be blurred?


Now my final questions relate to local support:

5/. When I check 'local support', which support image am I picking - presumably the mask?

6/. Assuming I leave midtones, shadows and highlights at their defaults, what about 'noise reduction' - should it be left at the default of zero?


Many thanks in advance.

Regards

David

39
Hi Simon,

Just my opinion, but the second picture looks great, with more detail in the core.

David

40
General / Re: ArcsinhStretch and White Center in Stars
« on: 2018 February 28 17:38:14 »
What a fantastic tool - I just tried it (never realised it existed).
Correct me if I'm wrong, but it seems to do the same as Histogram Transformation and a Curves transformation in one tool.
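
As I understand it (please correct me), the two stretches compare roughly like this as a Python-style sketch - the functions and parameter values are illustrative only, not the tools' actual internals:

    import numpy as np

    def midtones_stretch(x, m=0.25):
        # Histogram Transformation's midtones transfer function,
        # applied to every channel independently.
        return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

    def arcsinh_stretch(rgb, b=50.0):
        # Stretch a luminance estimate with asinh, then apply the SAME scale
        # factor to R, G and B - which is why the color ratios survive.
        lum = rgb.mean(axis=2, keepdims=True)
        scale = np.arcsinh(b * lum) / (np.arcsinh(b) * np.maximum(lum, 1e-6))
        return np.clip(rgb * scale, 0.0, 1.0)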

I've always stretched the data and then played around with darkening the background with a Curves transformation (and an inverted mask to protect a galaxy).
Now with ArcsinhStretch I can stretch the data and darken the background much more effectively and with better colors. I presume it's then no longer linear?

A question - what about the Lum content? If I just use the process on the linear RGB and then add the Lum, it washes out the result somewhat. On the other hand, it seems less effective when I use it on a linear LRGB image - which way is best?

Thanks

David

41
Thanks Cho

Regards

David

42
Hi Bernd,

Interesting -

I'll try both ways in due course.

Thanks

David

43
Hi Carlos,

Thanks for the reply -
Before I ask some questions related to your note, I have a few fundamental questions:

1/. Which of the following is best:
(a) Use TGVDenoise on the R, G, B and L separately?
(b) Channel combine into RGB, then use TGVDenoise on it and separately on the luminance?
(c) Complete the LRGB combination and then (whilst still linear) use TGVDenoise?

2/. What should be the order of priority out of the following (and why)?
(a) TGVDenoise
(b) MT to reduce star sizes
(c) Deconvolution

3/. When creating a mask to protect a galaxy (for example), I make a clone, stretch it, remove the background with DBE, then invert it. Should the mask be a clone of the color image (whether a single channel or RGB), or should I just extract the lightness?

4/. When creating a mask (as in question 3), currently I'm not blurring the edges in any way, but I probably should be. I think I should use 'range selection' on the mask and play around with the 'lower limit' and the smoothness (to blur the edges) - is this the best way? Also, what about star masks - do they need to be blurred?


Now my final questions relate to local support:

5/. When I check 'local support', which support image am I picking - presumably the mask?

6/. Assuming I leave midtones, shadows and highlights at their defaults, what about 'noise reduction' - should it be left at the default of zero?


Many thanks in advance.

Regards

David

44
Hi Cho,

Thanks for the note.

Just to confirm a few things:

1/. In Statistics, do you mean avgDev is the standard deviation?

2/. When I then 'use this value as the edge protection value', do I simply adjust the exponent and the slider to match that number?
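
Just so we're talking about the same number, my understanding of the difference, as a rough Python sketch ('pixels' is just a placeholder for the image samples):

    import numpy as np

    pixels = np.random.rand(100000)  # placeholder for the image's pixel values

    std_dev = pixels.std()                                 # standard deviation
    avg_dev = np.mean(np.abs(pixels - np.median(pixels)))  # average absolute
                                                           # deviation from the median

    # The two are similar in size for clean data, but avgDev is more robust to
    # outliers such as stars and hot pixels - which I assume is why it was
    # suggested as the starting point for the edge-protection value.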

Regards

David


45
Hi Bernd,

The camera is an Atik Horizon (mono) with a cooled CMOS sensor.

When using the default optimisation of 3, the warning usually occurs with the luminance frames and not the RGB ones - not all the time, say 50% of occasions.
So it would seem to be occurring when the frames are more light-saturated (they appear a lot brighter unstretched than the RGB). Maybe I could also back off slightly on the exposure length if necessary. I live in a reasonably dark area, visible mag about 5.5 to the naked eye, but there is some light pollution. I'm not using any other filters except for the Baader LRGB set.

I'll take note of the warnings when they occur and drop the optimisation a little.

Thanks for the input!

Regards

David




