Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dpaul

Pages: 1 2 [3] 4 5 ... 9
31
General / Re: Processing Question: Where did the color go?
« on: 2018 March 03 17:40:16 »
Hi Cho,

How about doing it partially with ArcsinhStretch and partially with a HistogramTransformation stretch, to get both color and contrast?

Thanks

David

32
Hi Wim,

Thanks for the detailed reply, much appreciated!

I was already doing much of your methodology, except for the following:

1/. I never bothered with LinearFit, but I will take note in future.
2/. DBE I've usually done 'after' background neutralisation and color calibration, but that is probably causing the 'splodgy background', so I'll try it later instead.
3/. Deconvolution I do on both RGB and luminance, but I appreciate the importance of the luminance for detail, so I'll just do it on the luminance in future.
4/. HDR compression I haven't been using - it gives great contrast but seems to darken images a lot; I'll experiment more!
5/. MorphologicalTransformation I've always done very early in the linear stage - I'll try it later, after going non-linear.
6/. Currently I'm combining the color-calibrated RGB image and the luminance image in the linear state and then doing ArcsinhStretch (or, until very recently, HistogramTransformation). However, I agree the RGB and luminance are not well matched (the luminance dominates), so I'll try combining them after stretching.
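For my own notes, my understanding of what LinearFit does: it maps one image onto the brightness scale of a reference image with a straight-line fit. A minimal sketch in Python/NumPy (a plain least-squares fit; the real tool also has outlier rejection limits, which I've omitted):

```python
import numpy as np

def linear_fit(target, reference):
    """Sketch of LinearFit's idea: find a, b so that a*target + b
    best matches reference in a least-squares sense, then apply it.
    (PixInsight's tool also rejects outlier pixels; omitted here.)"""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b
```

Running the R, G and B channels through this against a common reference channel before combining should give a better-balanced starting point for color calibration.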

Regarding ArcsinhStretch, I do like this tool - so far I've set the black point 'just before' it starts to clip pixels, then adjusted the stretch factor to taste. Interestingly, some of my data shows much less clipping than other data, so this might be a useful indicator of how well the image calibration with the master dark was done (e.g. not setting optimisation too high).

What I've also tried is a less aggressive ArcsinhStretch followed by a slight HistogramTransformation - the former gives better colors, and the latter may help contrast (I think).
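To illustrate why the arcsinh curve holds color better: it boosts all three channels of a pixel by the same luminance-derived factor, so the channel ratios survive the stretch. A rough Python sketch of the principle (my own simplification, using a plain channel mean as the luminance estimate - not the actual ArcsinhStretch code):

```python
import numpy as np

def arcsinh_stretch_rgb(rgb, black_point=0.0, stretch=100.0):
    """Rough sketch of a color-preserving arcsinh stretch.

    Every channel of a pixel is multiplied by the same factor,
    computed from that pixel's luminance, so R:G:B ratios are kept."""
    x = np.clip(rgb - black_point, 0.0, None)
    lum = x.mean(axis=-1, keepdims=True)   # crude luminance estimate
    factor = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * np.maximum(lum, 1e-12))
    return np.clip(x * factor, 0.0, 1.0)

# A faint pixel is boosted strongly but keeps its color balance:
pixel = np.array([[0.10, 0.05, 0.02]])
out = arcsinh_stretch_rgb(pixel)
```

A plain per-channel midtones stretch, by contrast, compresses the bright channel more than the faint ones, which is one reason histogram-stretched stars and cores wash out towards white.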

One question on using masks - there are two ways I've done this:

(a) Take a clone image and use RangeSelection to get a mono image (of, say, a galaxy), then blur it a bit.
(b) Take a clone image, apply DBE and stretch it.

Option (a) probably protects the areas you want better and gives softer transitions than option (b). Any suggestions on this?
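For what it's worth, option (a) can be sketched in Python like this (my own simplification: a linear ramp between the limits standing in for RangeSelection's fuzziness/smoothness parameters, and a repeated 3x3 mean filter standing in for the blur):

```python
import numpy as np

def range_mask(img, lower=0.1, upper=1.0, blur_passes=2):
    """Sketch of option (a): select pixels between the limits with a
    linear ramp, then soften the edges with a few passes of a 3x3
    mean filter so the mask transitions gradually."""
    mask = np.clip((img - lower) / max(upper - lower, 1e-12), 0.0, 1.0)
    for _ in range(blur_passes):
        # 3x3 box blur built from shifted views (reflection-padded edges)
        p = np.pad(mask, 1, mode="reflect")
        mask = sum(p[i:i + mask.shape[0], j:j + mask.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return mask
```

The blurring is the part that gives the soft transitions: without it the mask is a hard cut at the lower limit, which is exactly where processing artifacts tend to show.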

Thanks a lot

David



33
Hi Wim,

That was 'very' useful, thank you - it explains why, not just how.

It clearly shows the order of priority with linear images:

Debanding if required
Background Neutralisation
Color Calibration
Noise reduction (SCNR then TVGDenoise)

Still have a few questions:

1/. When should MorphologicalTransformation and Deconvolution take place relative to the above (and which first, Deconvolution or MT)?
2/. Until now I've done noise reduction, MT and deconvolution on the separate integrated R, G and B frames (e.g. the integration of 20 red frames), except of course color calibration, which is done after channel combination. So I'm wondering if I should channel-combine ''first'' and then do all of the above?

Thanks

David

34
General / Arcsinhstretch vs Histogram Transformation
« on: 2018 March 02 19:46:03 »
Just wanted to share my first attempt at reprocessing the same data of M99. The one with more 'blue' used ArcsinhStretch and MultiscaleLinearTransform for noise reduction. The other used HistogramTransformation and TVGDenoise.

In general, the ArcsinhStretch version was processed as follows:

1/. The integrated R, G, B and L were separately deconvolved, noise-reduced, then given MT for star reduction - all using a galaxy mask (inverted and non-inverted) and a star mask.
2/. Combined R, G and B into an RGB image using ChannelCombination.
3/. Background-neutralised the RGB image, color-calibrated it and used DBE (with an inverted galaxy mask).
4/. Combined the RGB image and the L image by dropping the L onto the RGB using LRGBCombination.
5/. A final bit of tweaking of the saturation and background darkness using curves (with a galaxy mask, inverted and non-inverted).

I'm still a novice, but this just shows the same data can be processed to a better result. I found MLT easier to use for background noise reduction than TVGDenoise, with less chance of a bad result. Also, ArcsinhStretch seems to give better colors than a histogram stretch.
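For intuition on why the multiscale approach feels gentler: as I understand it, it splits the image into detail layers and a smooth residual and only attenuates the fine layers, where most of the noise lives. A toy Python sketch of that principle, using a single [1,2,1] smoothing scale (MultiscaleLinearTransform itself uses a proper wavelet decomposition with per-layer thresholds):

```python
import numpy as np

def smooth3(img):
    """Separable [1,2,1]/4 smoothing with reflected edges."""
    p = np.pad(img, 1, mode="reflect")
    h = (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0   # horizontal pass
    return (h[:-2, :] + 2.0 * h[1:-1, :] + h[2:, :]) / 4.0  # vertical pass

def mlt_denoise(img, amount=0.5):
    """Sketch of multiscale noise reduction: split the image into a
    fine-detail layer and a smooth residual, attenuate the fine layer
    (where most of the noise lives), and recombine."""
    smooth = smooth3(img)
    detail = img - smooth
    return smooth + (1.0 - amount) * detail
```

Because the large-scale structure passes through untouched, a too-strong `amount` degrades gracefully into mild softening rather than the blotchy patches a global smoother can produce.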

Don't know if I did things in the right order but I'm reasonably happy with the result.

(all images taken with an Atik Horizon camera on a 30'' Lockwood-optics Dobsonian, f/3.5)

Thanks

David

35
Hi Carlos,

Did you get a chance to see my reply below from February 27th?





Hi Carlos,

Thanks for the reply -
Before I ask some questions related to your note, I have a few fundamental questions:

1/. Which of the following is best:
(a) Use TVGDenoise on the R, G, B and L separately?
(b) Channel-combine the RGB, then use TVGDenoise on it and separately on the luminance?
(c) Complete the LRGB combination and then (while still linear) use TVGDenoise?

2/. What should be the order of priority out of the following (and why?):
(a) TVG denoise
(b) MT to reduce star sizes
(c) Deconvolution

3/. When creating a mask to protect a galaxy (for example), I make a clone, stretch it, remove the background with DBE, then invert it. Should the mask be a clone of a color image (whether single-channel or RGB), or should I just extract the lightness?

4/. When creating a mask (as in question 3), currently I'm not blurring the edges in any way, but I probably should be. I think I should use ''range selection'' on the mask and play around with the 'lower limit' and the smoothness (to blur the edges) - is this the best way? Also, what about star masks - do they need to be blurred?


Now my final questions relate to local support:

5/. When I check 'local support', which support image should I be picking - presumably the mask?

6/. Assuming I leave midtones, shadows and highlights at their defaults, what about ''noise reduction'' - should it be left at the zero default?


Many thanks in advance.

Regards

David

36
Hi Simon,

Just my opinion, but the second picture looks great, with more detail in the core.

David

37
General / Re: ArcsinhStretch and White Center in Stars
« on: 2018 February 28 17:38:14 »
What a fantastic tool - I just tried it (never realised it existed).
Correct me if I'm wrong, but it seems to do the same as HistogramTransformation and CurvesTransformation in one tool.

I've always stretched the data and then played around with darkening the background using a curves transformation (and an inverted mask to protect the galaxy).
Now with ArcsinhStretch I can stretch the data and darken the background much more effectively, and with better colors. I presume it's then no longer linear?

A question - what about the luminance content? If I just use the process on the linear RGB and then add the luminance, it washes out the result somewhat. On the other hand, it seems less effective when I use it on a linear LRGB image - which way is best?

Thanks

David

38
Thanks Cho

Regards

David

39
Hi Bernd,

Interesting -

I'll try both ways in due course.

Thanks

David

40
Hi Carlos,

Thanks for the reply -
Before I ask some questions related to your note, I have a few fundamental questions:

1/. Which of the following is best:
(a) Use TVGDenoise on the R, G, B and L separately?
(b) Channel-combine the RGB, then use TVGDenoise on it and separately on the luminance?
(c) Complete the LRGB combination and then (while still linear) use TVGDenoise?

2/. What should be the order of priority out of the following (and why?):
(a) TVG denoise
(b) MT to reduce star sizes
(c) Deconvolution

3/. When creating a mask to protect a galaxy (for example), I make a clone, stretch it, remove the background with DBE, then invert it. Should the mask be a clone of a color image (whether single-channel or RGB), or should I just extract the lightness?

4/. When creating a mask (as in question 3), currently I'm not blurring the edges in any way, but I probably should be. I think I should use ''range selection'' on the mask and play around with the 'lower limit' and the smoothness (to blur the edges) - is this the best way? Also, what about star masks - do they need to be blurred?


Now my final questions relate to local support:

5/. When I check 'local support', which support image should I be picking - presumably the mask?

6/. Assuming I leave midtones, shadows and highlights at their defaults, what about ''noise reduction'' - should it be left at the zero default?


Many thanks in advance.

Regards

David

41
Hi Cho,

Thanks for the note.

Just to confirm a few things:

1/. In Statistics, do you mean avgDev is the standard deviation?

2/. When I then ''use this value as the edge protection value'', do I simply adjust the exponent and the slider to match that number?
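While waiting on that: as far as I can tell, avgDev is the average absolute deviation (computed about the median, I believe), which is related to but not the same as the standard deviation - the standard deviation squares the deviations, so outliers inflate it much more. A quick Python check on made-up sample values:

```python
import numpy as np

# Hypothetical background samples with one bright outlier.
data = np.array([0.10, 0.12, 0.11, 0.40, 0.09])

# avgDev: mean absolute deviation about the median
# (which is what I believe the Statistics tool reports).
avg_dev = np.mean(np.abs(data - np.median(data)))

# Standard deviation squares the deviations, so the single
# outlier pulls it up much further than avg_dev.
std_dev = np.std(data)
```

On this sample avg_dev is well below std_dev, so which one gets plugged into the edge-protection value could matter.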

Regards

David


42
Hi Bernd,

The camera is an Atik Horizon (mono) with a cooled CMOS sensor.

When using the default optimisation of 3, the warning usually occurs with the luminance frames and not the RGB ones - not all the time, on perhaps 50% of occasions.
So it would seem to occur when the frames are more light-saturated (they appear a lot brighter unstretched than the RGB ones). Maybe I could also back off slightly on the exposure length if necessary. I live in a reasonably dark area, visible magnitude about 5.5 to the naked eye, but there is some light pollution. I'm not using any filters other than the Baader LRGB set.

I'll take note of the warnings when they occur and drop the optimisation a little.
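For context, my understanding of what dark optimisation does, as a simplified Python sketch: scan candidate scale factors k and keep the one that minimises the residual noise of light − k·dark. The real ImageCalibration algorithm is considerably more refined (it works from a noise estimate rather than a raw standard deviation), so this is only the idea:

```python
import numpy as np

def optimize_dark_scale(light, dark, ks=np.linspace(0.0, 3.0, 61)):
    """Sketch of dark optimisation: try scale factors k and keep the
    one that minimises the residual noise of (light - k * dark).
    PixInsight's actual algorithm is more sophisticated; same idea."""
    best_k, best_noise = 0.0, np.inf
    for k in ks:
        residual = light - k * dark
        noise = np.std(residual)          # crude noise proxy
        if noise < best_noise:
            best_k, best_noise = k, noise
    return best_k
```

The "threshold may be set too high" warning would then be about how many pixels are considered dark-current-dominated when estimating that residual noise, which fits it appearing mostly on the brighter luminance frames.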

Thanks for the input!

Regards

David





43
Hi John,

I did use a mask - there is actually some subtle banding around the outside of the galaxy too - more obvious when the original higher quality image is opened up.

Regards

David

44
Further to my note below, I managed to avoid most of the contouring with the right background darkness and background extraction.

Attached is an image of M65 - it was taken during a first quarter moon.

David


45
Hi Bernd,

Thanks for the quick reply - yes, I did see your interesting previous post before I even sent mine. No final conclusion there, but I could see the logic of challenging whether the optimisation value of 3.0 was necessarily the best one.

My comments were not making a judgement, more an observation!

I did a test last night on a single light frame (luminance) that was calibrated at varying optimisation levels; the optimisation window was fixed at 1024 and everything else was left at default. Here are the pixel counts (% in brackets) from Statistics (binning was 2x2):

Uncalibrated raw frame: 4066260 (99.89667%)
Optimisation level 0.0: 4070466 (100%)
Optimisation level 0.1: 4070466 (100%)
Optimisation level 0.5: ditto
Optimisation level 1.0: ditto
Optimisation level 2.0: ditto
Optimisation level 3.0: ditto
Optimisation level 5.0: ditto
Optimisation level 7.5: ditto
Optimisation level 10.0: ditto

For interest, the uncalibrated frame detailed statistics were:
mean 22166
med 21952
min 14928
max 65520

The 0.0 optimisation level calibrated frame details were:
mean 20900
med 20641
min 13637
max 64239

The 3.0 optimisation level calibrated frame details were:
mean 19905
med 19644
min 12740
max 63310

The 10.0 optimisation level calibrated frame details were:
mean 13909
med 13637
min 7050
max 57730

The higher the optimisation level, the darker the appearance of the unstretched frames - the histogram peak effectively moves to the left. I have no idea whether it makes any difference at all, but when combining the LRGB results later there seems to be a better balance between the luminance and the R, G and B frames.
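That leftward shift is what you would expect from a scaled subtraction: if the calibrated frame is roughly light − k·dark, its mean drops by about k times the master dark's mean as k rises. A toy Python check with made-up numbers (not my actual frames):

```python
import numpy as np

rng = np.random.default_rng(1)
light = rng.normal(22000.0, 500.0, 10000)   # made-up light frame ADUs
dark = rng.normal(800.0, 50.0, 10000)       # made-up master dark ADUs

# mean(light - k*dark) = mean(light) - k*mean(dark),
# so a larger optimisation scale k darkens the whole frame,
# shifting the histogram peak to the left.
means = {k: (light - k * dark).mean() for k in (0.0, 3.0, 10.0)}
```

So the darkening itself is just arithmetic; whether the extra subtraction is removing real dark signal or eating into sky signal is the actual question.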

This may be coincidental, but a quick trial of processing M65 with the optimisation at 10.0 seemed to give better color calibration (using the ColorCalibration tool).

Certainly the warning that the ''optimisation threshold may be set too high'' when I'm using the default of 3.0 shouldn't be an issue - but if I slide down to 2.0, is that really benefiting me?

Again, these are all observations, I'm open to logical conclusions that suggest what's best.

Thanks

David





 
