Author Topic: Second dumb question-about LRBG blending  (Read 51168 times)

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: Second dumb question-about LRBG blending
« Reply #30 on: 2010 January 20 16:01:03 »
No laughing here either Harry,

Very nice - I will whip it round and match it to my image, and then do a 'side-by-side' collage with mine. With a widescreen monitor I find that is the best way of getting a clear 'visual' comparison - and I will also 'pad out' the overall image so that it can sit 'full-screen' on my 28" Hanns.G monitor at 1920x1200.

I (hopefully) now have Lu data on M1 from Monday night. I just need to calibrate that lot (which means sifting through 294 x 300s Dark Frames, to find the ones that go best with the 52 x 300s Lights !!), and then I will try an LuRGB combination beside my existing (too 'purplish') HaRGB version (which I would like to revisit anyway, to see if I can control the 'magenta runaway' that happened).

In fact, I should also keep the intermediate RGB-only image (which I do anyway), and 'max out' what can be achieved (sensibly) from the RGB-only data.

Work, work, work - and I STILL haven't edited even one more second of that damned video tutorial (I wish I had NEVER mentioned it here on the Forum, then I could conceivably get away with NOT finishing it  ::))

(Oops, here come the VidTut Police) :police:  :police:  :police:
Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: Second dumb question-about LRBG blending
« Reply #31 on: 2010 January 20 16:18:41 »
Dave,

Sorry, I haven't forgotten about you  ::)

My suggestion - based on how I tackle things now (and kind of supported by Juan's suggestions as well) - is this:

Bring in RGB (either as R+G+B or as RGBosc) and then sort THAT image out first (STF, DBE, BackgroundNeutralization, ColorCalibration, STF>Histo, STF-off, ACDNR (chrominance primarily), HDRW (limited), GREYCstoration (limited), Save) - all of this to MINIMISE chrominance noise

Bring in Lu (this applies to Ha as well, for HaRGB or (Ha.R)GB type images - but Ha+OIII+SII images should be processed as above) and sort THAT image out (as above, but aiming for MAXIMUM detail) - and save

Then bring all the data together - splitting the tri-colour image into Rd, Gn and Bu channels (even if the image was originally Ha+OIII+SII, it will now BE an RGB image, of some form - and this is where you will get to decide what the final 'palette convention' will be - according to YOUR tastes).

LuRGB and HaRGB data could be recombined using the LRGBCombination tool - where the Lu or Ha data goes into the L-channel and the RGB (or whatever you opt for in HaOIIISII) goes into the RGB channels.

You can also Add, Average, Multiply (with or without scaling) ANY combination that takes your fancy - my Crab was based on Ha multiplied with Rd, and that result shoved back into the R channel, with G and B left untouched. I have even heard of Lu + (Ha x Rd) + (OIII x Bu) + (SII x Gn) algorithms - there is simply NO hard-and-fast rule. Basically, if YOU like the way the image has ended up, then the method you used was CORRECT (no matter what anybody else might say  8))
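To make that concrete, here is a rough numpy sketch of the kind of channel arithmetic described above - not actual PixInsight/PixelMath code, and the image names are just placeholders for registered, normalised stacks:

```python
import numpy as np

def blend_ha_into_red(ha, r, g, b, rescale=True):
    """Multiply an Ha frame into the red channel and rebuild the RGB stack.

    All inputs are 2-D float arrays in [0, 1], registered to the same
    geometry. This mirrors the 'Ha multiplied with Rd, shoved back into R'
    idea; other recipes (add, average, Lu + Ha*Rd + OIII*Bu + SII*Gn, ...)
    follow the same pattern with different arithmetic.
    """
    new_r = ha * r
    if rescale:
        # Keep the blended channel in [0, 1] so it recombines cleanly.
        new_r = (new_r - new_r.min()) / (new_r.max() - new_r.min() + 1e-12)
    return np.dstack([new_r, g, b])   # H x W x 3 colour image
```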

You just have to 'experiment' - in another five years ::) you will wonder what all the fuss was about :'(

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Second dumb question-about LRBG blending
« Reply #32 on: 2010 January 21 03:08:15 »
Hi,

Trying to resync myself with the forum, which isn't an easy task!

Quote
Specifically on using previews for LRBG combine

No problem at all to use previews; LRGBCombination is a previewable process in PixInsight.

As for the LRGB vs RGB thing, just to state my opinion clearly:

- LRGB: Good to save time. This is true as long as RGB is shot binned; when shooting unbinned L and RGB, the savings are marginal IMO.

- LRGB: Bad for quality. Assuming unbinned data, an independent L does not provide more resolution. On the contrary, it may provide less resolution, since it has been acquired through a much wider bandpass filter.

- LRGB: Problems achieving a good match between luminance and chrominance.

- LRGB: More limitations when working with linear data. LRGB combinations are usually performed in the CIE L*a*b* and CIE L*c*h* spaces, which are nonlinear. A linear LRGB combination is doable in PixInsight, though, working in the CIE XYZ space.

- RGB: Perfect match between luminance and chrominance, by nature. No worries about luminance structures without chrominance support, and vice-versa.

- RGB: A synthetic luminance has the important advantage that we can choose an arbitrary set of weights for the calculation of the luminance (with RGB working spaces in PixInsight). We can define a set of luminance weights that maximizes information representation on the luminance (understanding information as data that supports significant object structures).
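Just to illustrate the synthetic luminance idea (this is not PixInsight's implementation, and the weights below are arbitrary placeholders you would tune to the data):

```python
import numpy as np

def synthetic_luminance(rgb, weights=(0.4, 0.4, 0.2)):
    """Weighted synthetic luminance from a linear RGB image.

    rgb: H x W x 3 float array in [0, 1]. In PixInsight this role is played
    by the luminance coefficients of the RGB working space; the numbers here
    are placeholders, to be chosen so that the luminance carries as much
    structural information as possible.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the weights sum to 1
    return rgb @ w                       # per-pixel weighted sum of channels
```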

Quote
...and the merits(?) of processing individual channels (NR etc) BEFORE combining..?

If you insist on doing LRGB, you must process your L and RGB images separately (RGB is one image). The reason is that LRGBCombination works in the CIE L*a*b* and CIE L*c*h* spaces, as noted above. As these spaces are nonlinear, the inputs of LRGBCombination should be nonlinear images, that is, stretched images.

A linear LRGB combination is perfectly possible in PixInsight, however. You must perform it "manually" in the CIE XYZ space with transformations in a linear RGB working space. This is somewhat complex and has several implications. If you're interested, I can write a full description, but I'd recommend you stay away from these things for now.
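For the curious, a very rough sketch of that idea - keep the chromaticity of the linear RGB image and swap in a new luminance via CIE XYZ/xyY. It assumes a Rec.709-like working space and uses a crude global scaling instead of a proper fit:

```python
import numpy as np

# Rec.709 / linear sRGB primaries - only a stand-in for whatever RGB
# working space is actually configured.
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
XYZ2RGB = np.linalg.inv(RGB2XYZ)

def linear_lrgb(lum, rgb):
    """Linear LRGB sketch: keep the (x, y) chromaticity of the linear RGB
    image and replace the luminance Y with a separately acquired L frame.
    lum is H x W, rgb is H x W x 3, both linear and in [0, 1].
    """
    xyz = rgb @ RGB2XYZ.T
    total = xyz.sum(axis=-1) + 1e-12
    x = xyz[..., 0] / total
    y = xyz[..., 1] / total
    # Crude global scaling so the new luminance sits at a comparable level;
    # a real workflow would match the two images (e.g. with LinearFit) first.
    scale = np.median(xyz[..., 1]) / (np.median(lum) + 1e-12)
    Y = lum * scale
    X = x * Y / (y + 1e-12)
    Z = (1.0 - x - y) * Y / (y + 1e-12)
    out = np.stack([X, Y, Z], axis=-1) @ XYZ2RGB.T
    return np.clip(out, 0.0, 1.0)
```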

If you shoot RGB (welcome!), then there is no need to work on separate RGB images. Join them into a single RGB image as soon as possible. Quite the contrary: you can easily complicate things by working on each channel separately.

Keep this in mind: an RGB image is one image, not three images together. An RGB pixel is a vector with three components, that is, a single mathematical object.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Second dumb question-about LRBG blending
« Reply #33 on: 2010 January 21 03:19:44 »
Quote
If I am going to apply ACDNR (for example) to an image...does it matter whether I do a histo stretch on it FIRST...?

Yes, it matters. There are processes that must be applied to nonlinear data (stretched data). In particular, ACDNR is one of them. You must apply ACDNR after you have stretched the image with HistogramTransformation. Other notable examples of processes requiring nonlinear data are HDRWaveletTransform and GREYCstoration.

Other processes only make sense with linear data. The best example is Deconvolution. ATrousWaveletTransform is neutral, but it can perform extremely well with linear data for edge enhancement and noise reduction. UnsharpMask, for example, performs surprisingly well with linear images, although you can apply it to nonlinear images as well.
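In case the linear/nonlinear distinction is unclear: "stretched" simply means the pixel values have been remapped by a nonlinear curve such as a midtones transfer function. A minimal sketch of such a curve (just the standard MTF formula, not the HistogramTransformation tool itself):

```python
import numpy as np

def mtf(x, m=0.25):
    """Midtones transfer function: MTF(0)=0, MTF(m)=0.5, MTF(1)=1.

    Applying a curve like this to a linear image is what makes it
    'nonlinear' (stretched); ACDNR, HDRWaveletTransform and GREYCstoration
    belong after this step, Deconvolution before it.
    """
    x = np.asarray(x, dtype=float)
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# e.g. mtf(np.array([0.0, 0.25, 1.0]), m=0.25) -> [0.0, 0.5, 1.0]
```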
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline dhalliday

  • PixInsight Old Hand
  • ****
  • Posts: 307
    • on Flickr
Re: Second dumb question-about LRBG blending
« Reply #34 on: 2010 January 21 03:36:10 »
Juan
Thanks again..
OK..so I got some better data on M1 (finally clear !!!)
I will focus on RGB ONLY >>>
And I will just "chuck" the 3 stacks into RGB combine...
(after registering them...)
There are no adjustments in that module...
It's a one-shot blend...
So what are "RGB working spaces" all about...??

I am getting there...
Hope you are feeling better about your software problems...
When I become world famous... >:D  I will flog your product..


PS: How does one do Deconvolution on unstretched data... use the screen transfer function?

Dave
Dave Halliday
8" Newtonian/Vixen VC200L/ TV 101,etc etc
SSAG/EQ6
CGE Pro
SBIG ST2K,ST10XME

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Second dumb question-about LRBG blending
« Reply #35 on: 2010 January 21 03:38:11 »
Quote
After "upsampling" HDR wavelets seemed to treat it much better....or maybe I was delerious,,,

No, you weren't hallucinating :) There are instances where upsampling the image may help.

For example, wide field images may require upsampling prior to deconvolution. The reason is that in these images the PSF is usually very small. If the PSF is to be discretized into a kernel of 3x3 or 5x5 pixels for example, we may have a severe lack of accuracy. By upsampling the image 2:1, the PSF can be discretized as a 7x7 or 11x11 kernel, which allows for a more accurate representation of the PSF profile.
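A small sketch of the sampling effect, assuming a circular Gaussian PSF (a simplification; a real PSF would be measured, not assumed):

```python
import numpy as np

def gaussian_psf_kernel(fwhm_px, upsample=1):
    """Discretize a circular Gaussian PSF into an odd-sized kernel.

    fwhm_px is the PSF FWHM in original pixels; upsample is the resampling
    factor. A tiny PSF sampled on the native grid yields a 3x3 or 5x5 kernel
    with a poorly resolved profile; at upsample=2 the same PSF spans roughly
    twice as many pixels.
    """
    sigma = upsample * fwhm_px / 2.3548          # FWHM -> sigma, in resampled pixels
    radius = max(1, int(np.ceil(3.0 * sigma)))   # cover about 3 sigma each side
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

# gaussian_psf_kernel(1.5) -> 5x5; gaussian_psf_kernel(1.5, upsample=2) -> 9x9
```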

With multiscale algorithms, such as HDRWaveletTransform or ATrousWaveletTransform, upsampling can facilitate isolation of image structures into wavelet layers. This happens with high-resolution images, where the first two or three layers can be too "compressed" in their support of structures. By upsampling the image 2:1, the original wavelet layers move one step higher, as a sort of "zoom" applied to the multiscale representation.
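A toy decomposition illustrating the effect (plain Gaussian smoothing, not the actual ATrousWaveletTransform implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def multiscale_layers(img, n_layers=4):
    """Toy multiscale decomposition: each layer holds structures at scales
    of roughly 1, 2, 4, ... pixels, plus a final large-scale residual.
    """
    layers, current = [], np.asarray(img, dtype=float)
    for j in range(n_layers):
        smoothed = gaussian_filter(current, sigma=2.0 ** j)
        layers.append(current - smoothed)   # detail at this scale
        current = smoothed
    layers.append(current)                  # residual
    return layers

# Upsampling 2:1 before decomposing shifts structures one layer up:
# multiscale_layers(zoom(img, 2, order=3)) puts what was in layer 1
# roughly into layer 2, giving the smallest scales more room to work.
```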
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline dhalliday

  • PixInsight Old Hand
  • ****
  • Posts: 307
    • on Flickr
Re: Second dumb question-about LRBG blending
« Reply #36 on: 2010 January 21 03:40:48 »
I see I have been "promoted"...
How many box tops does it take to get to "Jedi"... >:D

(tries to sound not secretly pleased by this...!)

Dave
Dave Halliday
8" Newtonian/Vixen VC200L/ TV 101,etc etc
SSAG/EQ6
CGE Pro
SBIG ST2K,ST10XME

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: Second dumb question-about LRBG blending
« Reply #37 on: 2010 January 21 03:43:10 »
Thanks for your explanation re. RGB vs LRGB.

Once again, in just a few short paragraphs, you seem to have managed to 'boil off' all the búllsh!t that seems to pervade any discussion of this topic, when associated with PhotoShop, et al.

What you said just seems to 'make sense'. When I think back to my early workflow - using LE, and reading stuff designed to work for PS - I would bring in a perfectly good RGBosc image, and immediately break it into L+R+G+B images - and only THEN would I start hacking my way through the individual data. The outcome was nearly always a 'mess'  :'(

So, then I upgraded from a DSI-IC to a DSI-IIPro with individual filters. And that experience was even WORSE  :yell:

This was mostly because I almost NEVER managed to get sufficient data for all four filters, and even if I did, I could never work out why I invariably lost all my colour saturation when I recombined (bearing in mind, of course, that all the PS tutorials advised that the carefully obtained R+G+B data, once calibrated and processed, should then be BLURRED (???) before layering on the calibrated and processed L data).

Looking back on this - what a load of time and photons I was wasting :'(

Now (after spending a lot of time working with JUST the RGBosc data from my DSI-IIC) I am back to using my -IIPro (pending a planned upgrade to something with a 2Kx2K resolution, maybe later this year) and I am realising EXACTLY what you summarised, Juan.

Yes, I do (sometimes) want to capture the Lu data as well, and the same goes for the Ha data too. But at least I have experienced what CAN be achieved, and I now have your useful tips to help me realise what can be done with more suitable knowledge in the first place.

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: Second dumb question-about LRBG blending
« Reply #38 on: 2010 January 21 03:47:30 »
Quote
No, you weren't hallucinating :) There are instances where upsampling the image may help

My M1 Crab image, shown earlier on the thread, was upsampled immediately after DBE (or, certainly at that point in the fundamental workflow).

I only downsampled again when I was ready to finish with minor Histo, Curve, SCNR and ColourSaturation tweaks. I seem to also remember tweaking the final framing at around the same time (I need to revisit my workflow diagrams for the exact details - as I said, I may turn the workflow into a VidTut when time allows).

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline dhalliday

  • PixInsight Old Hand
  • ****
  • Posts: 307
    • on Flickr
Re: Second dumb question-about LRBG blending
« Reply #39 on: 2010 January 21 03:49:07 »
"Boil off the bullshit"
THAT is what I meant by your style, Juan...
You are "succinct"..what do they call that in Spanish?

Dave
Dave Halliday
8" Newtonian/Vixen VC200L/ TV 101,etc etc
SSAG/EQ6
CGE Pro
SBIG ST2K,ST10XME

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Second dumb question-about LRBG blending
« Reply #40 on: 2010 January 21 04:07:15 »
Hey Jack,

Quote
...fairly fixed ideas that have been handed down thru the ages.  New ideas, especially ones not proffered by the old boys, are slow if ever to take hold.  Not so here, so carry on!

No old boys here, we are all very very young, especially me! ;D O0

"Fixed ideas" is a concept that does not exist in PixInsight. Or at least I am not aware of any. If I find a fixed idea, be sure I'll unfix it quickly and efficiently ;)

More seriously, I think many of those fixed ideas exist due to limitations in the software tools used. A good example of this problem is data linearity. If a ubiquitous imaging application is unable to handle linear data, then it imposes severe restrictions that in turn cause a lack of perspective in its users, who get accustomed to working in a degraded environment. With the passage of time, the degraded environment tends to be seen as the only valid way, and the people who have grown up with it develop a strong resistance to change.

Then a new software application comes onto the scene, one that some people call "Pixel ... in ... sight?", or something rather odd like that. It comes from Spain, a small and (supposedly) irrelevant country in Europe. Then those guys start speaking of unknown things such as wavelets, multiscale processing, RGB working spaces, object-oriented interfaces, and a plethora of new and strange things that are quite different from layers, hand-painted masks, and the like. Naturally, we are seen just as a problem, not as a way to improve things. This is changing with time, and it has been a long road, as you know well. As I often (too often?) say, we're just at the beginning(TM), which is funny  8)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: Second dumb question-about LRBG blending
« Reply #41 on: 2010 January 21 08:59:35 »
I suggest you submit an article on the pros and cons of RGB vs LRGB (or other subjects) to Sky and Telescope. It is an interesting subject. It could help get PI some marketing exposure.

Max

PS: Do we have to wait until 2.0 for a calibration module?

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: Second dumb question-about LRBG blending
« Reply #42 on: 2010 January 21 09:18:22 »
Quote
PS: Do we have to wait until 2.0 for a calibration module?

NO. That module now has maximum priority. I'm doing some experimentation with superflats, but the math is almost clear to me for the whole calibration process... The calibration module will correct for overscan, bias, darks, dark-flats, flats and superflats. It will also rescale the darks, even in temperature-regulated scenarios. We have a fairly simple and powerful rescaling method.  O0
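For context, the generic calibration arithmetic looks roughly like the sketch below; the dark-scaling factor shown is only a simple least-squares stand-in, not the rescaling method mentioned above:

```python
import numpy as np

def calibrate_frame(light, bias, dark, flat, scale_dark=True):
    """Generic frame calibration: subtract bias, subtract an (optionally
    rescaled) dark, divide by a normalised flat. The real module also
    handles overscan, dark-flats and superflats.
    """
    light = light.astype(float) - bias
    dark = dark.astype(float) - bias
    k = 1.0
    if scale_dark:
        # Simple least-squares scaling of the dark against the light;
        # a stand-in only, not the method used by the calibration module.
        k = float(np.sum(light * dark) / (np.sum(dark * dark) + 1e-12))
    light = light - k * dark
    flat = flat.astype(float) - bias
    flat = flat / np.mean(flat)              # normalise flat to unit mean
    return light / (flat + 1e-12)
```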


Vicent.

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1458
    • http://www.harrysastroshed.com
Re: Second dumb question-about LRBG blending
« Reply #43 on: 2010 January 21 09:42:31 »
Hi All

While I agree with all that Juan has said (how could I disagree  ::) ), there are reasons why I like using LRGB which are not always technical:

1) I can capture RGB with my OSC at the same scale as I capture the Lum, or capture the colour with a shorter focal-length scope (faster)

2) I can spend my valuable, limited imaging time on the depth of an image (Lum)

3) Because I work during the day I am rarely able to stay up with my scope, so I set up, say, my OSC and leave it running for a night to get the RGB, and then, if I get the chance, do the same for the Lum (or over multiple nights)

Using the above method I am able to produce reasonable work, which I know may not be the best in the world, but I am trying to get the best out of the combination of life / money / skill and location. (I did say skill, but I did not say how much.)

I think if I lived under very good skies I would probably just do RGB.

Regards Harry
Harry Page

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: Second dumb question-about LRBG blending
« Reply #44 on: 2010 January 21 10:55:19 »
One size does not need to fit all.
I think the big picture is really about the trade-offs.
Both techniques can produce dazzling results.
The key is understanding what each technique brings to the table in terms of acquisition time as well as processing. No need to be dogmatic.


Max