Author Topic: Second dumb question-about LRBG blending  (Read 31857 times)

Offline dhalliday

  • PixInsight Old Hand
  • ****
  • Posts: 307
    • View Profile
    • on Flickr
Second dumb question-about LRBG blending
« on: 2010 January 15 01:46:16 »
OK....RGB/LRGB...
I believe I can play around with this stuff and see what works best,etc....although there seems to be a HECK of a lot of difference depending on where you put all those sliders... >:D
Here is where I have stumbled to;
http://www.flickr.com/photos/daveh56/4275795438/sizes/l/

SO...my SECOND question is...
For each individual file (stack of calibrated frames)....
HOW MUCH PROCESSING SHOULD I DO BEFORE COMBINING THEM ALL..??
Apply NR? GreyCstoration?....some..? a lot ??
Histo stretch?....
ABE..? (this seems important, as my flats all vary a bit in effectiveness...)
Deconvolution...?...HDR??
Most important, and basic: it seems that I cannot always get the 4 sets of data to have similar histograms...ie some are darker, and some have more detail. For example, M1 seems fainter in green, brighter in red...
Is there a way to "match" all 4  data sets..? ie BEFORE hitting the "blend" button ??
Do I just try to get 4 images with the same sky intensity...?

I suspect there is a trick to this...there always is...I am looking lame again.. :-[

By the way I used "Star alignment" to register all 4 subimages...it worked like magic...this is OK to use ??

Sorry...I am slowly realizing how simple the good old DSLR days were...
For some reason my Lum data looks WORSE than the color...the M1 is fainter,the background noisier....is this somehow related to the fact that there must be more sky gradient (noise?) in this channel..?
Or do I just have a bad stack/bad flat maybe...?
Should I be CAPTURING the L data differently...?shorter subs..?

thanks again

Dave
PS I went back,...and looked at files aligned with "star alignment"...(vs dynamic alignment..)

They have been altered somehow...the histogram(data set) is changed for the worse somehow...what is up with that..??? :surprised:
« Last Edit: 2010 January 15 03:16:23 by dhalliday »
Dave Halliday
8" Newtonian/Vixen VC200L/ TV 101,etc etc
SSAG/EQ6
CGE Pro
SBIG ST2K,ST10XME

Offline dhalliday

Re: Second dumb question-about LRBG blending
« Reply #1 on: 2010 January 15 12:06:49 »
Just noticed (!!) Juan has a tutorial on this... >:D
Any thoughts would still be appreciated by a rookie..

Dave

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 5811
    • View Profile
    • http://pixinsight.com/
Re: Second dumb question-about LRBG blending
« Reply #2 on: 2010 January 15 13:32:48 »
Hi Dave,

I'll try to answer some of your questions.

Quote
RBG/LRBG

It depends:

- If quality is more important than saving time: RGB, unbinned.
- If saving time is more important than quality: LRGB, L unbinned, RGB binned 2x2

But please don't shoot unbinned LRGB. Each time you add more luminance you lose chrominance. So acquiring unbinned LRGB data is a waste of both time and quality.

Quote
there seems to be a HECK of a lot of difference depending on where you put all those sliders

Yeah, welcome to the PI world  8)

Quote
For each individual file (stack of calibrated frames)....
HOW MUCH PROCESSING SHOULD I DO BEFORE COMBINING THEM ALL..??

I assume you're talking about the R, G, B (and possibly L) stacks. Combine them into an RGB image as soon as you get them. Don't change them before combining, because that will only make things a lot harder.
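For intuition, combining the three stacks is conceptually just stacking three registered grayscale arrays into one three-channel image. A toy NumPy sketch (illustrative only — in PixInsight you would use the ChannelCombination tool; the function name and the [0, 1] value range are my assumptions):

```python
import numpy as np

def combine_rgb(r, g, b):
    """Stack three calibrated, registered grayscale stacks (2-D float
    arrays, values in [0, 1]) into a single RGB image."""
    if not (r.shape == g.shape == b.shape):
        raise ValueError("channel stacks must share the same geometry")
    return np.stack([r, g, b], axis=-1)  # shape (height, width, 3)

# Toy example: three flat 2x2 channels become one 2x2 RGB image.
r = np.full((2, 2), 0.30)
g = np.full((2, 2), 0.20)
b = np.full((2, 2), 0.10)
rgb = combine_rgb(r, g, b)
print(rgb.shape)  # (2, 2, 3)
```

The registration requirement is the whole point of running StarAlignment first: the stack only makes sense if pixel (x, y) refers to the same sky position in all three channels.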

Quote
Apply NR? GreyCstoration?....some..? a lot ??
Histo stretch?....
ABE..? (this seems important,as my flats all vary a bit in effectivness...)
Deconvolution...?...HDR??

Stop, stop... Keep it simple, please! :police: Deconvolution should only be used with high-SNR linear data; otherwise stay well away from it. In general, I'd recommend a relatively simple sequence such as:

- Good calibration. This is very important. Lots of problems in the postprocessing stages (some of which are in fact impossible to fix completely) can be avoided if the images are well calibrated. In particular, flat fielding is essential, so investing more effort in improving your flats is always a good idea.

- Good gradient removal. This should be your first step. Apply DBE (or ABE, at your option and/or if applicable) to linear data.

- BackgroundNeutralization + ColorCalibration

- Deconvolution and/or wavelets would come here, but only if you have very good data in terms of SNR.

- HistogramTransformation to apply the first nonlinear transformation.

- HDRWaveletTransform, perhaps. This depends on the image and on what you want to achieve. Not all images benefit from HDRWT, and not all of them in the same way.

- Noise reduction. Use ACDNR at this point. GREYCstoration is an anisotropic algorithm, and isotropy is a very important property in astronomical images.

- CurvesTransformation. I personally think that curving should be reduced to a minimum.

- ColorSaturation, which can also be implemented with CurvesTransformation.

- Final noise reduction with ACDNR and/or GREYCstoration.
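As a concrete anchor for the HistogramTransformation step in the list above: the nonlinear stretch is driven by a midtones transfer function, which maps a chosen midtones balance m to mid-gray while pinning black and white in place. A minimal NumPy sketch (the wrapper name and example values are mine; only the formula itself is the standard MTF):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps input x in [0, 1] so that
    mtf(0) = 0, mtf(m) = 0.5 and mtf(1) = 1."""
    x = np.asarray(x, dtype=float)
    return np.where(
        (x > 0) & (x < 1),
        ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m),
        x,  # black and white points pass through unchanged
    )

# A midtones balance below 0.5 brightens the image (a typical stretch):
print(float(mtf(0.25, 0.25)))  # 0.5: the chosen midtone lands on mid-gray
```

Applying this to a linear image is exactly the "first nonlinear transformation" in the sequence: everything before it (DBE, color calibration, deconvolution) expects linear data, everything after it works on the stretched image.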

Quote
By the way I used "Star alignment" to register all 4 subimages...it worked like magic...this is OK to use ??

Oh yes, it is absolutely OK. StarAlignment is one of our "jewels of the crown" :)

Quote
PS I went back,...and looked at files aligned with "star alignment"...(vs dynamic alignment..)
They have been altered somehow...the histogram(data set) is changed for the worse somehow...what is up with that..

Strange indeed. If you upload some of these images for us to see, then we can try to diagnose what's happening.

Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1280
  • We have cookies? Where ?
    • View Profile
Re: Second dumb question-about LRBG blending
« Reply #3 on: 2010 January 15 14:09:33 »
Quote
Good calibration. This is very important. Lots of problems in the postprocessing stages (which are in fact impossible to fix completely) can be avoided if the images are well calibrated. In particular, flat fielding is essential, so investing more efforts to improve them is always a good idea.

Thanks for stating that Juan,

It is a point that I often try to emphasise, but it is also a point that I find tends to be 'ignored' more often than not.

Perhaps it is the hope of astroimagers that PI can 'make good' all the errors that they are not taking care of at acquisition time - and, of course, they believe that they are correct - that PI is a 'magic paintbrush' that will cure all evils  ::)

I am also often asked why I never publish any of my images, yet - at the same time - I seem to constantly spout out miles of (unsubstantiated) 'drivel' about HOW to either acquire, or process, images. The point is, I am not READY to publish any images - simply because I am still a BEGINNER when it comes to the image acquisition phase. I am still learning, and that is after FIVE years, and HUNDREDS of gigabytes of acquired images.

Once I 'perfect' my image acquisition and calibration phases, then I can start all over again 'learning' how to post-process them.

Fortunately, I am not in any rush - and have no desire to receive 'back-slapping' compliments of images that are really only 'mediocre'.

In the meantime, I am going to re-calibrate and realign my most recently acquired data (M1, from back in Nov '09), now that I have a far greater understanding of Image Integration. (And, I have also just completed design plans for a FlatField lightbox for my 8" LX90 - the lack of which, to date, has rendered all of my imaging using that scope utterly 'mediocre').

From bitter experience, I now have no doubt that FlatFielding (properly implemented) is at least as essential as purchasing an imaging camera in the first place !!

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 950
    • View Profile
    • http://www.astrofoto.es/
Re: Second dumb question-about LRBG blending
« Reply #4 on: 2010 January 15 15:36:37 »
Quote from: Niall Saunders
From bitter experience, I have now no doubt that FlatFielding (properly implemented) is at least as essential as purchasing an imaging camera in the first place !!

Hi Niall,

you're correct, excellent image calibration is very important. But don't wait until you have a perfectly calibrated dataset to start post-processing. If you only ever process uncalibrated data, you won't understand how important good calibration is; i.e., you won't know how deep your data can go without good flatfields.

But don't go to the opposite extreme either: you cannot know what good calibration is if you're not able to fully exploit your data in post-processing.

It's absolutely important to see and feel acquisition and post-processing as a whole. Make a good calibration to go further in the post-processing; have good post-processing techniques to see where your calibration fails.

In the coming months I will publish some articles about observational techniques I have been developing to better exploit our data.


Regards,
Vicent.

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1431
    • View Profile
    • http://www.harrysastroshed.com
Re: Second dumb question-about LRBG blending
« Reply #5 on: 2010 January 15 16:56:16 »
Hi


Come on Niall - post some of your work, I could do with cheering up  ;D



Harry
Harry Page

Offline dhalliday

Re: Second dumb question-about LRBG blending
« Reply #6 on: 2010 January 15 21:12:30 »
Juan
Thanks a lot...that was VERY helpful...!!
And you have a soothing/reassuring tone... O:)
I always enjoy your style....

So...is it fair to say that for "quality" seeking types..the whole L/binned RGB approach is WRONG...?
Funny..that is not what one "sees" out there...Why DO they have that clear filter on my camera...
It makes sense though, that the L data would have a worse SNR...
I always got more data with a UHC filter on my DSLR...and I guess the RGB filters are reducing the sky noise...???

So I am going to go back to the (small amount) of M1 data I have and focus on the RGB...
Do I THEN extract an "L" channel from that...??
Or do I need to bother with L at all..??

Is the extracted "L" data/channel (I am speculating here) the data that one spends MORE time processing..?
(As opposed to ACQUIRED "L" data...?)
I am not familiar with BackgroundNeutralization OR ColorCalibration...
I will have a look..and am parsing the LRGB tutorial as well...
Again...just to be clear...
Do I apply this "basic" processing to each stack (RGB) and then combine them...or just calibrate, stack and combine whatever comes out of DSS..??

If I continue to find issues with "star alignment vs Dynamic alignment I will post some examples...

again thanks

Dave
PS Niall...you are not shy like me are you..?

 

Offline Harry page

Re: Second dumb question-about LRBG blending
« Reply #7 on: 2010 January 15 21:21:01 »
Hi Dave

Can you tell us what camera you are using?


Harry

Offline dhalliday

Re: Second dumb question-about LRBG blending
« Reply #8 on: 2010 January 15 22:56:24 »
Harry
It's an SBIG ST2000XM...

Listen...I went back and took JUST the RGB files, and combined them using the "ChannelCombination" method...
This does not give a great result...
THEN I went back and tried to follow the tutorial;
http://pixinsight.com/tutorials/STD/LRGB/en.html

This just confused the HECK >:D  out of me...because the Misti data INCLUDES an L channel...
All I have at this point is a (bad) RGB combo called "image 25"..or whatever...
THEN I see him talking about ONLY ticking the L channel in LRGBCombination....
But I thought I was being told to forget (all about) the L channel...What am I using for "L"..??
Why not use my RGB channels in LRGB combine......................???????????????????

Getting a wee migraine here... it's starting to sound not straightforward at all..... >:(
PS here is JUST the RGB data...which looks about the same as the LRGB...
http://www.flickr.com/photos/daveh56/4277015687/sizes/o/
Of course this is only 20 exposures (60 seconds each) of R/G/B...maybe the pixels are just not there...
(NOW he tells them...)

Dave
« Last Edit: 2010 January 15 23:30:16 by dhalliday »

Offline Niall Saunders

Re: Second dumb question-about LRBG blending
« Reply #9 on: 2010 January 16 00:30:58 »
Hi Vicent,

I do agree with everything you say - in fact, by limiting myself to the DSI-IIC and DSI-IIPro, my images have always been a challenge. A challenge that forced me to learn as much as I could about acquisition techniques and temperature matching (the DSI-II is not a cooled camera, but it does record the CCD temperature of each exposure in the FITS header, so that helps). It has forced me to understand WHY Flat frames are critical, and also to establish a method for determining that my Flats themselves were as good as I could get them.

I have learned how to Focus. I have learned how to Polar Align. I have learned how to AutoGuide. I even know now that Differential Flexure, for me, limits my maximum exposures to 8 minutes (if I intend to remain inside a 1/2 pixel 'flex zone' when I am imaging on my Moonfish ED80).

I have learned how to deBayer. I have learned how to work with colour filters. I have learned about noise and signal to noise ratios. I have even relearned what little I knew about Statistics.

I certainly now feel that I know HOW to acquire 'good' data, and I am confident in being able to work my way through the calibration process, and even the rudiments of PI are becoming more 'second nature' to me - even to the point where I now have the confidence to pass some of what I have learned on to those following behind.

What I simply have NOT got is suitable data to make a 'nice' image - and that is just down to local weather, and rudimentary equipment. It is very difficult to get enough data on a single target when there is really not much more than ONE opportunity to image that object each year. It really is that simple - I get, on average, ONE suitable evening per month - and, sometimes, not even that.

So, I get a LOT of time to 'play' with my mediocre data - before consigning it to some dusty hard-drive in the corner of the room once a new session presents itself - with a new target, and new-found knowledge to capture it with !!

But, learning - and sharing that new-found knowledge - is actually just as pleasurable, for me, as publishing an image to critical acclaim.

However, I intend to try to put together a video of MY workflow, on my recent (Nov 09) multi-session imaging attempt on M1. In that case I will be looking more for critical comment on my workflow than on my final image.

Because that is where PixInsight is failing. Everyone KNOWS how powerful PixInsight is, in knowledgeable hands, but folks with far better raw data than I have are struggling to grasp basic concepts, and if we can't get them swiftly over the first hurdles, PixInsight may stagnate simply because the user base remains too small.

Which would be a shame, given the enormous potential of PI.

So, yes, I will get an image onto your screens, but I would rather that you laughed at HOW I made any silly mistakes during the post-processing (or even during the creation of the video, which I intend to hardly edit - it will be published 'warts and all' !!) than have you laugh at my wax-crayon, impressionist interpretation of an otherwise APOD masterpiece, Harry  :laugh: :laugh:

Cheers,

Offline Juan Conejero

Re: Second dumb question-about LRBG blending
« Reply #10 on: 2010 January 16 01:21:14 »
Hi Dave,

Quote
I always enjoy your style

Thanks! That's funny, as I wasn't aware that I have a style when I write in English. Nice to know! 8)

Quote
So...is it fair to say that for "quality" seeking types..The whole L/binned RBG is WRONG...?

In my humble opinion, yes. Two important facts must be taken into account with respect to LRGB:

- Each time you throw more luminance at your image, it loses chromatic content. As a result, your chrominance will contain more noise and less signal (more uncertainty), you'll have more difficulty achieving good color saturation, etc. This forces you to acquire more chrominance to provide support for the excess of luminance.

- OK, so we need more RGB data to compensate for the overabundance of L. By acquiring binned RGB, we can get the required chrominance in less integration time, due to the increase in sensitivity (each 2x2 binned superpixel provides a 4:1 increase in SNR). However, this is achieved at the expense of reduced spatial resolution. Note also that binning reduces readout noise, but not dark current noise. It is true that our vision system detects most of the image detail through the luminance (that's the only reason why the LRGB trick can be useful), but it is also true that small-scale luminance structures without good chrominance support tend to be desaturated (rendered as grayscale).

In short, LRGB can be a good idea to save time because it allows you to acquire the whole chrominance with increased SNR. However, it is not true that an LRGB image is better than the equivalent (in terms of acquired signal) RGB image.

What is clearly an error, in my opinion, is to acquire unbinned LRGB data. By doing so you obviously cannot save time; and if you do save time, then you have acquired an excess of luminance, and hence your image lacks chrominance.

Of course, this is only my opinion.
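The 4:1 figure above is the read-noise-limited ideal: an on-chip 2x2 superpixel collects four pixels' worth of signal but pays the readout noise only once. A back-of-the-envelope sketch (a shot-noise + read-noise model only; it deliberately ignores dark current and sky background, and all the numbers are illustrative):

```python
import math

def snr(signal_e, read_noise_e, n_pixels=1, n_readouts=1):
    """SNR of a (super)pixel under a shot-noise + read-noise model,
    everything in electrons."""
    total_signal = signal_e * n_pixels
    noise = math.sqrt(total_signal + n_readouts * read_noise_e ** 2)
    return total_signal / noise

flux = 1.0   # electrons per native pixel (a faint target)
rn = 10.0    # read noise in electrons

unbinned = snr(flux, rn)                          # 1 pixel, 1 readout
summed = snr(flux, rn, n_pixels=4, n_readouts=4)  # 2x2 sum in software
binned = snr(flux, rn, n_pixels=4, n_readouts=1)  # 2x2 on-chip binning

print(binned / unbinned)  # approaches 4 as read noise dominates
print(summed / unbinned)  # only ~2: a software sum still pays 4 readouts
```

In the shot-noise-limited regime (bright signal) the binning gain drops toward 2x, which is why on-chip binning pays off mainly for faint, read-noise-dominated chrominance data.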

Quote
THEN I went back and tried to follow the tutorial;
http://pixinsight.com/tutorials/STD/LRGB/en.html

I apologize for the confusion. I wrote that tutorial in 2006, if I remember correctly. Many important things have changed since then. In the new website (which I am finishing now), that tutorial will be tagged as obsolete, if not completely removed. Today I would process the same data in a completely different way —in a way that is much more respectful of the data, and also much more efficient.

So please use that tutorial just to understand the practical usage of several tools, but don't follow its general "style".

Quote
What am I using for "L"..??

If you acquire LRGB data, then you already have a separate L image. You can process it as an independent grayscale image, as explained in the tutorial. Note also that in PixInsight you can process the luminance of an RGB image (in this case, after performing the LRGB combination) independently of the chrominance, without needing to have both components as individual images. For example, the ATrousWaveletTransform tool provides several options to process luminance only, chrominance only, or luminance+chrominance. Other tools have "To Luminance" check boxes that can be used to restrict processing to the luminance.

If you acquire RGB data, then your luminance is synthetic. You can either extract it as an independent image (with the ChannelExtraction process, for example), or process it using "To Luminance" options, as above.

In both the RGB and LRGB cases, by not extracting the luminance you get an important bonus: you can apply a process to the luminance and see immediately its true effect on the whole RGB image. The price of this is a small lack in performance: each time you apply a process the luminance must be synthesized, processed, then reinserted in the RGB image. But the benefits clearly surpass the extra computational work.
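The synthesize → process → reinsert round trip described above can be sketched roughly like this (Rec. 709 luminance coefficients stand in for whatever the configured RGB working space actually defines, and reinsertion by per-pixel rescaling is a deliberately simplified stand-in for what the real tools do; the function name is mine):

```python
import numpy as np

# Illustrative luminance coefficients (Rec. 709); PixInsight takes the
# real ones from the configured RGB working space.
Y_COEFFS = np.array([0.2126, 0.7152, 0.0722])

def process_luminance(rgb, func):
    """Synthesize the luminance of a linear RGB image, run `func` on it,
    and reinsert the result by rescaling each pixel's channels, so the
    R:G:B ratios (the chromaticity) are left untouched."""
    y = rgb @ Y_COEFFS                      # synthesize Y per pixel
    y_new = func(y)                         # process the luminance only
    gain = np.where(y > 0, y_new / y, 0.0)  # per-pixel luminance gain
    return rgb * gain[..., None]            # reinsert into the RGB image

# Example: brighten the luminance by 20% without shifting the color.
rgb = np.full((2, 2, 3), [0.2, 0.4, 0.1])
out = process_luminance(rgb, lambda y: 1.2 * y)
```

The per-pixel rescaling is what lets you see the true effect on the whole RGB image immediately: only the brightness of each pixel changes, never its hue.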

A synthetic luminance (RGB) has more implications. One of them is that you no longer have to concern yourself with good adaptation between luminance and chrominance (which is a serious challenge with LRGB images): the adaptation is perfect by nature.

Another important implication is that extra care must be taken to ensure that a linear luminance will always be synthesized while the RGB image is still linear. This is relevant, for example, if you process data acquired as RGB with tools such as Deconvolution, ATrousWaveletTransform or UnsharpMask:

- A linear RGB working space (RGBWS) must always be used to process linear RGB images. A linear RGBWS has a value of gamma equal to one. The RGBWorkingSpace tool can be used to set a linear RGBWS.

- The Y component of the CIE XYZ space must be used as the linear luminance, instead of the L* component of CIE L*a*b*, as usual. This is because Y is a linear function of RGB when gamma=1, while L* is always nonlinear. On Deconvolution, you must check both the "Luminance" and "Linear" check boxes. The same is true for ATrousWaveletTransform, where you must select the "Luminance, linear" target mode.
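The linearity point in the list above is easy to verify numerically: CIE Y is a weighted sum of linear RGB, so it scales with exposure, while CIE L* applies a cube-root-like compression and does not. A quick check using the standard CIE 1976 formula (white point normalized to Y = 1; the function name is mine):

```python
import numpy as np

def cie_L_star(Y):
    """CIE L* lightness from relative luminance Y in [0, 1],
    white point at Y = 1 (standard CIE 1976 definition)."""
    Y = np.asarray(Y, dtype=float)
    f = np.where(Y > (6 / 29) ** 3,
                 np.cbrt(Y),
                 Y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0

# Doubling the exposure doubles Y exactly (Y is linear in RGB) ...
Y1, Y2 = 0.10, 0.20
print(Y2 / Y1)  # 2.0
# ... but L* grows by much less than 2x, because L* is nonlinear:
print(float(cie_L_star(Y2) / cie_L_star(Y1)))
```

This is exactly why linear processes such as deconvolution must target Y (the "Linear" option) rather than L*: running a PSF model against a nonlinearly compressed signal invalidates its assumptions.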

Quote
Getting a wee migraine here... its starting to sound not straightforward at all.....

Come on, it may not be very straightforward (and it isn't, I think), but isn't it fun? O0
« Last Edit: 2010 January 16 01:26:29 by Juan Conejero »

Offline vicent_peris

Re: Second dumb question-about LRBG blending
« Reply #11 on: 2010 January 16 10:40:22 »
Hello,

my experience with pure RGB imaging is so good that I won't take L frames anymore. RGB imaging requires more patience (a thing hard to find today), but the results are far better. The main disadvantage of LRGB imaging is the lack of color in the faintest parts of the objects. If you take 10 hours of L and 1 hour of each of R, G and B, your L will go far deeper. So there will be a lot of objects that are pure noise pixels in the RGB images. An example will clarify this...

This is a photo we're processing in the DSA right now. It's M74 with the 1.23 meter scope at CAHA. It has a total exposure of 19 hours through Johnson BVR filters. This is a VERY hasty, 30 min processing (sorry!):

http://www.astrofoto.es/M74.jpg

See how each object has its color: from the galaxy core to the farthest galaxies.

And here is a photo of the same object, taken by Capella Observatory at Skinakas Observatory. It's more than 4 hours of exposure through RGB filters with a 60 cm scope:

http://www.capella-observatory.com/images/Galaxies/M74Big.jpg

Despite the lesser depth of the second photo, it's clear that the RGB data doesn't completely support the L data. See how, below a given illumination level, the color information is VERY weak. This is easily visible across the spiral arms. And, of course, color has almost disappeared in the tiny background objects.

My advice, in case you have:

- enough time to expose your camera to light (weather permitting?)
- enough patience

is to go for pure RGB. Do you want L? Make a B&W photo.  >:D Or, if you do LRGB, not too much L please...


Regards,
Vicent.


Offline vicent_peris

Re: Second dumb question-about LRBG blending
« Reply #12 on: 2010 January 16 10:47:57 »
Quote
What I simply have NOT got is suitable data to make a 'nice' image - and that is just down to local weather, and rudimentary equipment. It is very difficult to get enough data on a single target when there is really not much more than ONE opportunity to image that object each year. It really is that simple - I get, on average, ONE suitable evening per month - and, sometimes, not even that.

Niall, as a future project I would seriously consider the Taka Epsilon 180ED. At f/2.8 it's really fast, and you will reach higher signal to noise ratios quickly. It's also easy to guide (I was able to do 30 min exposures with 5 µm pixels).

But for the moment, your equipment and conditions are perfect to develop good techniques.  ;)


Regards,
Vicent.

Offline Juan Conejero

Re: Second dumb question-about LRBG blending
« Reply #13 on: 2010 January 16 11:17:07 »
Quote
the video [...] it will be published [...]

That's a contractual statement!  >:D

Offline dhalliday

Re: Second dumb question-about LRBG blending
« Reply #14 on: 2010 January 16 12:14:09 »
Folks...
Thanks again..!!
It's slowly becoming clearer...
I took Niall's suggestion and "deconstructed" a (color) JPEG (a very good one of M1, off the web)..
It was quite easy to "reconstitute"...

So I am going to stick to RGB...(I tried to ask about this in a (SBIG) group, and think I offended someone... >:D)
I have a (humble) fixed setup, and at least am not in the UK..(!!) >:D..so acquisition time is "OK"..

I am wondering,just in general this morning about a whole bunch of things...
It seems that the processing of "luminance" and "chrominance" is quite different somehow...
Possibly this is one of the strengths of Pixinsight...
But then...how many Mono CCD imagers are there out there..and how many DSLR types..?
What is the ratio and total number..?

I ask this because I continue to wonder why PI hasn't become a bit more popular...
Maybe there is a worldwide shortage of folks actually DOING this stuff...??
Or they are using DSLR's...and need less processing?
Also I am struck by the added complexity of working with (assembled) color images...
Perhaps this is just too daunting (in Pixinsight) for many...(ie the  Lum/Chrom stuff..)

Was the DSLR market your primary target..?
Is it 10 times the CCD market..?
Five ?,twenty ?...just curious....I am an MD...but have a secret fascination with business,$$ >:D

ANYWAYS...
Let me recap;
1) I should combine RGB first..
2) I can/should(?) obtain a "synthetic L" from this result
3) I then can/should(?) do an LRGB combine...?..(can you tell me why I want to do this again, as opposed to just beavering away on the RGB combine...?)..There seem to be ALL sorts of extra/yummy options on the "LRGB" blend gizmo that aren't on the RGB blend gizmo...(?)

I am going to (sadly) re-ask one of my questions...
How do I know when the RGB data is "ready to be blended"..?
Do I just do ABE on each, and try to get similar histogram positions..?
Can I do ACDNR on each first...or is this pointless..(ie just do it to the blend...)

I also must say that "star alignment", used on the 3 RGB images, gives me 3 "registered" images...but the histogram for "red" looks different from the histogram for "red registered"...
I have been using "Dynamic Alignment"..(which is kinda fun actually)...but it's clumsy, as I seem to have to turn it on and off each time to realign the next image.......??

Just one last question for now (!!)...
In ABE...for individual RGB stacks...do I ask for "normalization"..?? What is this doing? It sounds like it does not apply to a mono image..
What about with color images..?
I am off to look at Harry's video again and see if I can get things clearer....

Thanks for your patience...I hope you type faster than I do...

Dave


 