Author Topic: New User Quick Start  (Read 20529 times)

Offline zvrastil

  • PixInsight Addict
  • ***
  • Posts: 179
    • Astrophotography
Re: New User Quick Start
« Reply #15 on: 2011 February 18 11:23:17 »
It would be good if Vicent or another color calibration guru could comment, but here is how I understand and use these tools:
Pixel values in an image can be expressed by the equation P = k * ( S + B ), where P is the pixel value, S is the signal from space, B is the signal from the atmosphere (sky glow, light pollution) and k is a factor expressing the sensitivity of our camera to a particular color. Our ultimate goal is to have only S in our image. It is clear that we have to divide our pixels by k first, and then we can subtract the background B. After these two steps, pixels containing no star or object should have a neutral color. Please note that B can vary from pixel to pixel, creating gradients.
ColorCalibration is the tool that removes the k factor from the equation; DBE removes the B component. If you use just DBE without color calibration, you remove k*B, leaving k*S. You still need to use ColorCalibration to remove k. Your background is neutral, but the color of your objects is not correct - it is still affected by the spectral sensitivity of your filters/camera.
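The arithmetic above is easy to check numerically. Here is a toy sketch (plain NumPy, with made-up k, S and B values) showing that dividing by k and then subtracting B recovers S, while DBE alone only removes k*B and leaves k*S:

```python
import numpy as np

# Toy model of the pixel equation P = k * (S + B) for one pixel, three channels.
k = np.array([0.9, 1.0, 0.7])     # per-channel sensitivity (illustrative values)
S = np.array([0.20, 0.10, 0.05])  # true signal from space
B = np.array([0.03, 0.04, 0.06])  # sky background (can vary per pixel)

P = k * (S + B)

# ColorCalibration first (divide by k), then DBE-style subtraction of B:
calibrated = P / k - B
print(np.allclose(calibrated, S))   # True: only S remains

# DBE alone subtracts the measured background k*B, leaving k*S:
dbe_only = P - k * B
print(np.allclose(dbe_only, k * S))  # True: neutral background, object colors still skewed
```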

Please note that if you're using the same camera and the same set of filters, the color calibration values should be mostly constant (with the exception of imaging close to the horizon, due to atmospheric extinction). The ColorCalibration module can use a spiral galaxy as white reference. I would suggest getting the calibration coefficients on one image of a galaxy and using them for all images. Of course, if your new image is a galaxy as well, you can re-run it. If you know your color calibration values, you can apply them easily with the ColorCalibration module with the "Manual white balance" option checked.

After applying the CC coefficients, you should not expect your background to be neutral - CC has simply assigned your gradient its "correct" hue. Your image is now P = S + B.

Now it's time to use DBE and correct the image with Subtraction. This removes B and leaves objects with correct color and a neutral background.

I personally think that the BackgroundNeutralization step is not needed as long as you provide a background reference to ColorCalibration. But I may be wrong.

regards, Zbynek
« Last Edit: 2011 February 18 13:27:45 by zvrastil »

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: New User Quick Start
« Reply #16 on: 2011 February 18 13:13:49 »
I'm not sure I agree that Background Neutralization, Color Calibration and DBE are redundant.

They're not. Each in fact serves a very different purpose. And their names quite nail what they're mainly good for.

Also, it may be hard - if not impossible - to do a proper background neutralization before removing gradients, or a proper color balance before background neutralization. So they all aid, among other things, in achieving a proper color balance.


Offline zvrastil

  • PixInsight Addict
  • ***
  • Posts: 179
    • Astrophotography
Re: New User Quick Start
« Reply #17 on: 2011 February 18 13:40:16 »
Hi Rogelio,

there's something I don't understand. I should note that I'm working with color images from a DSLR. Now, if I extract the gradients with DBE and subtract them from my image, I already get a neutral background. What would I need BackgroundNeutralization for? Or is this only a problem with monochrome cameras and LRGB exposures, where you remove the gradients from each channel separately, before combining them to create the color image?

thanks, Zbynek

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: New User Quick Start
« Reply #18 on: 2011 February 18 14:40:44 »
I don't know about DSLR or OSC work (when I used DSLRs I didn't do many of these things)...
But DBE surely does not give me a neutral background in any way, shape or form.

With mono cameras you don't have to run DBE on each channel separately - the DBE tool does that for you. Still, I choose to do it separately because that gives me the chance to better review the generated background model and evaluate whether it does indeed look like the gradient I'm trying to remove, or whether it has also been modeled after valid faint signal.


Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New User Quick Start
« Reply #19 on: 2011 February 18 14:52:00 »
Also, it may be hard - if not impossible - to do proper background neutralization before removing gradients, and proper color balancing before background neutralization. So they all aid, among other things, in achieving proper color balance.

I may be wrong here, but I've found using DBE on large nebula images, where the nebula extends throughout the image, to be less than necessary. Maybe I'm lucky enough not to have gradients strong enough to deal with, or the nebula images hide what there is. I do find that I use DBE on almost all of my galaxy images. Using DBE does seem to neutralize the background and balance the color for me. I've also taken to checking the color by using eXcalibrator and seeing what it comes up with for an RGB ratio. Taking the largest value, for example in 1:1.605:2.757, I'll divide each value by the largest to derive an RGB ratio that PI can use, since a value of 1 is the highest, at least in the LRGB Combine method. For this example I get 0.3627:0.5821:1.
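That normalization step is just a divide-by-the-maximum; a quick sketch with the example values from the post (note the post's middle value 0.5821 looks truncated, while straight rounding gives 0.5822):

```python
# Normalize an eXcalibrator-style R:G:B ratio so the largest channel is 1,
# suitable for tools where 1 is the highest weight.
ratios = [1.0, 1.605, 2.757]          # R : G : B from eXcalibrator (example above)
largest = max(ratios)
normalized = [round(v / largest, 4) for v in ratios]
print(normalized)  # [0.3627, 0.5822, 1.0]
```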

The advantage I see to using PI for color balance is that I can combine the RGB image from odd-numbered groups. As an example, for NGC2359 I ended up with 9 good blue 20-minute images, 8 good 20-minute red, and 6 good 20-minute green. Before, I used to keep an even number of exposures for each filter, in this case 6, and combine those for the RGB, but that meant not using the other good data. With PI I use it all and then color balance.

In the case of NGC2359, Thor's Helmet, I did not use DBE; instead I did an average combine for each filter, creating the master for each, and then combined them using LRGB Combine. Then I cropped the image for good edges and saved. Next I used Histogram Transformation to stretch the image, created the mask, saved the mask, discarded the stretched image and opened the original cropped RGB image. After applying the mask, deconvolution, and saving the image, I then did my histogram stretch and saved. At that point I used background neutralization, then HDRW, and saved. So far no color balance - looking at the image, I didn't see a need after the background was neutralized. I don't think I forgot any of the steps used on this image. The last thing I did was a slight color saturation boost using curves color saturation. You can see the image here: http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: New User Quick Start
« Reply #20 on: 2011 February 18 15:24:07 »
Now that you mention it, the DBE tool has a "normalize" checkbox (I just never use it because I usually apply DBE to each channel individually)...

As for not using DBE  "on large nebula images where the nebula is throughout the image"... It all depends.

If you have gradients, you have gradients, and if you don't deal with them, they'll be there whether the image is "all nebula" or a tiny galaxy surrounded by "empty" space. In some cases they may be more or less obvious; that depends on how strong the gradient is, the size of the FOV (the bigger the FOV, the more likely the gradient is obvious), etc. Choose your compromise on a case-by-case basis.

The other story is how easy or difficult it is to get rid of them, and on that, I feel one could write a book, or at least a good chapter.

I used to be quite dumb about gradient removal until I started to do mosaics, where each pane is usually 5x3 degrees. I won't claim I'm an expert by now, but to me, one of the best "tricks", as I've said numerous times, is to examine the (stretched) background model, modify parameters/samples, try again and examine the new model. A lot can be learned that way. In fact - and this is absolutely true - after a DBE I always examine the background model first, and then, though not always, the "corrected" image. There is no point in keeping a DBE-corrected image if the background model looks like some sort of monochrome psychedelic piece of art instead of a gradient. Of course, this only works when you deal with each channel separately - something you can also do with OSC images by extracting the RGB channels. Still, there's a lot more to it...
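To make the "examine the model" idea concrete, here is a deliberately simplified sketch of what a background model is: a smooth surface fitted through background sample points, which you can then inspect before subtracting. The real DBE tool is far more sophisticated than this plane fit; everything here (grid size, sample layout, polynomial order) is an illustrative assumption:

```python
import numpy as np

# Synthetic frame: a linear sky gradient plus noise (no objects, for simplicity).
rng = np.random.default_rng(0)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w] / 64.0
gradient = 0.05 + 0.10 * xx + 0.04 * yy        # the "sky glow" we want to remove
image = gradient + 0.01 * rng.random((h, w))

# Background samples: medians of small patches on a coarse grid.
ys, xs = np.mgrid[4:h:16, 4:w:16]
samples = np.array([[y / 64.0, x / 64.0, np.median(image[y-2:y+3, x-2:x+3])]
                    for y, x in zip(ys.ravel(), xs.ravel())])

# Least-squares plane fit: b(x, y) = c0 + c1*x + c2*y
A = np.column_stack([np.ones(len(samples)), samples[:, 1], samples[:, 0]])
coeffs, *_ = np.linalg.lstsq(A, samples[:, 2], rcond=None)
model = coeffs[0] + coeffs[1] * xx + coeffs[2] * yy

# Inspect 'model' first: it should look like a smooth gradient, not like signal.
corrected = image - model
print(round(float(np.ptp(corrected)), 3), "vs", round(float(np.ptp(image)), 3))
```

After subtraction, the residual range collapses to roughly the noise level, which is exactly the check being described: if the model instead traced real nebulosity, the "corrected" image would have lost valid faint signal.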

Weren't you at the last AIC? I may be mistaken... It was quite interesting, particularly for one reason that I shouldn't post publicly.

Offline dsnay

  • PixInsight Addict
  • ***
  • Posts: 100
Re: New User Quick Start
« Reply #21 on: 2011 February 18 15:28:08 »
Also, it may be hard - if not impossible - to do proper background neutralization before removing gradients, and proper color balancing before background neutralization. So they all aid, among other things, in achieving proper color balance.

I may be wrong here but I've found using DBE on large nebula images where the nebula is throughout the image to be less than necessary. Maybe I'm lucky enough to not have strong enough gradients to deal with that nebula images hide what there is. I do find that I use DBE on most all of my galaxy images. Using DBE does seem to neutralize the background and balance the color for me. I've also taken to checking the color by using eXcalibrator and see what they come up with fro a RGB ratio. Using the largest value, for example 1:1.605:2.757, I'll divide each value by the largest to derive a RGB ratio that PI can use as a value of 1 is the highest, at least in the LRGB Combine method. For this example I get 0.3627:0.5821:1.

The advantage I see to using PI for color balance is that I can use odd number groups to combine the RGB image with. As an example I did NGC2359 and ended up with 9 good blue 20 minute images, 8 good 20 minute red, and 6 good 20 minute green. Before I use to have an even number of each filter, in this case 6, for each filter and combine for the RGB but that meant not using the other good data. With PI I use it all and then color balance. In the case of NGC2359, Thor's Helmet, I did not use DBE but instead did the average combine of each filter creating the master for each and then combined using LRGB Combine. Then I cropped the image for good edges and saved. Next I used Histogram Transformation to stretch the image, created the mask, saved the mask, discarded the stretched image and opened the original RGB cropped image. After applying the mask, deconvolution, and saving the image I then did my histogram stretch and saved. At that point I used background neutralization, then HDRW and saved . So far no color balance. Looking at the image I didn't see a need after the background was neutralized. I don't think I forgot any of the steps used on this image. The last thing I did was a slight color saturation boost using curves color saturation. You can see the image here http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm


The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting the histogram stretch work on just the background. By the way, how are you making the mask? That's something I haven't figured out yet in PI. It's probably quite simple; I just haven't gone looking for guidance yet.

Dave

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New User Quick Start
« Reply #22 on: 2011 February 18 16:20:22 »
Now that you mention it, the DBE tool has a "normalize" check box (I just never use it because I usually apply DBE to each channel individually)...

I don't have this checked either, and never have. I did a quick mini-test using just the RGB data set and applied DBE only, background neutralization only, and color calibration only, saving each result. I then used the histogram stretch on each image, although probably not as closely matched on each, unfortunately. I can't really see any visual difference in the color balance between the DBE and background neutralization images. The one that has only color correction is way off, as the three channels are not together; using the background neutralization tool did bring them into line. Unfortunately the images have some slight differences in how much they were stretched, but that's all I really see.

Weren't you at last AIC? I may be mistaken... It was quite interesting, particularly for one reason that at least I shouldn't post publicly.

I've been there three times, but the last was several years ago. I was at the first two, however.

See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New User Quick Start
« Reply #23 on: 2011 February 18 16:24:20 »
The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting histogram stretch work on just the background. By the way, how are you making the mask. That's something I haven't figured out yet in PI. It's probably quite simple, I just haven't gone looking for guidance yet.

Dave

The mask is only generated for the deconvolution and HDRW processes; the histogram stretch is applied to the entire image. The mask is used as-is for deconvolution, while for the HDRW process it is inverted.
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: New User Quick Start
« Reply #24 on: 2011 February 18 16:34:15 »
See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm

IMHO that FOV is somewhat small for having a significant gradient, but then, I haven't processed many (any?) images at that scale.

You'll still have skyglow affecting the image, but it may not manifest as a gradient. In that case I might just go with a BN and maybe CC or some other color-balancing method, but again, I'm not the right person to talk to about images at that scale... Last time I went for that area, this is what I came up with: http://deepskycolors.com/pics/astro/2010/03/mb_2010-03-10_SeaGullThor.jpg  ;D (and I wasn't using the same methods I use now)...



Offline dsnay

  • PixInsight Addict
  • ***
  • Posts: 100
Re: New User Quick Start
« Reply #25 on: 2011 February 18 16:41:19 »
The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting histogram stretch work on just the background. By the way, how are you making the mask. That's something I haven't figured out yet in PI. It's probably quite simple, I just haven't gone looking for guidance yet.

Dave

The mask is only being generated for the deconvolution and HDRW processes. The histogram is to the entire image. The mask is used for deconvolution applied as is while the HDRW process the mask is inverted.

Okay, thanks for clearing that up for me. However, how are you generating the mask? It sounded like it was being generated, saved and then applied to other processes.

Dave

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New User Quick Start
« Reply #26 on: 2011 February 18 16:45:50 »
See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm

IMHO that FOV is somewhat small for having a significant gradiente, but then, I haven't processed many (any?) images at that scale.

You'll still have skyglow affecting the image, but it may not manifest as a gradient. In that case I may just go with a BN and maybe CC or some other color balancing method, but again, I'm not the right person to talk to for images at that scale... Last time I went for that area, this is what I came up with http://deepskycolors.com/pics/astro/2010/03/mb_2010-03-10_SeaGullThor.jpg  ;D (and I wasn't using the same methods I use now)...


The FOV is 18x12 arcminutes at 0.48 arcseconds per pixel, versus 96x65 arcminutes at 2.65 arcseconds per pixel with the FSQ-106 and the ST10. I can and do see some gradients on galaxy images, and DBE does a wonderful job handling them. Your mosaics are a work of art and cover a much larger area than I do. I may try some mosaics in the future, but with a camera with a bit more real estate to make the job a bit easier.
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New User Quick Start
« Reply #27 on: 2011 February 18 16:51:47 »

Okay, thanks for clearing that up for me. However, how are you generating the mask. It sounded like it was being generated, saved and then applied to other processes.

Dave

The mask needs to be created from non-linear data, therefore it has to be stretched first using the histogram tool. You save the mask like any other image using File | Save As. The mask geometry needs to be exactly the same as that of the image it is applied to, so if you need to crop your image, do so before creating the mask. See this page on deconvolution of linear data using a mask: http://www.astral-imaging.com/pi_processing_properdecon.htm The information in it is from Juan's reply on using deconvolution properly.

After that resulting image is saved, I'll usually do HDRWavelets, but this time the mask is inverted so it works on the object only (the nebula in this case).
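The stretch-then-invert mask workflow above can be sketched with PixInsight's midtones transfer function (MTF), the curve behind HistogramTransformation's midtones slider. This is only an illustration: the midtones value 0.05 is an arbitrary assumption, and in practice you would set the stretch interactively on the real image:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

linear = np.linspace(0.0, 1.0, 5)  # stand-in for a linear luminance image
mask = mtf(linear, 0.05)           # non-linear stretch: the deconvolution mask
inverted = 1.0 - mask              # inverted copy for the HDRW step

print(mask.round(3))
```

The stretched mask is bright over the object and stars (protecting the background during deconvolution), and inverting it flips the protection to the object for HDRWavelets, as described above.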
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline budguinn

  • PixInsight Addict
  • ***
  • Posts: 106
Re: New User Quick Start
« Reply #28 on: 2011 June 01 19:18:09 »
This is very helpful, Steve - thanks for taking the time to make the site and tutorials available. The links are very helpful too.

bud

Offline RobF2

  • PixInsight Addict
  • ***
  • Posts: 189
  • Rob
    • Rob's Astropics
Re: New User Quick Start
« Reply #29 on: 2011 June 20 03:55:33 »
I used to be quite dumb about gradient removal until I started to do mosaics where each pane is usually 5x3 degrees. And I won't claim I'm an expert by now, but to me, one of the best "tricks", as I've said numerous times, is to examine the (stretched) background model, modify parameters/samples, try again and examine the new model.

 ;)  You'd have to have won at least an APOD or two to be an expert though, surely?    >:D
We're really looking forward to sucking your brains out when you come to the Aussie AIC on the Gold Coast in a few weeks BTW Rogelio.

Great job on the User's guide too Steve.  Anything that helps people get their head around PI (or makes old users rethink their workflow) has to be a good thing.

R
FSQ106/8" Newt on NEQ6/HEQ5Pro via EQMOD | QHY9 | Guiding:  ZS80II/QHY5IIL | Canon 450D | DBK21 and other "stuff"
Rob's Astropics