PixInsight Forum (historical)

PixInsight => Tutorials and Processing Examples => Topic started by: sreilly on 2011 February 13 22:35:10

Title: New User Quick Start
Post by: sreilly on 2011 February 13 22:35:10
I have no idea if this is of any value but a friend asked how I used PI and this was a better way for me to explain. Any feedback would be appreciated. See http://www.astral-imaging.com/pix_insight_a_new_user.htm  (http://www.astral-imaging.com/pix_insight_a_new_user.htm)
Title: Re: New User Quick Start
Post by: RBA on 2011 February 14 02:26:56
Yeah, you're missing a link to the unofficial reference guide  8)
http://blog.deepskycolors.com/PixInsight/

Simply unforgivable!! ;)

Title: Re: New User Quick Start
Post by: sreilly on 2011 February 14 05:46:03
You are absolutely right! What a resource to forget. It's there now. Thanks for the reminder.

Title: Re: New User Quick Start
Post by: Jack Harvey on 2011 February 14 07:02:14
Nice work Steve!  I think this, along with the variety of other tutorials and videos is great.  Of course if you saw the SBIG thread you know PI is way too expensive and there is no documentation at all ;)
Title: Re: New User Quick Start
Post by: Juan Conejero on 2011 February 14 08:38:10
Hi Steve,

Good job. I think deconvolution should go before the initial stretch, but there's actually no problem in using it for sharpening a nonlinear image, as long as you know that you're doing just that.

As Jack says, this is good material for new users. Thank you for your time and effort. We are also starting a series of new video tutorials right now, covering basic topics and step-by-step processing tasks. And I am starting to write more reference documentation again, too :)

As for other forums, discussion groups, etc., I decided to keep myself apart from them a long time ago. When the author of a software application participates in a public forum other than his own support forum, it's too easy to start either acting like a door-to-door salesperson or making desperate efforts to defend himself. Both are pathetic IMO. Only the software that we write should speak for us, and only our projects should tell what we are and what we are capable of. Then there are people who want to hear our 'music' and people who don't. That's it. And that's fine :)
Title: Re: New User Quick Start
Post by: sleshin on 2011 February 14 10:16:26
Very nice job, Steve. Thanks for taking the time to do this. With regards to the HDR page, I think HDR Wavelets should be applied to non-linear images.

Juan, glad to hear you're working on new videos, look forward to their release.

Steve
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 14 16:15:55
I think deconvolution should go before the initial stretch, but there's actually no problem in using it for sharpening a nonlinear image, as long as you know that you're doing just that.


I can see that, but I don't understand how it's accomplished. I went back to my M74 image, which is 20 x 20-minute luminance frames that were calibrated, registered, and then combined using average combination with linear fit rejection. DBE was performed and the image saved. I saved that image as a 64-bit image, then opened Deconvolution and tried 5 iterations of regularized LR, which resulted in a very mottled-looking image. I tried regularized Van Cittert as well and had ringing around the stars even though deringing was checked. This was simply opening the image and using the settings seen here: http://www.astral-imaging.com/pi_processing_decon.htm (http://www.astral-imaging.com/pi_processing_decon.htm) The only differences were the 5 iterations instead of 10 and trying both RLR and RVC. I used STF to see the effect on the images.

No mask was used at first. I also tried with a mask I had made (after HST) before any of this, inverted. The results were even worse. I guess I haven't a clue how to use this process other than for sharpening, as I was doing after HST. I've uploaded the 64-bit FITS file to www.astral-imaging.com/integration20L.fit (http://www.astral-imaging.com/integration20L.fit) if anyone wants to try it and let me know what I'm doing wrong. This is a 23+ MB image.

I tried to create a mask while in linear data, using the same settings as after HST. The result on linear data was that just a few bright stars were selected. I seem to get better masks using non-linear data. Is this to be expected, or do I need to dig deeper into creating masks on linear data? In other software I've used in the past, deconvolution was done on the entire image, no masks used. Is this preferable?
Title: Re: New User Quick Start
Post by: Juan Conejero on 2011 February 15 04:08:08
Hi Steve,

Take a look at this little tutorial that I made with an image by Sander Pool:

http://pixinsight.com/forum/index.php?topic=2727.msg18512#msg18512

A few things more:

- Masks for deconvolution must be created from a stretched image. Make a duplicate of your (linear) luminance image, stretch it and activate it as a mask for the original image. However, you'll find that masks aren't necessary in most cases if you use regularized deconvolution algorithms.

- 64-bit images are normally not necessary. We use them to store very large HDR compositions, and also in some special cases where we need very high numerical accuracy. For normal image processing purposes, 32-bit floating point (or 32-bit integer for even more accuracy) is sufficient.

- In general, the Van Cittert algorithm is not an appropriate choice for deep-sky images. Use regularized Richardson-Lucy (RRL) instead.

- 10 iterations of RRL are usually too few. For a linear image, we normally apply from 50 to 200 iterations, depending on the signal-to-noise ratio. The number of iterations is not critical though. Thanks to regularization the algorithm stabilizes and usually there's very little difference —if any at all— between 50 and 100 iterations for example.
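Juan's iteration counts are easier to reason about with the Richardson-Lucy update in front of you. The sketch below is the plain, unregularized RL algorithm in NumPy (PixInsight's regularized variant adds noise-suppression machinery not shown here); the Gaussian PSF, image size, and "star field" are assumptions for the demo, not anything from the thread.

```python
import numpy as np

def fft_convolve(img, kernel):
    # Circular convolution via the FFT; `kernel` must have the same shape
    # as `img` and be centered on the array origin (see np.fft.ifftshift).
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def richardson_lucy(observed, psf, iterations=50):
    """Classic (unregularized) Richardson-Lucy deconvolution sketch."""
    psf = psf / psf.sum()
    # Adjoint of circular convolution: flipped PSF, re-centered on the origin
    psf_mirror = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = fft_convolve(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fft_convolve(ratio, psf_mirror)
    return estimate

# Demo: blur a synthetic two-star field, then try to recover it.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
psf = np.exp(-((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / (2 * 2.0 ** 2))
psf = np.fft.ifftshift(psf / psf.sum())   # center the PSF on the origin
truth = np.full((n, n), 0.01)
truth[20, 20] = truth[40, 44] = 1.0       # two "stars" on a faint background
observed = fft_convolve(truth, psf)
restored = richardson_lucy(observed, psf, iterations=50)
```

Without regularization the update amplifies noise as iterations grow, which is exactly why the regularized variant tolerates 50-200 iterations where the classic one would not.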

Hope this helps
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 15 15:43:16
Juan,

Thank you. This is the most thorough tutorial I've seen on using any process or tool. I asked about the process for obtaining the PSF for deconvolution, and although I don't fully understand that yet, I've still been able to use your example and make great strides toward a basic understanding of how the process works.
Title: Re: New User Quick Start
Post by: dsnay on 2011 February 17 05:05:26
Hi Steve,

Good job. I think deconvolution should go before the initial stretch, but there's actually no problem in using it for sharpening a nonlinear image, as long as you know that you're doing just that.

As Jack says, this is good material for new users. Thank you for your time and effort. We are also starting a series of new video tutorials right now, covering basic topics and step-by-step processing tasks. And I am starting to write more reference documentation again, too :)

As for other forums, discussion groups, etc., I decided to keep myself apart from them a long time ago. When the author of a software application participates in a public forum other than his own support forum, it's too easy to start either acting like a door-to-door salesperson or making desperate efforts to defend himself. Both are pathetic IMO. Only the software that we write should speak for us, and only our projects should tell what we are and what we are capable of. Then there are people who want to hear our 'music' and people who don't. That's it. And that's fine :)

Wisdom. It's a beautiful thing that sadly usually only comes with age.
Title: Re: New User Quick Start
Post by: dsnay on 2011 February 17 05:22:38
It's funny how similar your workflow is to mine. I'm still a PixInsight rookie, but here's what I've settled on as a starting point.


Calibrate in Nebulosity (it just works better for me at this point)
Move to PI

For me, all roads lead to PS. In some ways PixInsight is both shortening and lengthening my overall path.
I've found that many of my tasks that required significant effort in Photoshop are easier (now that I'm starting to understand enough about PI to ask intelligent - I hope - questions). My path is shorter in that I'm spending less time in PS, but more time overall as there is more I can accomplish in PI.

Of course, if I were able to acquire better data my path would be significantly shorter. I once rented time on a really great setup in the Sierras, and processing that data took about 10 minutes.

Dave

Title: Re: New User Quick Start
Post by: sreilly on 2011 February 17 08:56:53
Calibrate in Nebulosity (it just works better for me at this point)
Move to PI
  • Align and Integrate color sets
  • ChannelCombination to form RGB image
  • Background Neutralization (maybe)
  • Color Calibration (maybe)
  • DBE (almost always - thanks to light dome of neighboring city)
  • HistogramTransformation (several iterations)
  • HDRWavelets (lots of experimentation on each image here)
  • ATrousWavelet (Just starting to play with this)
  • Repeat DBE through at least HDRWavelets for the Luminance data
  • LRGBCombination
  • Off to Photoshop for final tweaks

For me, all roads lead to PS. In some ways PixInsight is both shortening and lengthening my overall path.
I've found that many of my tasks that required significant effort in Photoshop are easier (now that I'm starting to understand enough about PI to ask intelligent - I hope - questions). My path is shorter in that I'm spending less time in PS, but more time overall as there is more I can accomplish in PI.

Of course, if I were able to acquire better data my path would be significantly shorter. I once rented time on a really great setup in the Sierras, and processing that data took about 10 minutes.

IMHO, the amount of time spent processing an image should be a minor issue; the quality of the final image is what matters most. As for your workflow, BackgroundNeutralization, ColorCalibration, and DBE all balance the color, so these are redundant. I can't see you getting different results with each of these steps. If you have gradients, then DBE, if properly applied, should take care of the gradients as well as balance the color.

The difference I see in the black point between PI and PS is about the only thing I use PS for, short of creating my web JPEGs, unless I have a bit of noise I want to reduce; then I use a masking routine in PS to selectively reduce color noise. That said, until I learn the proper way to do this in PI, it's an efficiency thing for me, as I already own both programs. If I didn't already have PS I certainly wouldn't buy it for this, as PI is extremely capable. As for the black point, I need to pay close attention to where this sits in both programs. As PI shows how much you are clipping, it should be a simple matter of seeing where this shows on the levels in PS.
Title: Re: New User Quick Start
Post by: dsnay on 2011 February 17 11:22:25

IMHO, the amount of time spent processing an image should be a minor issue; the quality of the final image is what matters most. As for your workflow, BackgroundNeutralization, ColorCalibration, and DBE all balance the color, so these are redundant. I can't see you getting different results with each of these steps. If you have gradients, then DBE, if properly applied, should take care of the gradients as well as balance the color.

The difference I see in the black point between PI and PS is about the only thing I use PS for, short of creating my web JPEGs, unless I have a bit of noise I want to reduce; then I use a masking routine in PS to selectively reduce color noise. That said, until I learn the proper way to do this in PI, it's an efficiency thing for me, as I already own both programs. If I didn't already have PS I certainly wouldn't buy it for this, as PI is extremely capable. As for the black point, I need to pay close attention to where this sits in both programs. As PI shows how much you are clipping, it should be a simple matter of seeing where this shows on the levels in PS.

I'm not sure I agree that Background Neutralization, Color Calibration and DBE are redundant. I agree that they don't all need to be used on the same image (and probably shouldn't), but they definitely seem to do different things.

My understanding (limited though it may be) is:

Background Neutralization will try to even out an overall color cast to the background, while leaving major objects alone.
Color Calibration will try to adjust colors throughout the image based on the models described elsewhere in these forums.
DBE is best used to eliminate color gradients.

These sound like three different goals which should be used when appropriate. In my case, it actually sounds like the Background Neutralization and DBE will be necessary in most images and color calibration will depend on how I feel about a given image.

As always, I am willing to be shown the error of my logic. After all, it's the only way I learn!

Clear skies!
Dave
Title: Re: New User Quick Start
Post by: Astrocava on 2011 February 18 05:19:56
I usually apply DBE before BackgroundNeutralization and ColorCalibration. Why? I think they will work better with an image without gradients. I usually have heavy chromatic gradients that will cause a bad ColorCalibration if I don't apply DBE first. If you have modeled the background for DBE with care and with enough samples, you don't need to apply BackgroundNeutralization.

So, I usually apply DBE and then ColorCalibration. For me these tools aren't redundant.

Have fun!

Sergio
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 07:35:29

I'm not sure I agree that Background Neutralization, Color Calibration and DBE are redundant. I agree that they don't all need to be used on the same image (and probably shouldn't), but they definitely seem to do different things.

I thought you were using all three on the same image for color balance and therefore the redundant remark. Truth be told I'm a long way from understanding the proper use of most of these processes but gradually trying a little something new over time. The ATrouswavelet is something I'd like to try but haven't really looked at yet.
Title: Re: New User Quick Start
Post by: zvrastil on 2011 February 18 11:23:17
It would be good if Vicent or another color calibration guru could comment, but here is how I understand and use these tools:
Pixel values in an image can be expressed with equation P = k * ( S + B ). P is the pixel value, S is the signal from space, B is the signal from atmosphere (sky glow, light pollution) and k is the number expressing the sensitivity of our camera to particular color. Our ultimate goal is to have only S in our image. It is clear that we have to divide our pixels by k first and then we can subtract the background B. After these two steps, pixels containing no star or object should have neutral color. Please note that B can vary from pixel to pixel, creating gradients.
ColorCalibration is a tool to remove the k factor from the equation. DBE removes the B component. If you use just DBE without color calibration, you remove k*B, leaving k*S. You still need to use ColorCalibration to remove k. Your background is neutral, but the color of your objects is not correct - it is still affected by the spectral sensitivity of your filters/camera.

Please note that if you're using the same camera and the same set of filters, the color calibration values should be mostly constant (with the exception of imaging close to the horizon, due to atmospheric extinction). The ColorCalibration module can use a spiral galaxy as the white reference. I would suggest getting the calibration coefficients on one image of a galaxy and using them for all images. Of course, if your new image is a galaxy as well, you can re-run it. If you know your color calibration values, you can apply them easily with the ColorCalibration module with the "Manual white balance" option checked.

After applying CC coefficients, you should not expect your background to be neutral - CC simply assigned your gradient its "correct" hue. Your image is now P = S+B.

Now it's time to use DBE and correct image with Subtraction. This removes B and leaves objects with correct color and neutral background.
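The P = k * (S + B) accounting above can be checked numerically. In this sketch the sensitivities k, the gradient B, and the object S are all made up, and the "DBE" step is idealized as subtracting the exact background model; real DBE fits the model from samples.

```python
import numpy as np

h, w = 64, 64
S = np.zeros((h, w, 3))
S[30:34, 30:34] = [0.5, 0.4, 0.3]           # hypothetical object signal
ramp = np.linspace(0.0, 1.0, h)[:, None, None]
B = ramp * np.array([0.10, 0.12, 0.15])     # per-channel sky gradient
k = np.array([0.9, 1.0, 0.7])               # per-channel sensitivities

P = k * (S + B)                             # what the camera records

# Order of operations from the post: divide by k first (ColorCalibration),
# then subtract the background model (DBE with Subtraction).
P_cc = P / k
S_rec = P_cc - B

# DBE alone removes k*B, leaving k*S: neutral background, biased object color.
S_scaled = P - k * B
```

With the idealized model, `S_rec` matches `S` exactly, while `S_scaled` still carries the k factor, which is the "neutral background but wrong object color" case described above.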

I personally think that the BackgroundNeutralization step is not needed as long as you provide a background reference to ColorCalibration. But I may be wrong.

regards, Zbynek
Title: Re: New User Quick Start
Post by: RBA on 2011 February 18 13:13:49
I'm not sure I agree that Background Neutralization, Color Calibration and DBE are redundant.

They're not. Each in fact serves a very different purpose. And their names quite nail what they're mainly good for.

Also, it may be hard - if not impossible - to do proper background neutralization before removing gradients, and proper color balancing before background neutralization. So they all aid, among other things, in achieving proper color balance.

Title: Re: New User Quick Start
Post by: zvrastil on 2011 February 18 13:40:16
Hi Rogelio,

there's something I don't understand. I should note that I'm working with color images from a DSLR. Now, if I extract the gradients with DBE and subtract them from my image, I already get a neutral background. Why would I need BackgroundNeutralization? Or is this only a problem with monochrome cameras and LRGB exposures, where you remove the gradients from each channel separately before combining them to create the color image?

thanks, Zbynek
Title: Re: New User Quick Start
Post by: RBA on 2011 February 18 14:40:44
I don't know about DSLR or OSC work (when I used DSLRs I didn't do many of these things)...
But DBE surely does not give me a neutral background in any way or form.

With mono cameras you don't have to run DBE on each channel separately - the DBE tool does that for you. Still, I choose to do it separately because that gives me the chance to better review the generated background model and evaluate whether it does indeed look like the gradient I'm trying to remove, or whether it has also been modeled after valid faint signal.
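The "review the background model before accepting the correction" habit is easy to emulate on one channel at a time. Below is a toy stand-in for DBE: fit a first-order (plane) background through the medians of a few hand-placed sample boxes, then inspect the model before subtracting. Real DBE fits a much more flexible surface from many samples; the sample positions, box size, and synthetic data here are all made up for illustration.

```python
import numpy as np

def plane_background(channel, samples, box=2):
    """Least-squares plane through the medians of small background boxes."""
    ys, xs, vs = [], [], []
    for y, x in samples:
        ys.append(y)
        xs.append(x)
        vs.append(np.median(channel[y - box:y + box + 1, x - box:x + box + 1]))
    A = np.column_stack([np.ones(len(vs)), ys, xs])
    c, *_ = np.linalg.lstsq(A, np.array(vs), rcond=None)
    Y, X = np.mgrid[0:channel.shape[0], 0:channel.shape[1]]
    return c[0] + c[1] * Y + c[2] * X

# Synthetic channel: linear gradient plus a bright "nebula" patch
n = 64
Y, X = np.mgrid[0:n, 0:n]
channel = 0.10 + 0.002 * Y + 0.001 * X
channel[28:36, 28:36] += 0.5

# Samples placed away from the nebula, as you would with DBE
model = plane_background(channel, samples=[(5, 5), (5, 58), (58, 5), (58, 30)])
corrected = channel - model   # inspect `model` first, as suggested above
```

If a sample box lands on real signal, the fitted `model` picks it up, which is exactly what the stretched-background-model inspection is meant to catch.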



Hi Rogelio,

there's something I don't understand. I should note that I'm working with color images from a DSLR. Now, if I extract the gradients with DBE and subtract them from my image, I already get a neutral background. Why would I need BackgroundNeutralization? Or is this only a problem with monochrome cameras and LRGB exposures, where you remove the gradients from each channel separately before combining them to create the color image?

thanks, Zbynek
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 14:52:00
Also, it may be hard - if not impossible - to do proper background neutralization before removing gradients, and proper color balancing before background neutralization. So they all aid, among other things, in achieving proper color balance.

I may be wrong here, but I've found using DBE on large nebula images, where the nebula is throughout the frame, to be less than necessary. Maybe I'm lucky enough not to have strong enough gradients to deal with, or the nebula hides what there is. I do find that I use DBE on almost all of my galaxy images. Using DBE does seem to neutralize the background and balance the color for me. I've also taken to checking the color with eXcalibrator to see what it comes up with for an RGB ratio. Taking the largest value, for example in 1:1.605:2.757, I'll divide each value by the largest to derive an RGB ratio that PI can use, since a value of 1 is the highest, at least in the LRGBCombination method. For this example I get 0.3627:0.5821:1.
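The normalization described above is just a division of each channel weight by the largest one, so that the biggest weight becomes 1:

```python
# eXcalibrator-style ratios, rescaled so the largest channel weight is 1
# (LRGBCombination channel weights accept values up to 1).
r, g, b = 1.0, 1.605, 2.757
m = max(r, g, b)
weights = (r / m, g / m, b / m)   # roughly 0.363 : 0.582 : 1
```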

The advantage I see to using PI for color balance is that I can combine the RGB image from odd-sized groups. As an example, I did NGC2359 and ended up with 9 good blue 20-minute images, 8 good 20-minute red, and 6 good 20-minute green. Before, I used to keep an even number for each filter, in this case 6, and combine those for the RGB, but that meant not using the other good data. With PI I use it all and then color balance.

In the case of NGC2359, Thor's Helmet, I did not use DBE; instead I did the average combine for each filter, creating the masters, and then combined using LRGBCombination. Then I cropped the image for good edges and saved. Next I used HistogramTransformation to stretch the image, created the mask, saved the mask, discarded the stretched image, and opened the original cropped RGB image. After applying the mask, running deconvolution, and saving the image, I then did my histogram stretch and saved. At that point I used BackgroundNeutralization, then HDRW, and saved. So far no color balance; looking at the image, I didn't see a need after the background was neutralized. I don't think I forgot any of the steps used on this image. The last thing I did was a slight color saturation boost using the curves color saturation. You can see the image here: http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm (http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm)
Title: Re: New User Quick Start
Post by: RBA on 2011 February 18 15:24:07
Now that you mention it, the DBE tool has a "normalize" checkbox (I just never use it because I usually apply DBE to each channel individually)...

As for not using DBE  "on large nebula images where the nebula is throughout the image"... It all depends.

If you have gradients, you have gradients. And if you don't deal with them, they'll be there, whether the image is "all nebula" or a tiny galaxy surrounded by "empty" space. In some cases they may be more or less obvious; that depends on how strong the gradient is, the size of the FOV (the bigger the FOV, the more obvious the gradient tends to be), etc. Choose your compromise on a case-by-case basis.

The other story is how easy or difficult it is to get rid of them, and on that I feel one could write a book, or at least a good chapter.

I used to be quite dumb about gradient removal until I started to do mosaics where each pane is usually 5x3 degrees. And I won't claim I'm an expert by now, but to me, one of the best "tricks", as I've said numerous times, is to examine the (stretched) background model, modify parameters/samples, try again and examine the new model. A lot can be learned that way. In fact, and this is absolutely true, after a DBE I always examine the background model first, then - and not always - the "corrected" image. No point in keeping a DBE-corrected image if the background model looks like some sort of monochrome psychedelic piece of art instead of a gradient. Of course, this can only work when you deal with each channel separately - something you can also do with OSC images by extracting the RGB channels. Still, there's a lot more to it...

Weren't you at last AIC? I may be mistaken... It was quite interesting, particularly for one reason that at least I shouldn't post publicly.

Title: Re: New User Quick Start
Post by: dsnay on 2011 February 18 15:28:08
Also, it may be hard - if not impossible - to do proper background neutralization before removing gradients, and proper color balancing before background neutralization. So they all aid, among other things, in achieving proper color balance.

I may be wrong here, but I've found using DBE on large nebula images, where the nebula is throughout the frame, to be less than necessary. Maybe I'm lucky enough not to have strong enough gradients to deal with, or the nebula hides what there is. I do find that I use DBE on almost all of my galaxy images. Using DBE does seem to neutralize the background and balance the color for me. I've also taken to checking the color with eXcalibrator to see what it comes up with for an RGB ratio. Taking the largest value, for example in 1:1.605:2.757, I'll divide each value by the largest to derive an RGB ratio that PI can use, since a value of 1 is the highest, at least in the LRGBCombination method. For this example I get 0.3627:0.5821:1.

The advantage I see to using PI for color balance is that I can combine the RGB image from odd-sized groups. As an example, I did NGC2359 and ended up with 9 good blue 20-minute images, 8 good 20-minute red, and 6 good 20-minute green. Before, I used to keep an even number for each filter, in this case 6, and combine those for the RGB, but that meant not using the other good data. With PI I use it all and then color balance. In the case of NGC2359, Thor's Helmet, I did not use DBE; instead I did the average combine for each filter, creating the masters, and then combined using LRGBCombination. Then I cropped the image for good edges and saved. Next I used HistogramTransformation to stretch the image, created the mask, saved the mask, discarded the stretched image, and opened the original cropped RGB image. After applying the mask, running deconvolution, and saving the image, I then did my histogram stretch and saved. At that point I used BackgroundNeutralization, then HDRW, and saved. So far no color balance; looking at the image, I didn't see a need after the background was neutralized. I don't think I forgot any of the steps used on this image. The last thing I did was a slight color saturation boost using the curves color saturation. You can see the image here: http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm (http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20Revised.htm)


The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting the histogram stretch work on just the background. By the way, how are you making the mask? That's something I haven't figured out yet in PI. It's probably quite simple; I just haven't gone looking for guidance yet.

Dave
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 16:20:22
Now that you mention it, the DBE tool has a "normalize" check box (I just never use it because I usually apply DBE to each channel individually)...

I don't have this checked either and never have. I did a quick mini test using just the RGB data set and applied DBE only, BackgroundNeutralization only, and ColorCalibration only, saving each result. I then used a histogram stretch on each image, although probably not matched as closely on each, unfortunately. I can't really see any visual difference in the color balance between the DBE and BackgroundNeutralization images. The one that has only color correction is way off, as the three channels are not together; using the BackgroundNeutralization tool did bring them into line. Unfortunately the images have some slight differences in how much they were stretched, but that's all I really see.

Weren't you at last AIC? I may be mistaken... It was quite interesting, particularly for one reason that at least I shouldn't post publicly.

I've been to AIC three times, but the last was several years ago. I was at the first two, however.

See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm (http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm)
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 16:24:20
The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting the histogram stretch work on just the background. By the way, how are you making the mask? That's something I haven't figured out yet in PI. It's probably quite simple; I just haven't gone looking for guidance yet.

Dave

The mask is only being generated for the Deconvolution and HDRW processes. The histogram stretch is applied to the entire image. For deconvolution the mask is applied as is, while for the HDRW process the mask is inverted.
Title: Re: New User Quick Start
Post by: RBA on 2011 February 18 16:34:15
See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm (http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm)

IMHO that FOV is somewhat small for having a significant gradient, but then, I haven't processed many (any?) images at that scale.

You'll still have skyglow affecting the image, but it may not manifest as a gradient. In that case I may just go with a BN and maybe CC or some other color balancing method, but again, I'm not the right person to talk to for images at that scale... Last time I went for that area, this is what I came up with http://deepskycolors.com/pics/astro/2010/03/mb_2010-03-10_SeaGullThor.jpg  ;D (and I wasn't using the same methods I use now)...


Title: Re: New User Quick Start
Post by: dsnay on 2011 February 18 16:41:19
The mask could be the key to the differences of opinion here. By masking off the nebula, you're letting the histogram stretch work on just the background. By the way, how are you making the mask? That's something I haven't figured out yet in PI. It's probably quite simple; I just haven't gone looking for guidance yet.

Dave

The mask is only being generated for the Deconvolution and HDRW processes. The histogram stretch is applied to the entire image. For deconvolution the mask is applied as is, while for the HDRW process the mask is inverted.

Okay, thanks for clearing that up for me. However, how are you generating the mask? It sounded like it was being generated, saved, and then applied to other processes.

Dave
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 16:45:50
See here for the 5 examples http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm (http://www.astral-imaging.com/NGC2359%20-%20Thor%27s%20Helmet%20PI%20CC%20Examples.htm)

IMHO that FOV is somewhat small for having a significant gradient, but then, I haven't processed many (any?) images at that scale.

You'll still have skyglow affecting the image, but it may not manifest as a gradient. In that case I may just go with a BN and maybe CC or some other color balancing method, but again, I'm not the right person to talk to for images at that scale... Last time I went for that area, this is what I came up with http://deepskycolors.com/pics/astro/2010/03/mb_2010-03-10_SeaGullThor.jpg  ;D (and I wasn't using the same methods I use now)...


The FOV is 18x12 arcminutes at 0.48 arcseconds per pixel, versus 96x65 arcminutes at 2.65 arcseconds per pixel with the FSQ-106 and the ST-10. I can and do see some gradients on galaxy images, and DBE does a wonderful job handling them. Your mosaics are a work of art and cover a much larger area than I do. I may try some mosaics in the future, but with a camera with a bit more real estate to make the job easier.
Title: Re: New User Quick Start
Post by: sreilly on 2011 February 18 16:51:47

Okay, thanks for clearing that up for me. However, how are you generating the mask? It sounded like it was being generated, saved, and then applied to other processes.

Dave

The mask needs to be created from non-linear data; therefore it has been stretched using the histogram tool. You save the mask like any other image using File | Save As. The mask geometry needs to be exactly the same as the image it is applied to, so if you need to crop your image, do so before creating the mask. See this page on deconvolution of linear data using a mask: http://www.astral-imaging.com/pi_processing_properdecon.htm (http://www.astral-imaging.com/pi_processing_properdecon.htm) The information in it is from Juan's reply on using deconvolution properly.

After that resulting image is saved, I'll usually do HDRWavelets, but this time with the mask inverted to work on the object only (the nebula in this case).
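For anyone curious what the stretch behind such a mask actually does: PixInsight's histogram stretch is built on a midtones transfer function that maps 0 to 0, 1 to 1, and the midtones balance m to 0.5. The sketch below applies it to synthetic linear data; the balance value 0.003 and the fake luminance are assumptions for illustration, not settings from this thread.

```python
import numpy as np

def mtf(x, m):
    # Midtones transfer function: 0 -> 0, 1 -> 1, and m -> 0.5.
    # For m < 0.5 the denominator is strictly negative on [0, 1],
    # so the expression is safe there.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Hypothetical linear luminance: faint background plus one bright pixel
rng = np.random.default_rng(1)
linear = np.clip(rng.normal(0.01, 0.003, (64, 64)), 0.0, 1.0)
linear[32, 32] = 0.8

# Strong nonlinear stretch; the result keeps the same geometry as the
# source image, as required of a mask.
mask = mtf(linear, 0.003)
```

A small balance like this lifts faint background structure dramatically, which is why masks built from stretched data select far more than the few bright stars a linear image yields.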
Title: Re: New User Quick Start
Post by: budguinn on 2011 June 01 19:18:09
this is very helpful Steve, thanks for taking the time to make the site and tutorials available....and the links are very helpful.

bud
Title: Re: New User Quick Start
Post by: RobF2 on 2011 June 20 03:55:33
I used to be quite dumb about gradient removal until I started to do mosaics where each pane is usually 5x3 degrees. And I won't claim I'm an expert by now, but to me, one of the best "tricks", as I've said numerous times, is to examine the (stretched) background model, modify parameters/samples, try again and examine the new model.

 ;)  You'd have to have won at least an APOD or two to be an expert though, surely?    >:D
We're really looking forward to sucking your brains out when you come to the Aussie AIC on the Gold Coast in a few weeks BTW Rogelio.

Great job on the User's guide too Steve.  Anything that helps people get their head around PI (or makes old users rethink their workflow) has to be a good thing.

R
Title: Re: New User Quick Start
Post by: RBA on 2011 June 21 22:54:38
We're really looking forward to sucking your brains out when you come to the Aussie AIC on the Gold Coast in a few weeks BTW Rogelio.

Yeah, I look forward to that too! Well, everything but the sucking part   ;)