Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - marekc

Pages: 1 ... 3 4 [5] 6 7 ... 12
If it's okay with Juan, I'm going to give the Warren/RBA tutorials an enthusiastic endorsement, because one of them really helped me out of a jam recently.

I shot some unbinned LRGB data on M33 back in the autumn, and I spent months trying to get the linear image deconvolved. I have had decent results with Deconvolution before, but this time I couldn't get either the Luminance image or the RGB image to deconvolve. I kept getting subtle ringing, no matter what I did.

(In the end, I realized the futility of unbinned LRGB, and just went with the RGB data. But the problem still remained - I kept getting this ringing!)

After watching the IP4AP video(s?) about Deconvolution, I got a much better result! There is some straightforward advice on how to determine the PSF, and it made all the difference in the world. I don't want to say more, since giving away this information would feel a bit like pirating music, and I want RBA and Warren to be rewarded for their hard work. I'll say this, though: After beating my head against Deconvolution for so many months, it was well worth the money to get past that point!
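I won't spill the video's specifics, but the general idea behind building a synthetic PSF is no secret: measure the FWHM of a typical star in the image and generate a matching Gaussian kernel. Here's a toy, stdlib-only Python sketch of that step (not PI's implementation; the function name and parameters are just for illustration):

```python
import math

def gaussian_psf(fwhm, size):
    """Build a normalized 2-D Gaussian PSF kernel from a measured star FWHM (in pixels)."""
    # Convert FWHM to the Gaussian's standard deviation.
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    c = size // 2  # kernel center
    kernel = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
               for x in range(size)] for y in range(size)]
    total = sum(sum(row) for row in kernel)
    # Normalize so the kernel sums to 1 (preserves total flux during deconvolution).
    return [[v / total for v in row] for row in kernel]

psf = gaussian_psf(fwhm=3.0, size=7)
```

A PSF whose width actually matches the stars in your data is what keeps the ringing down; a kernel that's too wide or too narrow is a classic cause of those artifacts.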

The image in question is my most recent M33 image:

Thanks for the useful advice, guys! It helped me a lot!

- Marek

Gallery / M33 from September 2012, through an 80mm refractor
« on: 2013 February 16 13:14:23 »
Here's an image of M33, made from unbinned RGB data. I shot the data last fall, from a dark site in central California. I used my Orion ED80 semi-apo refractor, on a Losmandy G-11, with an SBIG ST-8300 mono camera. Subs were 15 minutes each, if I recall correctly. I wish I could remember how much total integration time went into the image, but I shot it over a few nights.

I'd try to embed the image, but it's taking forever (thanks to Flickr's changes to the way they handle their image links), and I'm burning daylight - I've got things to do. Here's a link to my blog post (which includes the image), and a link to the image on Flickr:

The hardest thing about this was Deconvolution. The magic solution was contained in Warren/RBA's video! I'm not going to give away the secret, but suffice it to say that their advice on how to get the PSF rescued me from months of frustration over Deconv!

- Marek

Wish List / Re: Quick Keyboard Access to Processes
« on: 2013 February 16 11:17:46 »
Thanks for making this suggestion, I agree that a Quicksilver-style "process launcher" would be great. I use Quicksilver as my main way of launching programs, and I really like it. Being able to bring up PI's Process Explorer (or something similar) with a user-defined key combination, followed by a "first-few-letters-based" selection method, would fit really well into my PI workflow.

(I must admit, though, that I need to study the PI keyboard shortcuts more, and I need to become more of a Quicksilver power user, instead of just using it as a program launcher.)

- Marek

General / Re: Pixinsight Gear
« on: 2013 February 10 22:17:45 »
I realize I'm geeking out here, but I would totally wear a shirt like that.

Maybe it will magically make me a better imager!   :tongue:

Either way, I think a simple design like that looks good.

- Marek

General / Re: Pixinsight Gear
« on: 2013 February 10 18:40:05 »
I could go for a bit of PI swag, too. Maybe a black T-shirt with some sort of PI logo, or a polo shirt, or a baseball hat. That would be the hip thing to wear at the next AIC, or when I'm volunteering at Lick.


Ah, you make a good point. I hadn't thought of that. I'll bet you're right - the LinearFit tool is probably not happy with the fact that one image has a single plane, while the other image has three planes.
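To make the mismatch concrete: if each image is viewed as a flat list of samples, a mono Luminance frame has W×H samples while an RGB frame has 3×W×H, so a fit can't proceed even when the pixel dimensions match exactly. Here's a toy, stdlib-only Python sketch of a LinearFit-style rescale (this is not PI's actual code; the names and the error string are just illustrative):

```python
def linear_fit(target, reference):
    """Toy LinearFit: find a, b so that (a + b * target) best matches `reference`
    in the least-squares sense. Images are flat lists of pixel samples."""
    if len(target) != len(reference):
        # A mono vs. RGB channel-count mismatch lands here even when
        # the pixel dimensions are identical.
        raise ValueError("Incompatible Image Geometry")
    n = len(target)
    mt = sum(target) / n
    mr = sum(reference) / n
    b = sum((t - mt) * (r - mr) for t, r in zip(target, reference)) \
        / sum((t - mt) ** 2 for t in target)
    a = mr - b * mt
    return [a + b * t for t in target]
```

The fix on the PI side would presumably be to fit against a single plane (e.g. an extracted luminance of the RGB image) rather than the three-plane image itself.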

Perhaps I should start a separate thread about this, but I guess my question raises a larger issue that I've never been very clear about... how best to match the histograms between an RGB and an L image, prior to doing the LRGB combination?

The PI instructional videos by Vicent show this being done with STF. I haven't had any luck with this, though. If I use the STF tool to do an AutoSTF on one image, I don't get a good result when I apply those same AutoSTF settings to the other image. (For example, the L image is always much brighter and more contrasty than the RGB image.) Same problem if I compute AutoSTF settings for each image individually.
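For context, the STF auto-stretch is at heart a midtones transfer function (MTF) with automatically computed parameters. A small Python sketch of the standard midtones formula (as a stand-in for PI's internals; the auto-parameter computation itself is omitted):

```python
def mtf(m, x):
    """Midtones transfer function: map sample x in [0,1] with midtones balance m.
    m = 0.5 is the identity; smaller m brightens, larger m darkens."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
```

This is why the two approaches diverge: computing AutoSTF per image gives the L and RGB images *different* midtones balances (each is normalized to its own statistics), while applying one image's settings to the other imposes a stretch tuned for the wrong histogram. Matching them presumably means choosing a single set of MTF parameters that both images get.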

Overall, I'm fairly confused regarding how to `match the histograms' between my linear RGB and L images, as I take them into the nonlinear realm for LRGB combination. I'm going to go search the forum and see if I can come up with anything; input from anybody would be appreciated, and I'm happy to take this to another thread if necessary.

- Marek

I'm trying to use the Linear Fit tool, but I'm having trouble with it. It's giving me a strange error message.

I have a linear RGB image, and I'm trying to use Linear Fit on a corresponding Luminance image. (Next, I hope to take them non-linear, and do an LRGB combination.)

When I set the RGB image as the reference image, and then try to apply an instance of the Linear Fit tool to the Lum image, I get an error that says `Incompatible Image Geometry'. But I've checked, and both images have the exact same pixel dimensions.

Has anyone else had this error occur? Does anyone know if there's something I'm missing?


- Marek

Recently, there have been some posts in a thread that concerns a possible PI image-acquisition module. I don't know if such a module will one day get written, but if so, I'm assuming it would basically replace programs like Maxim.

I currently use Maxim for image acquisition and autoguiding. I like Nebulosity and PHD, but AFAIK, I can't dither if I'm using those programs. So, I use Maxim since it allows me to dither.

If I could add a feature to Maxim - or, better yet, to a hoped-for PI acquisition module - it would be the ability to dither `every X frames'. That is to say, dithering between every single frame, or every fifth frame, or every twentieth frame, or whatever. My reason for this has to do with acquiring frames for HDR composition.

Let's say you're shooting M42. The long frames (say, 5 or 10 minutes each) should be dithered between each frame. That much is straightforward. But in order to avoid burning out the Trapezium, you'll have to shoot some very short frames, say only a few seconds each. If you dither between each of *those* frames, the time required for dithering will eat into the acquisition time significantly. But if you could dither only every 5 frames, or 10 frames, or whatever, that could save a lot of time, and would (I assume) still confer much of the benefit of dithering.
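The `every X frames' idea is simple to sketch as a schedule. A hypothetical helper in Python, just to illustrate the bookkeeping (not a feature of Maxim or PI):

```python
def dither_schedule(n_frames, every):
    """Yield (frame_number, dither_flag) pairs: dither only after
    every `every`-th frame, instead of after every single frame."""
    for i in range(1, n_frames + 1):
        yield i, (i % every == 0)

# 20 short frames with dithering every 5th frame: only 4 dither moves,
# instead of 20, so far less acquisition time lost to settling.
moves = sum(flag for _, flag in dither_schedule(20, 5))
```

With `every=1` this reduces to the usual dither-between-every-frame behavior, so the long M42 frames and the short Trapezium frames could share one mechanism with different settings.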

I wish Maxim had this feature. (Or, if it does, I wish I wasn't too dense to find it!  :tongue: )

- Marek

Image Processing Challenges / Re: Blotchy image
« on: 2013 January 25 14:09:34 »
Hi Julian,

That's great, I'm glad it helped! Last night I was working on some MMT denoising of an M33 image, and I think I ran into the `blotchiness' again. I was starting on the job of iterating the `Adaptive' settings when I had to break away and do other things. Hopefully I can get back to it soon. I'm glad to see that there seems to be a relatively straightforward way to deal with this particular class of denoising artifacts  :smiley:

- Marek

Image Processing Challenges / Re: Blotchy image
« on: 2013 January 22 18:09:41 »
I think I see the `blotchiness' to which you refer. I don't know if you used MMT (at the linear stage) for de-noising, but if so, it reminds me a little bit of the issues I had with `adaptive' settings in MMT. I'm not sure if it'll help, but here's an old blog post of mine, about this topic:

- Marek

This looks like a great tutorial! Thank you very much for sharing it!

- Marek

General / Re: Stretching in PI
« on: 2013 January 02 13:45:59 »
Hi Frank and Herbert,

Herbert is right: Ctrl+A (or Cmd+A on OS X, I think) is the way to control the parameters of the `auto-stretch' (also called the Auto STF).

I happened to stumble into this trick myself, recently, and I think it's a little-appreciated way of controlling the stretch, but a powerful one.

- Marek

New Scripts and Modules / Re: InterChannelCurves
« on: 2012 December 27 22:47:20 »
It sounds like an interesting module, and one worth experimenting with. I'll wait for it to show up on OS X, perhaps as part of PI 1.8 at some point. (In fact, I'm too deep into a processing project to update to 1.8 at the moment... I think it'll be a little while before I try 1.8, probably after it gets re-released.)

- Marek

Off-topic / Re: True Colors
« on: 2012 December 23 21:58:30 »
I hope I'm not hijacking this thread, but since Vicent has mentioned his `Dynamic Range and Local Contrast' article, I thought I'd take this opportunity to re-ask a question about that article.

About a year ago, I was trying to follow along with Vicent's article, and I ran into some things I didn't understand. Vicent, if you happen to have the time to look at these questions, could you help me understand how the PixelMath expressions are applied to certain images?

Here are the detailed questions:

Thanks, and I understand if you don't have time... everyone is very busy this time of year, and especially with `Ripley' coming out!

- Marek

Gallery / Re: ngc1333 LRGB
« on: 2012 December 23 21:33:04 »
Hi Jeff,

I don't know if this will be useful, but here's an example of a situation in which I had to deal with some blotchiness in a denoising routine:

This might be `apples and oranges', though, since I was doing noise reduction at the linear stage with MMT, and it sounds like you might have been doing noise reduction after stretching, using ACDNR.

- Marek
