Show Posts


Topics - marekc

Once again I find myself running into a problem with LinearFit, and I'm curious to see if anyone might know what's happening.

I am trying to make my first two-frame mosaic. I shot the R, G, and B frames for each panel last weekend, when the Moon was pretty bright, so all of the images have fairly heavy gradients. I've worked DBE pretty hard, trying to flatten everything as much as possible. I think I have removed the gradients reasonably well.

Whether I use StarAlignment or GradientMergeMosaic, the results are poor. Both methods align the panels and fit them together, but in the final mosaic one panel looks reasonably smooth while the other looks very noisy and grainy. It's much worse than just a visible seam, despite the fact that the background levels of both panels are pretty similar. (Also, I've used Frame Adaptation at all of the points where Steve Allan recommends it in his `Supermosaic' video.)

It occurred to me that since the images are still linear, maybe I should try to Linear Fit them, prior to building the mosaic. Problem is, I keep running into the `Incompatible Image Geometry' error, no matter what I do.

I have tried cropping one panel so it's the same size as the other... no joy.
I have tried Linear Fit on RGB images and on greyscale images of one color channel... no joy. (In other words, I'm avoiding the problem that I caused for myself the last time I posted about this.)

No matter what I do, I get `Incompatible Image Geometry'.

As usual, I'm probably missing something fairly basic. Does LinearFit only work on images that show the same part of the sky? Did I create an impossible situation for myself by trying to shoot a test mosaic under a moony sky?
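For what it's worth, here's my understanding of what a linear fit between two panels amounts to: fit target = a*reference + b by least squares over the pixel values, then rescale the target onto the reference's intensity scale. This is just a numpy sketch of the idea, not PI's actual code:

```python
import numpy as np

def linear_fit(reference, target):
    """Fit target ~ a*reference + b by least squares, then rescale the
    target to match the reference's intensity scale. A sketch of the
    idea behind fitting two mosaic panels together, not PI's code."""
    a, b = np.polyfit(reference.ravel(), target.ravel(), 1)
    matched = (target - b) / a   # invert the mapping: back to reference scale
    return a, b, matched

# Toy example: the "target" panel is the reference with a different
# gain and pedestal, as two mosaic panels might be.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = 1.5 * ref + 0.02
a, b, matched = linear_fit(ref, tgt)
print(round(a, 3), round(b, 3))   # 1.5 0.02
```

With noise-free toy data the fit recovers the gain and pedestal exactly, and `matched` lands right on the reference.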

- Marek

Gallery / M33 from September 2012, through an 80mm refractor
« on: 2013 February 16 13:14:23 »
Here's an image of M33, made from unbinned RGB data. I shot the data last fall, from a dark site in central California. I used my Orion ED80 semi-apo refractor, on a Losmandy G-11, with an SBIG ST-8300 mono camera. Subs were 15 minutes each, if I recall correctly. I wish I could remember how much total integration time went into the image, but I shot it over a few nights.

I'd try to embed the image, but it's taking forever (thanks to Flickr's changes to the way they handle their image links), and I'm burning daylight - I've got things to do. Here's a link to my blog post (which includes the image), and a link to the image on Flickr:

The hardest thing about this was Deconvolution. The magic solution was contained in Warren/RBA's video! I'm not going to give away the secret, but suffice it to say that their advice on how to get the PSF rescued me from months of frustration over Deconv!

- Marek

I'm trying to use the Linear Fit tool, but I'm having trouble with it. It's giving me a strange error message.

I have a linear RGB image, and I'm trying to use Linear Fit on a corresponding Luminance image. (Next, I hope to take them non-linear, and do an LRGB combination.)

When I set the RGB image as the reference image, and then try to apply an instance of the Linear Fit tool to the Lum image, I get an error that says `Incompatible Image Geometry'. But I've checked, and both images have the exact same pixel dimensions.

Has anyone else had this error occur? Does anyone know if there's something I'm missing?
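One guess on my part (not anything I've found in the documentation): maybe `geometry' means the channel count as well as the pixel dimensions, in which case an RGB reference against a mono Lum image would fail even when the sizes match exactly. In numpy terms:

```python
import numpy as np

def same_geometry(a, b):
    """Return True only if two image arrays match in width, height,
    AND channel count -- a plausible reading of what `Incompatible
    Image Geometry' checks (my assumption, not PI's documented test)."""
    return a.shape == b.shape

rgb = np.zeros((1024, 1024, 3))   # RGB: three channels
lum = np.zeros((1024, 1024))      # mono luminance: one channel

print(same_geometry(rgb, lum))          # False: same pixel dims, different channel count
print(same_geometry(rgb[..., 0], lum))  # True once both are single-channel
```

If that's the cause, extracting one channel from the RGB image (or the RGB image's luminance) and fitting mono-to-mono would be the workaround.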


- Marek

Recently, there have been some posts in a thread that concerns a possible PI image-acquisition module. I don't know if such a module will one day get written, but if so, I'm assuming it would basically replace programs like Maxim.

I currently use Maxim for image acquisition and autoguiding. I like Nebulosity and PHD, but AFAIK, I can't dither if I'm using those programs. So, I use Maxim since it allows me to dither.

If I could add a feature to Maxim - or, better yet, to a hoped-for PI acquisition module - it would be the ability to dither `every X frames'. That is to say, dithering between every single frame, or every fifth frame, or every twentieth frame, or whatever. My reason for this has to do with acquiring frames for HDR composition.

Let's say you're shooting M42. The long frames (say, 5 or 10 minutes each) should be dithered between each frame. That much is straightforward. But in order to avoid burning out the Trapezium, you'll have to shoot some very short frames, say only a few seconds each. If you dither between each of *those* frames, the time required for dithering will eat into the acquisition time significantly. But if you could dither only every 5 frames, or 10 frames, or whatever, that could save a lot of time, and would (I assume) still confer much of the benefit of dithering.
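In pseudocode terms, the scheme I'm imagining is tiny, and the time savings are easy to estimate. (Python here just to sketch it; `expose'/`dither' are hypothetical stand-ins for whatever the acquisition program actually calls.)

```python
def run_sequence(n_frames, dither_every, exposure_s, dither_s):
    """Sketch of an acquisition loop that dithers only every
    `dither_every` frames, plus the total wall-clock time that costs.
    The expose/dither steps are hypothetical stand-ins."""
    events = []
    total_s = 0.0
    for i in range(n_frames):
        if i > 0 and i % dither_every == 0:
            events.append("dither")       # nudge the mount, settle
            total_s += dither_s
        events.append("expose")           # take one sub
        total_s += exposure_s
    return events, total_s

# 100 five-second Trapezium frames, 20 s per dither-and-settle:
_, every_frame = run_sequence(100, 1, exposure_s=5, dither_s=20)
_, every_tenth = run_sequence(100, 10, exposure_s=5, dither_s=20)
print(every_frame, every_tenth)   # 2480.0 680.0
```

With those made-up numbers, dithering every frame takes almost four times as long as dithering every tenth frame, for the same number of short subs.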

I wish Maxim had this feature. (Or, if it does, I wish I wasn't too dense to find it!  :tongue: )

- Marek

General / Notifications - are they possible while working outside PI?
« on: 2012 September 29 12:57:05 »
I often have a PI project open while I'm doing other work. I like to `chip away' at the PI project in between doing other things. I work on a MacBook Pro, using Spaces and Exposé (on my Snow Leopard machine, or RealSpaces software on a Lion machine).

Does anyone know if the following is possible? If I have a process or script running in PI, I'd like to be notified with an audio alert and/or something like Growl when the process finishes. For example, if I'm using bitli's VaryParams script, it would be nice to see a little Growl notification when it's done. Same if I'm just running a single Deconv or MMT-denoising run in PI. That way, I could be working in another Space, using some other program, and I'd know when my process or VaryParams run is done.
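For the audio-alert half, I could imagine wrapping some long external step in a little script that speaks when it's done, using the Mac's built-in `say' command. (A sketch only: I don't think anything running inside PI can call out like this, so this would only help for steps you can launch from outside.)

```python
import subprocess
import sys

def audio_alert(message="Process finished"):
    """Speak an audio alert with the macOS `say` command. Meant to be
    called by a wrapper script after a long task completes; a sketch
    of the notification half only."""
    cmd = ["say", message]
    if sys.platform == "darwin":           # only attempt on a Mac
        subprocess.run(cmd, check=False)
    return cmd

cmd = audio_alert("VaryParams run is done")
print(" ".join(cmd))
```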

- Marek

Okay everybody, I've got a bit of a `thinker' for you. I suppose this is mostly directed at the PI authors and gurus, like Juan, Vicent, etc..., but I'd be curious to hear anyone's opinion.

Many of us have commented on the difficulty we commonly have when choosing parameters for PI's processes, such as Deconvolution, MMT denoising, StarMasking, etc... We are often somewhat overwhelmed by the number of things to adjust. I'm always very grateful to read, say, one of Juan's tutorials on noise reduction or deconvolution, in part because he gives us useful settings to try as a starting point.

It's finally starting to dawn on me that the VaryParams script could be pretty useful here. I've known about it, and I've had it saved on my computer for a while, but I've finally started trying to use it to help me figure out some Deconvolution settings for an image. Here's what hit me the other day: It's like starting an imaging run! Setting VaryParams to explore a parameter is kind of like setting my telescope/mount/camera on a run of light frames. I go off and do other things, such as observing with binoculars (in the case of an imaging run) or doing something else on the computer (in the case of VaryParams). Okay, so far so good.

Here's what I'm wondering... for a given process, such as Deconvolution, how much does the ORDER of our explorations matter? For example, in Deconv, the `Global Dark' setting matters a lot. It wasn't until I read Juan's tutorial on Deconvolution that I found out it should be set to a very low value, such as 0.005. If I'm trying to use VaryParams to explore the parameter space within Deconvolution, I might start my exploration by doing several Deconv runs with different values of Global Dark. Once I find the best value, I might go on to some other parameter.

But here's what nags at me: What if the parameters affect each other so much that we can't take this `sequential exploration' approach? What if, having found a Parameter X that looks best, the optimal Parameter Y (or whatever) would actually necessitate going back to Parameter X and re-tweaking it? In such a case, it seems to me that our exploration of `Pixinsight parameter space' is fairly close to hopeless, without some massively automated AI system doing it for us.

So, my question boils down to this: In most of PI's major modules (Deconv, MMT denoising, etc...), will we get good results if we iterate parameter-by-parameter, sequentially? And more to the point, *what order of parameters should we use*?

I think that if the PI Team could give us a simple list of the `parameter order' for each processing module, these lists would be some very powerful information for PI users. We'd know what order to use VaryParams in, and then processing in PI becomes like shooting a celestial object with a CCD camera and autoguider... just set it up and let it go. Forget that manual guiding with the illuminated cross-hairs, go take a nap or look through a visual scope or do something that you actually enjoy. In the case of PI, forget staring at PI all day, just check on it from time to time to see if you need to set VaryParams on some new task, and go live your life. True, processing wouldn't *really* become a brainless, automated task. A reasonably deep understanding of the methodology of imaging and the workings of PI will still be necessary, as well as an aesthetic sense. But this just seems to be a way to harness the power of a computer to do our iteration for us, instead of staring dumbly at the PI screen, as I do.
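If it helps, the `sequential exploration' I'm describing is what optimization folks call coordinate descent: sweep one parameter with the others fixed, keep the winner, move on. Here's a toy Python sketch (nothing to do with PI's internals) showing it nail a separable objective immediately, and creep along slowly when the two parameters interact strongly:

```python
import itertools

def coordinate_descent(score, xs, ys, sweeps=3):
    """Greedy one-parameter-at-a-time search, like running VaryParams
    on parameter X, fixing the winner, then sweeping parameter Y.
    `score` is lower-is-better."""
    x, y = xs[0], ys[0]
    for _ in range(sweeps):
        x = min(xs, key=lambda v: score(v, y))   # sweep X with Y fixed
        y = min(ys, key=lambda v: score(x, v))   # sweep Y with X fixed
    return x, y

def grid_search(score, xs, ys):
    """Exhaustive search over every (x, y) pair, for comparison."""
    return min(itertools.product(xs, ys), key=lambda p: score(*p))

xs = ys = [i / 10 for i in range(11)]            # 0.0 .. 1.0 in steps of 0.1
separable = lambda x, y: (x - 0.3) ** 2 + (y - 0.7) ** 2   # no interaction
coupled   = lambda x, y: (x - y) ** 2 + 0.1 * (x + y - 1.6) ** 2  # strong interaction

print(coordinate_descent(separable, xs, ys))   # (0.3, 0.7): matches the grid search
# Grid search finds the coupled optimum (0.8, 0.8); sweep-by-sweep lags behind it:
print(grid_search(coupled, xs, ys), coordinate_descent(coupled, xs, ys))
```

So sequential sweeps are fine when the parameters are (close to) independent, and need repeated re-tweaking rounds when they aren't, which is exactly the worry.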

So, to make this work... what is the preferred order of `parameter exploration' for each of PI's modules? Hmm? If we have this, we can (sort of) all become PI wizards like the gurus we admire so much! (There's some tongue-in-cheek exaggeration there, of course.)

Wait a second! Now I know what image processing (particularly with PI) is like! This is a problem in cryptanalysis! I can finally put my finger on what processing feels like. The correct set of parameters to apply in PI, which would yield the best possible image (according to my aesthetic sense), is the plaintext. The image data are the ciphertext. PI is the Enigma Machine. It has `rotors', `reflectors', plugboard settings, a day key, and the like. VaryParams is like one of Turing's `bombes', or Colossus. It has the power to help me explore parameter space and make progress on the decryption, but it isn't strong enough to do the whole process by brute force. Ideally, different parts of the decryption process could be isolated from each other, so it could be done in a piecewise way. To the extent that the different parts of the decryption *can't* be isolated from each other, the task becomes more hopeless.

My dream of the `parameter exploration order lists' would be like telling Marian Rejewski or Alan Turing what parts of the decryption process could be isolated from which other parts, thus allowing their machines to do as much of the work as possible.

- Marek


This is a pretty small bit of progress, but I recently made a bit of headway with MMT-based noise reduction. I generally find myself intimidated by the settings and sliders in MMT, but I finally learned a little bit about the `Adaptive' sliders. I was able to use them to eliminate some dark blobs that remained after an otherwise-decent-looking (to me) noise reduction.

I've described it here:

- Marek

Hi Everyone,

This is a `newbie questions' post, but not for the usual reasons. I'm not posting a question about PI processing, per se, but about how to use this forum. I think my questions will show just how much I'm not a programmer.  :-[

For everything described below, I use a 2009-era MacBook Pro, running Snow Leopard, and I browse the web with Safari. My descriptions of various posts are based on what happens when I'm logged in to the PI Forum.

I'd never tried including an image in one of my posts until yesterday. I made a test post that had an image in it. The image is hosted in my Dropbox public folder. I added the image by clicking the `Insert Image' button in the `Start new topic' message-composition window. When I did that, it put some markup tags in my message. I then copy-pasted the URL of my Dropbox-hosted image between the tags. That caused my image to go into my post, at full size (as near as I can tell). Here's my post:

I'm glad I was able to add an image to my post, but I've seen other people adding images and files in different ways. I'm curious to understand how they did it. Here are some examples:

1) A post by Cleon_Wells: It has two images. Each image is represented by a small version of the image and by a text link. The small version of each image is clickable, and opens up in a new Safari window. Each text link is clickable, too (it has a paperclip symbol next to it), and it causes a copy of the image to download into my `Downloads' folder, which my computer then auto-opens with Preview.

2) A post by DanielF: An image is present in the post, and it appears about as wide as the post's column of text. The image is clickable, and it leads to the image in the author's Flickr photostream.

3) A post by `Tom OD' that has a .doc file attached. A clickable text link is present in his post, but no image, presumably since the linked file is a word-processing-type file. The link goes to a Word document.

If anyone can help me understand the different ways in which these authors inserted images, and linked files to their posts, I'd be grateful. I'd like to be a bit more of a `master' when it comes to doing this sort of thing. Since I'm not a programmer, all of this stuff with HTML, PHP, SMF forums, and so on is a bit mysterious to me.


- Marek

Off-topic / Test post to try and insert an image
« on: 2012 July 13 22:37:44 »
This is a test post, to see if I can insert an image.

This is the Leo Triplet, which I shot with an 80mm refractor, and an ST-8300M camera, through a Luminance filter:

Okay, judging from the Preview of my post, the image was inserted correctly. It appears to have been inserted at full size. I'll go back to the Simple Machines Forum wiki, and see if there's any way I can change the size of the image *as it appears in the post*, while still making the image in the post link to the full-sized image.

The objects in the image were annotated with the Pixinsight `Annotation' script. More information can be found in my imaging blog:

- Marek

I've suddenly started having a GUI bug in the Process Explorer window of PI. When I click on the Process Explorer tab, the Process Explorer palette comes out, as usual, but the right half of the palette is covered with the Documentation Browser. I can still click on Process icons, and the Process instances start up just like I want, but I can't see the names of the Processes very well. I've tried everything I can think of, but I can't get the partial Doc Browser to go away. I can't properly read the Documentation, either, since the Doc Browser `cover' is so narrow.

I'm running PI 1.7 on a 2009-era MacBook Pro, under OS X `Snow Leopard' (10.6.8). Several updates of the documentation have recently installed themselves, but the problem doesn't seem to have started after one of these updates. It just seems to have started by itself.

Hi Everyone,

I'm trying to make an `overlay map' to put on an image of M31. I shot M31, and processed it in PI, and I'm pleased with the result. I would like to overlay an isophote map published by some professional astronomers. The isophote map is basically a drawing consisting of a bunch of lines.

I have the isophote map as a PDF, and I can re-save it in pretty much any format required, using Photoshop. I can even use PS to do some basic scaling, rotation, etc... I could use PS to invert it, and remove the background, so that it would just consist of glowing white lines on a transparent background - perfect for overlaying onto my image. But, I need to be able to pick corresponding `control points' on each image, and distort the overlay to match my M31 image.

Photoshop has a `Warp' tool, which sort of works, but doesn't work as well as I'd like. The isophote map has a number of stars marked on it, and I'd like to get those marked stars to `drop onto' my stars with a reasonably high degree of accuracy.

All of this reminds me a lot of Dynamic Alignment. I see that I can choose an `inverted' registration point, but that isn't doing what I want. My main problem is that the control points keep `snapping' to certain features in the target image. I try to put the target-image version of the control point on a particular feature, but Dynamic Alignment keeps saying `no, no, I've detected a feature over HERE! This is where I want to put the control point!'

This makes perfect sense, of course, since Dynamic Alignment is very good at finding stars, which is what it's designed to do. But I wish I could invoke a `manual override', so that I could set the `target' version of the control point wherever I want. Does anyone know if this is possible?

I've tried to find other software to do this kind of thing, without much success. Panorama-stitchers like Hugin don't seem to do what I need. Photoshop's `Warp' tool is too much of a blunt instrument. Dynamic Alignment seems to have the power that I need, but it seems to be designed to work with stars, and I just can't place the points where I need them, unless there's something I'm missing.
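In case it's useful to anyone: the hand-placed control-point fitting I'm after is easy to sketch for a single global transform. Here's a least-squares affine fit from point pairs in numpy; a real solution for my map would need something local like a thin-plate spline, since an affine transform can't bend one part of the overlay independently of another, but this shows the idea:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping control points `src`
    (e.g. stars marked on the isophote map) onto `dst` (the same
    stars in the photo). Rows are (x, y) points; at least 3
    non-collinear pairs are needed."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Append a column of ones so translation is part of the solution.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 affine matrix
    return M

def apply_affine(M, pts):
    """Transform (x, y) points with a fitted 3x2 affine matrix."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Toy check: recover a known 90-degree rotation + offset from 4 marked stars.
src = [(0, 0), (100, 0), (0, 100), (100, 100)]
true = np.array([[0, 1], [-1, 0], [50, 20]])
dst = apply_affine(true, src)
M = fit_affine(src, dst)
print(np.allclose(M, true))   # True
```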


- Marek Cichanski

Hi Everyone,

I think I may be a Pixinsight newbie forever, but I try to make progress! I was thinking about how I might record some of what I've learned about PI... should I make notes in a paper notebook, should I use an `everything bucket' like Evernote, should I make text files... etc...

I was watching one of Harry Page's videos when it occurred to me... I have Camtasia, too! If I think I've got something figured out, I could just make a video of it.

So, I made a short video of a PI trick:

I hope that link works. This trick is REALLY basic - I'll bet that most of you won't find this particularly new. Basically, I was thinking `I don't really understand anything about the History Explorer. I wonder if I could take a processing step from an image's History Explorer and apply it to another image?' I think I can.

I don't know if I'm right, but FWIW, I made a video and put it on YouTube, mostly to see if I could. I'm no Harry Page, but it's a way of recording some of what I've learned (or think I've learned) about PI. Maybe I'll make more videos, who knows.

Sorry if I talk too fast, I was basically thinking out loud.

Hi Everyone,

I just did a search on the phrase `no correlation', to see if anyone has been having the same problem that I have. This has been a pretty big problem for me during the last few weeks.

I'm trying to calibrate monochrome CCD images that I shot with an Orion Parsec 8300M camera. I shot lights, flats, darks, and biases. I have prepared the master bias, dark, and flat frames according to the standard Pixinsight tutorial:

After making these master frames, I've taken a quick look at each of them with an auto STF, and they look like images I've seen in books about CCD imaging, showing what the master frames `should' look like (e.g. Berry and Burnell, Wodaski).

I am applying these master frames to my light frames, using the Image Calibration module (The light frames also look normal with an auto STF).

However, when I try to calibrate my light frames, two strange things happen:

1) The processing console tells me that there is `no correlation' between my dark frame and my target frame.

2) I get a `calibrated' light frame that looks very strange... when I do an auto STF on it, it is perfectly gray, except for the brighter pixels, which are clipped to white. It has what I'd call a `binarized' look. There is no useful data in the image. It's hardly even an image any more.

In the Image Calibration module, I've tried various combinations of checking and unchecking the `calibrate' and `optimize' boxes, in all of the places where they appear, without any luck.

I'm wondering if I'm just making a mistake, or if this might possibly be a bug? My darks, flats, biases, and lights were shot with the same camera, at the same temperature, and the same binning. They were saved at the same bit depth. (I get this problem both when I acquire everything at 16-bit and when I acquire everything at 32-bit depth).
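For reference, here's the arithmetic I understand basic calibration to be doing: subtract the master dark (which carries the bias pedestal if the darks weren't bias-subtracted), then divide by the flat normalized to its mean. This is a numpy sketch of the standard recipe, not ImageCalibration's actual code, and the `optimize' option additionally scales the dark, which is presumably where the `no correlation' message comes from:

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Standard CCD calibration sketch: remove the dark/bias pedestal,
    then divide out the vignetting recorded in the flat (normalized
    to its mean so the overall signal level is preserved)."""
    flat_norm = master_flat / master_flat.mean()
    return (light - master_dark) / flat_norm

sky = np.full((8, 8), 1000.0)                    # "true" signal
dark = np.full((8, 8), 50.0)                     # dark + bias pedestal
flat = np.linspace(0.8, 1.2, 64).reshape(8, 8)   # vignetting profile
light = sky * flat + dark                        # what the camera records
cal = calibrate(light, dark, flat)
print(np.allclose(cal, sky))   # True: pedestal and vignetting removed
```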

There have been a couple of recent threads that have mentioned this issue:

I'm using the eng (x86) build, running under Mac OS X 10.6 (Snow Leopard) on a MacBook Pro.

Thanks, hopefully this is just some simple mistake that I'm making. I hope to be able to calibrate frames again sometime soon.

- Marek Cichanski

Off-topic / S & T Interview that mentions Pixinsight (+ IRAF !)
« on: 2011 February 08 14:09:31 »
Hi Everyone,

There's an article (on the Sky and Telescope website) about the recent `Hidden Treasures' competition held by ESO:

I couldn't help noticing that Joe DePasquale (the 4th-place finisher), who is mentioned in the article, says that he uses Pixinsight! Nice bit of publicity for Juan and Co.  :D   As a perpetual PI Newbie, this was one more bit of encouragement to keep `fighting the good fight' as I learn PI.

I found one of Mr. DePasquale's comments very interesting... he said that he calibrated the data with IRAF, and then used PI.

Yes, Pixinsight is very challenging, but IRAF seems to be a much scratchier `hair shirt' to put on! I once attempted to learn how to use it, but I haven't progressed very far at all. It's a UNIX-based software package that professional astronomers use for their processing and analysis, and there's not much reason for amateurs to use it. I doubt that it does anything that `our' software (PI, PS, Maxim, AIP4WIN, etc) can't do, at least in terms of making aesthetically pleasing images. My only reason for wanting to `know IRAF' was purely for the satisfaction of having mastered such a difficult beast... but I haven't gotten very far  :-[  I really don't know UNIX, so it would be a great deal of work for me to learn IRAF, and it probably wouldn't `buy' me anything.

But... I would be very curious to know why Joe DePasquale used IRAF for his calibration. I wonder why he didn't use PI? It seems likely that the files he used were FITS files (though I don't know this), so it would seem to me that PI could `eat' pretty much anything that ESO would have in their archives.

I don't know if he reads the PI Forum, but if so, I'd be curious to learn more about his IRAF/Pixinsight workflow someday.

- Marek Cichanski

General / My first post: A question about printing
« on: 2011 January 30 19:48:23 »
Hi Everyone,

This is my first post to the Pixinsight Forum. I've been using PI for a few months now, and I'm enjoying the challenge of trying to get the most out of my data!

I'm fairly new to astro-imaging, I'm still not sure that I've made anything yet that's worth showing to the world... but hopefully soon!

I live in the San Francisco Bay area of the U.S., so naturally I've been greatly inspired by RBA, and the other talented and diligent imagers in this part of the world.

My equipment is fairly rudimentary, it's basically all from `Orion', the U.S. importer of astronomy gear. I'm using their `ED80' f/7.5 refractor, their Parsec 8300 monochrome camera, and their LRGB filters. I'm running Pixinsight 32-bit on a MacBook Pro using Mac OS X 10.6 (`Snow Leopard'). I just calibrated this laptop's display with a DataColor Spyder 3 Pro colorimeter.

My first question for the forum is related to *printing*. Although I don't feel like I've made any world-beating images yet, I thought I'd try printing a few images in small sizes. I'm using an Epson Stylus C88+ printer - it's a rather basic small desktop printer, the kind that they give away at the Apple Store when you buy a computer.

Here's my problem: When I try to print from PI, the Processing Console says:

Adobe PDF 9.0
Sending data to printer...
<* failed *>

I also get a pop-up window that says `Unexpected error during printer initialization'.

I am able to print with this printer using Photoshop CS4, Acrobat CS4 (=9.4.1), and other applications, like my LaTeX front end (TeXShop), OpenOffice, and TextEdit.

I was wondering if anyone else has encountered a `printer initialization' error like this?

I suppose it's too early to ask this next question, but... if it ends up not being possible to use this printer with PI, I wonder what would be the best workflow for producing a file in PI, and then opening it in PS and printing from there? I suppose that's best left as a case of `we'll cross that bridge when we come to it'.

Thanks for the great software and for the very helpful forum!

- Marek Cichanski
San Francisco Bay area, U.S.A.
