Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - james7

Pages: [1] 2 3 4
Wish List / Re: modification to Image Annotation Script
« on: 2019 December 24 08:14:39 »

However, you can currently create individual layers that are transparent and then overlay those layers in a program like Photoshop. Once you do that you can offset or move each individual layer.
AnnotateImage can generate a SVG overlay that can be edited using a vector image program (i.e. Inkscape) and then merged with the original image.
Thanks, I'll give that a try. However, I'd still prefer being able to edit a custom catalog that could be output (optionally) from the ImageAnnotation tool. As it is, I've just created custom catalogs that use data from the object text file that can already be output by the script. But, I don't think I'd want to do that very often because there is a fair amount of manual work to create the catalog.

Wish List / Re: modification to Image Annotation Script
« on: 2019 December 23 21:13:04 »
Yes, a way to offset the labels would be very useful. However, wouldn't that have to operate on each individual object? I'm not sure how that could be done. I guess what Juan is suggesting is that the annotation script itself should use the font metrics to detect and prevent any conflicts. But that could be a little difficult if there are multiple labels that overlay each other. You could certainly offset those automatically using the font metrics, but would that arrangement really make any sense in the final annotation? It could be difficult to tell which label goes with each object.

However, you can currently create individual layers that are transparent and then overlay those layers in a program like Photoshop. Once you do that you can offset or move each individual layer.

That said, it would also be nice if you could create custom catalogs directly as output from the annotation script. Currently you can output all of the annotation objects to a simple text file but the format isn't the same as needed to create a custom catalog. The custom catalog has a format as defined below (by Andres.Pozo):
The custom catalog format is a text file where each line is a record with fields separated by tabs. The first line must be a header that defines the columns of each record. The available columns are these:

NAME: (Optional) Screen name of the object (i.e. NGC4565)
RA: (Mandatory) Right ascension in degrees (not hours) (i.e. 225.211545)
DEC: (Mandatory) Declination in degrees (i.e. -53.25664)
DIAMETER: (Optional) Diameter of the object in arcminutes. If this column doesn't exist or the value is 0, the object is considered as a point.
MAGNITUDE: (Optional) Magnitude of the object.
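For illustration, a catalog file in the tab-separated format described above could be read with a short sketch like the following (Python; the parsing details and the sample catalog line are my own assumptions, not the annotation script's actual code — note that splitting on tabs only would preserve any leading spaces inside a NAME field):

```python
# Minimal parser for the custom catalog format described above:
# a tab-separated text file whose first line is a header naming the columns.
# NAME, DIAMETER and MAGNITUDE are optional; RA and DEC are mandatory.

def parse_catalog(text):
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header = lines[0].split("\t")
    objects = []
    for line in lines[1:]:
        # Split on tabs ONLY, so leading spaces inside a field survive.
        fields = line.split("\t")
        record = dict(zip(header, fields))
        record["RA"] = float(record["RA"])    # degrees, mandatory
        record["DEC"] = float(record["DEC"])  # degrees, mandatory
        # Missing or zero DIAMETER means the object is treated as a point.
        record["DIAMETER"] = float(record.get("DIAMETER", 0) or 0)
        objects.append(record)
    return objects

# Hypothetical sample reusing the coordinates from the example above.
sample = "NAME\tRA\tDEC\tDIAMETER\nNGC4565\t225.211545\t-53.25664\t15.9\n"
print(parse_catalog(sample))
```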
So, if you can output a text file that has all of the objects, why can't you output a custom catalog in the same format as needed by the annotation script? With that custom catalog you could edit or change whatever you wanted (including the removal of unwanted objects or the addition of others). Worst case, you could change the RA and DEC values to offset the name (and use a separate transparent overlay to output just the names -- without the markers). However, it would be nice if the NAME field in the above format allowed leading space characters, since that padding would allow individual horizontal offsets to the names. Right now it looks like the custom catalog parsing just ignores leading spaces in the names, which seems unnecessary since tabs are used as field separators (the parser should allow all whitespace, except tabs, to be taken as literal).

Also, it looks like the MAGNITUDE field only accepts numbers, which precludes doing something like mag=11.2. Maybe this limitation is in the annotation mechanism itself, but it would be useful to have MAGNITUDE be a simple text field, or to redefine that field -- or add another -- for optional text. Right now it looks like the custom catalog doesn't do anything with the MAGNITUDE field except allow it to be printed in one of the selected field positions; there is no magnitude limit or range-test option in the custom catalog parameters (just the marker and label options).

I guess the best case would be to keep MAGNITUDE as it is and then add a range test for that field, plus a custom text field that could be positioned as done with the other labels. That way you could add something like "mag=11.2" (or whatever) along with the object name. So, the new custom catalog format would look something like the following:

NAME: (Optional) Screen name of the object (i.e. NGC4565)
RA: (Mandatory) Right ascension in degrees (not hours) (i.e. 225.211545)
DEC: (Mandatory) Declination in degrees (i.e. -53.25664)
DIAMETER: (Optional) Diameter of the object in arcminutes. If this column doesn't exist or the value is 0, the object is considered as a point.
MAGNITUDE: (Optional, must be numeric value) Magnitude of the object.
TEXT: (Optional) Custom text defined by the user.
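The suggested magnitude range test could be sketched roughly as follows (hypothetical Python; `passes_magnitude_filter`, the parameter names, and the record layout are made-up illustrations, not part of any existing script):

```python
# Hypothetical sketch of the suggested MAGNITUDE range test for a custom
# catalog record. An object with no magnitude is kept, matching the idea
# that MAGNITUDE stays an optional column.

def passes_magnitude_filter(record, mag_min=None, mag_max=None):
    mag = record.get("MAGNITUDE")
    if mag is None or mag == "":
        return True  # no magnitude given: keep the object
    mag = float(mag)
    if mag_min is not None and mag < mag_min:
        return False
    if mag_max is not None and mag > mag_max:
        return False
    return True

# A record using the proposed optional TEXT column for a free-form label.
record = {"NAME": "NGC4565", "MAGNITUDE": "10.4", "TEXT": "mag=10.4"}
print(passes_magnitude_filter(record, mag_max=11.0))  # True for mag 10.4
```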

General / Re: Batch RGB Combine?
« on: 2017 May 13 19:20:54 »
Well, it looks like MATLAB is out since they won't allow me to download the free trial and I'm not going to pay a minimum of $200 just to see whether it might work.

So, it looks like I might have to write my own software to do this seemingly simple task.

General / Re: Batch RGB Combine?
« on: 2017 May 13 18:47:45 »
These are from a series of 1000-frame mono videos of the moon that have been registered by PI's FFTRegistration script, and now I want to combine them into a set of RGB files so that they can be processed (in color) with AutoStakkert! They need to be combined into RGB so that I can use AutoStakkert!'s MAP and RGB registration options. I need RGB images because it appears that if you run AutoStakkert! on the mono files you can't get the channels to align properly after the fact, I think because the MAP processing introduces distortions in the final mono images. However, I'm hoping that if I submit RGB files to AutoStakkert! it will maintain and even improve the registration between the three color channels.

In any case, I think I've found software that will do what I need: MATLAB (kind of a PixelMath on steroids, since it allows operations on file-based images). Unfortunately, MATLAB is kind of expensive, but they offer a free trial period, so I think I'll be able to use that to verify whether my process will work to produce high pixel count, high resolution RGB or narrow-band color images of the moon.

I also tried to use Photoshop's video Timeline feature, but that part of Photoshop seems very buggy and feature-limited, and I couldn't find any way to combine channels from two different videos or timelines. It might be possible to do these operations in Photoshop using its batch processing feature and a custom action to combine the channels, but batch processing in Photoshop seems to be limited to a single file path or directory and I need to work on three sets of files at one time.

General / Batch RGB Combine?
« on: 2017 May 13 02:13:23 »
Is there any way to do a batch RGB combine within PixInsight without writing a custom script? Basically, what I need is the inverse of the BatchChannelExtraction but with the ability to specify input lists for each of the monochrome red, green, and blue files. The output would simply pair each of the channels to produce a set of RGB files.

Here is an example:

input file lists:
red_1, red_2, red_3, red_4...red_n
green_1, green_2, green_3, green_4...green_n
blue_1, blue_2, blue_3, blue_4...blue_n

rgb_1, rgb_2, rgb_3, rgb_4...rgb_n

The file naming isn't an issue, since I have tools to do batch renaming. I also don't need to specify any out of order pairing, just process the files in sequential order.
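The sequential pairing step itself is trivial to sketch (Python; the file names are hypothetical, and the actual channel combination would still be done by an image tool — this only shows the list pairing the requested batch process would need):

```python
# Sketch of the requested batch pairing: given three equally long, already
# sorted lists of mono frames, pair them in sequential order and derive an
# output name for each RGB result. File names here are made up.

red   = ["red_1.fit", "red_2.fit", "red_3.fit"]
green = ["green_1.fit", "green_2.fit", "green_3.fit"]
blue  = ["blue_1.fit", "blue_2.fit", "blue_3.fit"]

assert len(red) == len(green) == len(blue), "channel lists must match"

jobs = [
    (r, g, b, "rgb_%d.fit" % (i + 1))
    for i, (r, g, b) in enumerate(zip(red, green, blue))
]
for r, g, b, out in jobs:
    # Here a real tool would load r, g, b and write the combined RGB file.
    print(r, g, b, "->", out)
```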

I have way too many files to do this manually, and it doesn't seem possible (given my limited knowledge) to do this with either PixelMath or with a file container.

If there is no relatively easy way to do this in PixInsight, can anyone recommend another tool? I've already looked at PIPP and it seems to have the same limitation: you can specify a form of "batch" channel extraction, but then there is no way to reverse that operation.

Thanks for contributing the video.

New Scripts and Modules / Re: ImageSolver with SurfaceSplines
« on: 2015 July 01 20:57:47 »
I'll give these a try and thanks for your continued effort to support a great set of PI scripts.

General / Re: FWHMEccentricity maps
« on: 2015 May 11 20:13:16 »
It would help on the FWHM to know your pixel scale in arc seconds. The only thing you can really tell from the map you provided (in pixels, not arc seconds) is that you are getting close to being undersampled, meaning that the stars in your image are only being sampled over 2 pixels, which is cutting it kind of close for really accurate FWHM measurements. Moreover, if you are imaging with a one-shot color camera (with a Bayer pattern) then the sampling is effectively even worse (or lower).

In any case, consider the case where you are imaging at 2 arc seconds per pixel: that would give you a FWHM of about 4 arc seconds, which is certainly fine but not great. However, if you were imaging at 4 arc seconds per pixel you'd have a FWHM of around 8 arc seconds, which would be pretty bad (indicating, perhaps, a problem with focus). You also need to consider what happens if you have bad seeing or poor focus; in that case the FWHM (in arc seconds) will appear poor but the eccentricity may look surprisingly good (i.e. the stars will be "big" and round).
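The arithmetic behind those numbers is just the pixel scale conversion (a trivial Python sketch; the 2-pixel FWHM is the sampling figure from the map discussion above):

```python
# Convert a FWHM measured in pixels to arc seconds using the pixel scale.
# A star sampled over ~2 pixels is assumed, as in the discussion above.

def fwhm_arcsec(fwhm_pixels, pixel_scale):
    """pixel_scale is in arc seconds per pixel."""
    return fwhm_pixels * pixel_scale

print(fwhm_arcsec(2.0, 2.0))  # 4.0 arcsec at 2 arcsec/pixel: fine
print(fwhm_arcsec(2.0, 4.0))  # 8.0 arcsec at 4 arcsec/pixel: pretty bad
```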

If I had to guess I'd say that you are okay (looking at the maps, since they seem to show fairly good uniformity across the field), but to be certain you really need to consider the FWHM in arc seconds, not just pixels.

If you want to use the BatchPreprocessing script you need at least bias and dark frames. In fact, the main reason for the existence of the BatchPreprocessing script is to do image calibration, so unless you have bias, dark, and flat frames there isn't much point in using the script.

So, you'll have to do a manual debayer operation (BatchDeBayer script), followed by StarAlignment (under ImageRegistration), and then ImageIntegration. For a quick practice run with three files you can probably just use the defaults for each process and it should take only a few minutes to perform the entire set of operations.

My images haven't been graded on any scale other than an overall accept and reject. Further, on the lunar shots the seeing conditions vary so much over the full field of view that it would be VERY difficult to weight the quality of the entire frame by visual inspection alone. I'd pretty much have to divide the image into a large matrix, grade each area separately, and then sum the individual elements (maybe scoring a one for an accepted zone and a zero for a reject). The other issue is that when working with such large fields and at such high magnifications I've yet to find an arrangement that will produce a flat field (in resolution) over the entire APS-C frame. It wouldn't be that much of an issue to crop only to the "good" part of the field, but unfortunately when tracking the moon I've never gotten the image to remain that well centered over the four or five minutes of capture time that I typically use. Thus, each crop would have to be a little different if I wanted to maintain as much of the field as possible.
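That zone-by-zone grading idea could be sketched like this (hypothetical Python; the tiny sample "images", the tile size, the threshold, and the use of plain variance as a stand-in for a real sharpness metric are all assumptions):

```python
# Rough sketch of grading a frame by zones: divide the image into a grid of
# tiles, score each tile 1 if its local contrast (here, plain variance)
# exceeds a threshold, and sum the per-tile scores for the frame.

def tile_scores(image, tile, threshold):
    rows, cols = len(image), len(image[0])
    score = 0
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            vals = [image[r][c]
                    for r in range(r0, min(r0 + tile, rows))
                    for c in range(c0, min(c0 + tile, cols))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            score += 1 if var > threshold else 0
    return score

# Two made-up 4x4 "frames": high-contrast detail vs. a featureless patch.
sharp = [[0, 9, 0, 9], [9, 0, 9, 0], [0, 9, 0, 9], [9, 0, 9, 0]]
flat  = [[5, 5, 5, 5]] * 4
print(tile_scores(sharp, 2, 1.0), tile_scores(flat, 2, 1.0))  # 4 0
```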

I could give such a process a try, but I suspect that I could only do a few frames that way since it could probably take many hours to do that type of grading on the full data sets that I have used in the past.

If it takes me a week or two to find the time to do maybe five to ten total frames would that be good enough for your purposes?

Okay, thanks for the update.

I have lunar images and some of Jupiter and even some stellar/DSOs that I've graded by "hand." The reason I have these is that I sometimes do high-resolution work a little differently than most. I capture using an APS-C camera in STILL mode (not video) and take anywhere from 48 to 64 individual, sixteen-megapixel stills and then visually inspect those to find the best given the seeing and other factors. I generally concentrate only on a relatively small area of interest (craterlet or rille on the moon, cloud detail in Jupiter, double star, etc.) and typically reject anywhere from one third to one half of the candidates (depending upon the seeing, which generally must be pretty good or why else bother?). On the lunar images, if it is obvious that only a small portion of the image is sharp then I will reject the sub even if it looks good in the primary area of interest.

After that process I run the images through either Registax or AutoStakkert!2 and then sharpen in either Registax (wavelets) or PixInsight (deconvolution).

Honestly, however, for planetary images video is best, because you obviously need hundreds if not thousands of "good" frames to get the maximum detail possible, and I would never be able to grade that many images by simple visual inspection (other than a quick, gross inspection to reject the very worst images). Thus, when capturing with my dedicated planetary camera (video) I almost always allow Registax or AutoStakkert! to do the automatic selection.

However, for lunar images I think the situation can be a little different, since I can cover much larger fields with an APS-C camera than I can with the typical one or two megapixel planetary camera. That's really the major reason why I've continued with this technique (i.e. stills rather than video): I like the wide field of coverage I can get with the APS-C sensor when I image the moon (rather than having to do a multi-frame montage to cover that same area with a small-sensor camera).

I'd be willing to provide you with some sets of images on the moon. However, unless your tool can easily handle four megapixel or larger images then my workflow probably wouldn't benefit much from what I can currently do with either Registax or AutoStakkert!. The problem with those tools is that they can't handle large images which to some degree has limited what I can do with my APS-C sized frames.

Let me know if I can help in any way, as I'd absolutely love to have a tool that could handle the grading and integration of really large (greater than 4 megapixel) lunar images.

Here are some links to two lunar shots that I've done using stills from my APS-C camera (single framing, not a montage, these are large images so make certain you either zoom or look at the full-size version, not just the default view in Flickr):


Is there anything new on this tool?

If you are still looking for sample images I can provide some for several different targets.

Am I correct in assuming that the only way you can currently use the Bayer Drizzle is with the BatchPreprocessing script? That is, there is no tool specifically to produce a set of files that have been reconstructed into RGB images using the Bayer Drizzle technique.

Announcements / Re: PixInsight Ripley Released
« on: 2014 June 24 15:49:58 »
Thanks to everyone on the PixInsight Development Team for your continued support of Mac OS X 10.6/10.7!
