Show Posts


Topics - james7

General / Batch RGB Combine?
« on: 2017 May 13 02:13:23 »
Is there any way to do a batch RGB combine within PixInsight without writing a custom script? Basically, what I need is the inverse of the BatchChannelExtraction but with the ability to specify input lists for each of the monochrome red, green, and blue files. The output would simply pair each of the channels to produce a set of RGB files.

Here is an example:

input file lists:
red_1, red_2, red_3, red_4...red_n
green_1, green_2, green_3, green_4...green_n
blue_1, blue_2, blue_3, blue_4...blue_n

output file list:
rgb_1, rgb_2, rgb_3, rgb_4...rgb_n

The file naming isn't an issue, since I have tools for batch renaming. I also don't need any out-of-order pairing; the files can simply be processed in sequential order.
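Outside of PixInsight, the sequential pairing step itself is trivial to script; here is a minimal Python sketch of just that pairing logic (the file names are invented, and the actual channel combination would still need an image tool):

```python
def pair_channels(reds, greens, blues):
    """Pair same-index red/green/blue files after sorting each list."""
    if not (len(reds) == len(greens) == len(blues)):
        raise ValueError("channel lists must be the same length")
    # Note: plain lexicographic sorting puts red_10 before red_2;
    # zero-padded names (red_01) avoid that.
    return list(zip(sorted(reds), sorted(greens), sorted(blues)))

pairs = pair_channels(
    ["red_2.fit", "red_1.fit"],
    ["green_1.fit", "green_2.fit"],
    ["blue_1.fit", "blue_2.fit"],
)
for r, g, b in pairs:
    print(r, g, b)  # each line is one RGB combine job
```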

I have far too many files to do this manually, and it doesn't seem possible (given my limited knowledge) to do this with either PixelMath or a file container.

If there is no relatively easy way to do this in PixInsight, can anyone recommend another tool? I've already looked at PIPP and it seems to have the same limitation: you can specify a form of "batch" channel extraction, but there is no way to reverse that operation.

General / Star Alignment Summary?
« on: 2014 March 19 22:09:58 »
Is there any tool or script in PixInsight that will perform a star alignment on a stack of subframes and output a summary of the registration offsets? It would be great to plot the +/- offsets for all of the subframes with a statistical summary. In fact, it would be nice just to get a comma-separated list of the values that could then be processed in a spreadsheet or some other tool. Having such a tool in PixInsight would make it possible to analyze your guiding or even measure things like periodic error. The data is already being written to the console; there just needs to be a way to log the results in a convenient format so they can be processed.
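As a stopgap, once the offsets are extracted from the console text, summarizing them is easy to script; a minimal Python sketch (the offset values below are invented, and parsing the console output itself is left out):

```python
import csv
import statistics

# Per-frame (dx, dy) registration offsets in pixels -- made-up values
# standing in for numbers copied from the StarAlignment console output.
offsets = {
    "sub1.fit": (0.12, -0.34),
    "sub2.fit": (0.45, 0.10),
    "sub3.fit": (-0.08, 0.22),
}

# Write a CSV that a spreadsheet can ingest.
with open("offsets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "dx", "dy"])
    for name, (dx, dy) in offsets.items():
        writer.writerow([name, dx, dy])

# Simple statistical summary of each axis.
dxs = [d[0] for d in offsets.values()]
dys = [d[1] for d in offsets.values()]
print(f"dx mean {statistics.mean(dxs):+.3f}  stdev {statistics.stdev(dxs):.3f}")
print(f"dy mean {statistics.mean(dys):+.3f}  stdev {statistics.stdev(dys):.3f}")
```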

General / FWHM and Eccentricity, What Are Typical Values?
« on: 2014 February 26 18:08:19 »
I've been using the SubframeSelector and FWHMEccentricity scripts to sort my images and I'm wondering what other users are seeing when they run these scripts. It's all very well to select frames based upon the capabilities of your own equipment and technique, but I'm curious about what values other users are getting. To make any comparison at all we'll need to know what your capture scale is (arc seconds per pixel), whether you are using a monochrome or full-color sensor (assumed Bayer pattern), and the aperture, field coverage (focal length and sensor size), and type of your telescope. Reports should also include the stage in image development where the images were measured. Typically, that might be immediately after calibration and debayering (if required), but before stretching or any other digital development.
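For anyone reporting, the capture scale follows directly from pixel size and focal length; a quick Python sketch (checked against the 4.8 micron, 432mm numbers from my own example below):

```python
# Image scale in arc seconds per pixel:
#   206.265 * pixel size (microns) / focal length (mm)
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# 4.8 micron pixels behind 432 mm (a 72 mm f/6 scope).
print(f"{image_scale(4.8, 432):.2f} arcsec/px")  # -> 2.29
```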

My greatest concern (or interest) is eccentricity, as I very seldom see results better than the 0.42 limit that is supposed to indicate round, undistorted stars. My FWHM values seem reasonable, but my typical median eccentricity values are always above 0.45 (values in the 0.5 to 0.6 range being quite common). When viewed at 1:1 scale, the center-field star shapes run from slightly elliptical to what I'd call very close to perfectly round. The odd thing is that even on very short exposures (say, one second) my center-field eccentricity values still don't get down to that 0.42 limit. I recognize that some of the poorer median values may be caused by field curvature and edge aberrations, but I struggle to get values below 0.5 even in the center of my field (on multiple telescopes, not just one sample).
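To put these numbers in perspective, an eccentricity e corresponds to a minor/major axis ratio of b/a = sqrt(1 - e^2), so even the values I'm seeing imply only mildly flattened stars; a quick sketch:

```python
import math

# Minor/major axis ratio implied by a given eccentricity:
#   b/a = sqrt(1 - e^2)
def axis_ratio(e):
    return math.sqrt(1.0 - e * e)

for e in (0.42, 0.50, 0.60):
    print(f"e = {e:.2f}  ->  b/a = {axis_ratio(e):.3f}")
```

At e = 0.42 the axis ratio is about 0.91, and even at e = 0.60 it is 0.80, which is consistent with stars that still look nearly round at 1:1.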

As an example, here is the summary from my Astro-Tech AT72ED (a non-flat-field, 72mm aperture, f/6 doublet ED telescope mated to an APS-C Sony NEX-5N camera). The capture scale is 2.3 arc seconds per pixel with a 4.8 micron pixel size, and the diagonal field of coverage was 3.7 degrees. This data shows a lot of field curvature (as would be expected), but even in the center of the field the eccentricity is between 0.55 and 0.6. Over this same center the FWHM ranges from 4 to 6 arc seconds. Here is the data from the full frame (includes the "bad" edges):

FWHM 4.76 arc seconds
Eccentricity 0.725
StarSupport 4168
FWHMMeanDev 1.54
EccentricityMeanDev 0.121

The sample was taken from a stack of ten images exposed for 10 seconds each. There was no image calibration; the ten images were debayered (VNG), registered, and integrated with no normalization or pixel rejection. The measurements were then taken on the linear image using the SubframeSelector script (with only one image, the stacked result, as input).

Is anyone willing to put forth their own measurements or add some comments on their own use of these scripts?

General / Integrating LARGE Numbers of Files
« on: 2013 December 08 14:18:12 »
I'm using a DSLR (actually a mirrorless, APS-C camera from the Sony NEX series), and because of the light pollution I typically have to deal with, I very often end up with hundreds of short exposures that need to be stacked into my final result. My problem is that I can't stack more than about 240 images before the PixInsight integration process ends with an error apparently related to file I/O (though I'm thinking it may also have to do with RAM usage). I'm running under Mac OS X and have tried to increase the limit on open files; while that initially seemed to help, I'm now gathering data sets that seem to exceed what I can get PixInsight to process (specifically, during integration). I'm going to continue to look into this problem, but for the time being I'm wondering if there might be a recommended method to process these files in smaller groups and then combine those groups into a final result.

What I'm asking is: if I start with 500 calibrated and debayered images, can I integrate these in smaller groups (say, 100 images each) and then combine those into a final result (by taking those intermediate masters and integrating them back into a single, final master)? It's easy enough to just re-integrate the groups, but I'm wondering if that comes anywhere near to producing the same results as integrating ALL of the images at once. I'm thinking that the pixel rejection that occurs during the integration may be the key (or the problem).

In any case, can anyone recommend an integration method (via groups, as I explained above) that would work best, or come near -- theoretically or mathematically -- to doing the integration in one big set? Should I change the integration method or pixel rejection type when combining these groups? Would it be better (or make any difference) if I used many small groups rather than a few large groups (i.e. ten groups of 50 versus two groups of 250 -- both producing a final master based upon the initial 500 subframes)?
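For what it's worth, a plain average with no pixel rejection and equal-sized groups is mathematically equivalent to one big integration: the mean of the group means equals the overall mean. It's the rejection step, computed against each small group's statistics, that breaks the equivalence. A toy Python sketch with made-up pixel values:

```python
import random
import statistics

# One pixel across 500 made-up subframes; the identity holds per pixel.
random.seed(1)
pixels = [random.gauss(100.0, 5.0) for _ in range(500)]

# Average in equal groups of 100, then average the group results.
group_size = 100
group_means = [statistics.mean(pixels[i:i + group_size])
               for i in range(0, len(pixels), group_size)]

direct = statistics.mean(pixels)        # integrate everything at once
grouped = statistics.mean(group_means)  # integrate group masters

# Identical (up to floating-point rounding) because the groups are
# equal-sized; unequal groups would need weighting by frame count.
print(abs(direct - grouped) < 1e-6)
```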

I've already done some work in combining images using groups of masters, but I'm concerned that the results could be far from optimum and not nearly the equivalent of integrating all of the images at once.

General / ImageIntegration Data Cache Corrupted (Mac OS X)
« on: 2013 November 13 16:14:08 »
I think the data cache for ImageIntegration is corrupted on my system, and I seem to be having problems running integrations because of it. I also can't access the ImageIntegration preferences to try to clear the cache, since PI just reports an error or hangs if I try to bring up the preferences for ImageIntegration. Is there any way to manually delete this cache? Is it a separate file, and where is it located under Mac OS X?

I'm having this problem on BOTH installations of PixInsight (v1.7 and v1.8). Under v1.7 it reports a cache corruption error, but under v1.8 it just hangs when I try to view the preferences or attempt an integration.

I've been working on a few new sessions that I captured over the last week or two, and I've noticed that the integration of my Bias and Dark Fields runs VERY slowly during the calibration phase of my batch processing. This seems to happen once I have more than a few Bias/DFs (more than four?). The process starts at normal speed, but once the integration reaches about seven percent completion on the master Bias/DFs it slows dramatically. In fact, the jobs never complete, because from that point on the processing is so slow that it might take an hour (or more) for even an additional one percent to get done.

This happens under both PixInsight v1.7 and v1.8 (Ripley), and I've tried running on both Mac OS X 10.6.8 and 10.7.5 (with separate installations of PixInsight on each OS). Interestingly, this only happens on the Bias and DFs, and I think it might have something to do with low signal levels OR the standard integration settings for Bias/DFs. My light fields integrate just fine (but without calibration), and I've been able to duplicate this problem even when not using the batch processing script (i.e. it also happens when using the standard ImageIntegration process, as long as you select the typical/recommended normalization and weight settings for Bias/DFs).

I've used Mac OS X's Activity Monitor to sample PixInsight during this slowdown, and it appears that PixInsight is in some kind of deadlock condition where it is just waiting on a series of semaphores. It's probably a "race" condition, as some work does get done, but VERY slowly. The OS has plenty of free memory, and PixInsight is spinning at 100% on one core of my four-core Mac Pro system. This slowdown can happen with as few as sixteen files, and my system has 10GB DRAM with PixInsight being the only active user process.

The "kicker" is that this seems to only happen with Nikon NEF/RAW files, since I've been successfully using a Sony APS-C camera for many months now without similar issues. Also, these very same jobs/files run fine under MS Windows 7, so it seems to be a Mac OS X related problem.

As yet I haven't done enough testing to determine whether this issue is related to the integration settings for Bias/DFs OR the differences between the signal levels in Bias/DFs and LFs. It may be a combination of both, plus the Nikon NEF/RAW files themselves, since I'm pretty sure that my Sony RAW files still work (I hope, since I've been working to try to finish my latest captures, which were done exclusively with my Nikon DSLR).

Tonight I'm going to try a fairly long job with just my Sony RAW files (Bias/DF/LF) and if that works then it will point directly to some issue with the Nikon Bias and DFs. My Nikon is a D5100 and my Sony is an NEX-5N (they both have about the same sensor resolution, in fact I think they use the same sensor -- made by Sony).

If I can't resolve this issue within the next few days (by myself) then I may need to send you some copies of my Nikon Bias files so that you can try to duplicate this problem yourself. It doesn't appear to be a Ripley issue, since I went back to v1.7 and the same problem happens there.

The subject pretty much says it all. Under Mac OS X, the realtime preview for the ATrousWaveletTransform is VERY slow under Ripley RC7 (and earlier; this is not isolated to the RC7 release).

There appear to be two new behaviors with the preview process under Ripley when using the ATrousWaveletTransform. First, you see a dialog for several seconds that says a cache is being computed (this only happens on the first attempt at showing a preview). Then, after the cache dialog disappears, there is a very long delay (twenty seconds is not unusual) during which the progress indicator on the preview window spins, after which the preview finally appears. This latter delay happens each time you click the preview button in the ATrousWaveletTransform panel. It even happens when you've selected a small preview area within a larger image. Under PixInsight v1.7 I never had such long delays when using the preview function on the ATrousWaveletTransform. It almost seems as if the realtime preview is actually running the full ATrousWaveletTransform process (i.e. the preview isn't any faster than running the entire ATrousWaveletTransform process on the original image).

Mac OS X 10.6.8 (but also happens under Mac OS X 10.7)
Mac Pro (2 x 2.66GHz Xeon)

Ripley RC6 and RC7 (and possibly other Ripley releases)

Bug Reports / 1.8RC5 and Mac OS X 10.6.8 (Snow Leopard)?
« on: 2013 March 29 19:46:10 »
RC5 doesn't seem to run on Mac OS X 10.6.8 (crash on launch; first-generation Mac Pro with twin 2.66GHz Xeon processors and 10GB DRAM). Here is the relevant section from the Mac OS X crash report:

Dyld Error Message:
  Symbol not found: __dispatch_source_type_vm
  Referenced from: /Applications/
  Expected in: /usr/lib/libSystem.B.dylib
 in /Applications/

This looks like it may be the same problem that was documented in the RC4 release and here is what you announced with that candidate (on March 12):

"This version is not compatible with Mac OS X 10.6 (Snow Leopard). This is due to a problem with the WebKit library, which PI uses to implement its document browsers. We'll release a special version for OS X 10.6 as soon as possible."

Should I assume that this is still a problem with RC5?
