Show Posts

Topics - rgbtxus

I collected one set of Ha frames one night and another set of 15 min frames the next night, and I have flats for each night.  If I register and calibrate each set (registering them all against frame 1 of set 1) and then toss all the resulting frames into ImageIntegration, does it just figure out how to combine them appropriately?  Or do I do something like this: run ImageIntegration on set one, run LinearFit over each frame in set two using the integrated set-one frame as the reference, and then reintegrate all the frames (calibrated and registered from set one; calibrated, registered, and fitted from set two)?
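Not from the original post, but as an illustration of the LinearFit step described above: a minimal NumPy sketch that fits one frame against a reference as frame ≈ m·ref + c by least squares and rescales it onto the reference's scale. The function name and the simple per-pixel linear model are my assumptions for illustration, not PixInsight's actual implementation.

```python
import numpy as np

def linear_fit_normalize(frame, reference):
    """Fit frame ~ m*reference + c (least squares) and return the
    frame rescaled onto the reference's scale: (frame - c) / m."""
    m, c = np.polyfit(reference.ravel(), frame.ravel(), 1)  # slope, intercept
    return (frame - c) / m

# Toy example: a second-night frame with a different gain and offset.
rng = np.random.default_rng(0)
ref = rng.uniform(0.1, 0.9, size=(64, 64))
night2 = 1.8 * ref + 0.05            # same sky signal, different scale
fitted = linear_fit_normalize(night2, ref)
print(np.allclose(fitted, ref))      # True: the two scales now match
```

After a fit like this, frames from both nights sit on a common signal scale, so a plain average in ImageIntegration combines them on equal footing.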

This crash is repeatable on my machine with my data, with an intervening reboot.
After the reboot I tried integrating 4 frames, which worked fine.
By eye and memory, it initially died after reading about 70-80% of the files.
I then retried it using the last 200 files (which includes the file it was reading when it crashed), and that worked properly.
Could this be an out-of-memory or stack kind of issue?
Let me know if you need any more information.
I have attached a document with the OS X crash dump info for both crashes.

My parameters for ImageIntegration were as follows:

Combination: Average
Normalization: No normalization
Weights: Don't care (all weights = 1)
Scale estimator: Median absolute deviation from the median (MAD)
Only "integrate" checked
Pixel Rejection (1):
    Rejection algorithm: Winsorized sigma clipping
    Normalization: No normalization
    All boxes checked
Pixel Rejection (2):
    Sigma low: 4.0
    Sigma high: 3.0
    Range low: 0.0
    Range high: 0.98
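For illustration (not part of the original post): a simplified sketch of what Winsorized sigma clipping does to a single pixel stack with the sigma bounds listed above. Real implementations, including PixInsight's, iterate until the sigma estimate converges and apply a bias-correction factor to the winsorized sigma; this toy version just clamps outliers for a fixed number of passes before rejecting.

```python
import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iters=5):
    """Simplified Winsorized sigma clipping for one pixel stack.

    Extreme values are repeatedly clamped (winsorized) to the current
    clipping bounds rather than discarded, so the sigma estimate stays
    robust; samples outside the final bounds are then rejected.
    Hypothetical simplification, not PixInsight's exact algorithm."""
    v = np.asarray(stack, dtype=float)
    w = v.copy()
    for _ in range(iters):
        med = np.median(w)
        sigma = np.std(w)
        lo, hi = med - sigma_low * sigma, med + sigma_high * sigma
        w = np.clip(w, lo, hi)          # winsorize: clamp, don't discard
    return v[(v >= lo) & (v <= hi)]     # reject against the final bounds

# A hot-pixel outlier among otherwise consistent samples gets rejected,
# while the consistent samples all survive.
samples = [100, 101, 99, 100, 102, 98, 100, 5000]
print(winsorized_sigma_clip(samples))
```

The asymmetric bounds (sigma low 4.0, sigma high 3.0) mirror the settings above: cold outliers are tolerated more than hot ones.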

I think the title says it all.  There are various references in the forum to a presumably best way to create these frames, but all the links I've hit so far are broken.  I've been using Average combination, no normalization, and Winsorized sigma clipping with the default parameters, but I wonder if that is best and whether Median might be the better combining algorithm.  So I ran Median and Average over 100 bias frames from a cooled (-18C) QHY23, differenced the two results, and took a peek with the Statistics tool -- no significant difference, as you can see:
count (%)   100.00000
count (px)  8770732
mean        1.253
median      1.115
avgDev      0.879
MAD         0.844
minimum     0.000
maximum     8.230
So maybe the answer is "who cares."  Still, I'd be interested in knowing whether one is in fact better than the other and, if so, under what circumstances the difference will manifest itself in a better image.
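As an aside (not from the original post), the same median-vs-average comparison can be sketched on synthetic bias frames with NumPy. The frame size, offset, and read noise below are made-up stand-ins for the QHY23 data, chosen only to show the shape of the experiment.

```python
import numpy as np

# Synthetic stand-in for 100 bias frames: Gaussian read noise around a
# fixed offset, in ADU. (Offset and noise values are illustrative.)
rng = np.random.default_rng(1)
frames = rng.normal(loc=1000.0, scale=8.0, size=(100, 128, 128))

mean_stack = frames.mean(axis=0)
median_stack = np.median(frames, axis=0)

diff = mean_stack - median_stack
print(f"mean abs diff: {np.abs(diff).mean():.3f} ADU")
print(f"max  abs diff: {np.abs(diff).max():.3f} ADU")

# For Gaussian noise, the median of N samples has roughly sqrt(pi/2),
# about 1.25x, the standard error of the mean, so the average stack is
# slightly cleaner -- but both are tiny next to single-frame read noise.
print(f"mean-stack noise  : {mean_stack.std():.3f} ADU")
print(f"median-stack noise: {median_stack.std():.3f} ADU")
```

This is consistent with the statistics above: with purely Gaussian read noise, the two masters differ by a small fraction of the read noise, and the circumstance where Median pulls ahead is when frames contain non-Gaussian outliers (cosmic ray hits, glitches) and no pixel rejection is used.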

Hi, I just started a trial of PI a few days ago and seem to have run straight into a bug.  The following information should allow you to reproduce this bug.
Test system: MacBook Pro Retina, 2.6 GHz 4-core i7, 16 GB RAM.  I doubt this matters; the system was not under memory or CPU pressure.
To reproduce the bug (I have the steps close, but not exact; you may need to try them in different orders, but I have always been able to freeze PI within a minute or two using these steps):
1) Load 2 master dark images (I can send you mine if it should matter, but I doubt it, and I doubt the image content matters; I just happened to be using master darks from a QHY23).
2) Apply an STF to both.
3) Use PixelMath to create a new image that is one minus the other.
4) Apply an STF to the new image.
5) Use the Statistics tool with the unclipped option (this seems to be key) and click around on the images.
6) Rerun the PixelMath.
7) Freeze -- oops.
