Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - rgbtxus

1
Image Processing Challenges / Re: A7S banding/frame split help
« on: 2016 March 22 13:57:13 »
Great, glad to hear it.  Yes, there is no lossless compressed RAW, so you take a big transfer and storage hit choosing uncompressed over 11+7.  I have considered getting an a7s for AP (I currently use a Nikon D800 with the IR filter replaced, and a QHY23 for mono work), but frankly I could not get past living with 11+7 (which I could now avoid, at a transfer-time cost) and the fact that there is (I assume) no SDK, so none of the programs like SGP support it.  I can't imagine not using SGP or something similar.  Anyway, great to hear you found a solution.  I look forward to seeing how the a7s performs for AP.

2
Image Processing Challenges / Re: A7S banding/frame split help
« on: 2016 March 22 10:40:14 »
Sorry, I was just quickly passing by and have not thought this through, but you might want to read this: http://diglloyd.com/blog/2014/20140212_2-SonyA7-RawDigger-posterization.html  Sony has been using a very poor way of storing RAW data, one which has been shown to generate visible artifacts in normal photography.  I am not sure whether this contributes to your issue, but I thought I would point it out.  Sony has recently released firmware (not sure if it applies to the a7s) that lets you avoid this problem by saving uncompressed RAW files.  Why they did not offer lossless compression is a mystery.  As I said, this may not be relevant to your problem, but I thought I'd pass it along.
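For the curious, here is a toy model of why that scheme posterizes. This is my own simplification of the widely reported structure (per-block min/max plus 7-bit deltas), not Sony's documented format:

```python
import numpy as np

def sony_lossy_block(samples):
    # Rough model: a block keeps its min and max at full precision, but
    # everything in between is stored only as a 7-bit delta, so the
    # quantization step grows with in-block contrast.
    lo, hi = int(samples.min()), int(samples.max())
    shift = max((hi - lo).bit_length() - 7, 0)   # bits dropped per delta
    deltas = (samples - lo) >> shift             # quantized 7-bit offsets
    return lo + (deltas << shift)                # what the decoder sees

# A smooth high-contrast ramp comes back in visible steps:
block = np.arange(16, dtype=np.int64) * 136      # spans ~0..2040
print(sony_lossy_block(block) - block)           # nonzero = posterization
```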

3
So, I waited months before upgrading to Yosemite, assuming that I would avoid the early bugs -- guess not.  Somehow I forgot to test PI before switching.  The first thing I found is that dragging a file over the window does not work.  I gather the cause is understood, and while I did not see it stated, I presume some future version of PI will fix it.  The more disturbing issue is this "mouse disappears" issue.  I can contribute a bit of info that might be of help; I have only just played with PI a bit under Yosemite, so this is a preliminary report.  I am running 01.08.03.1123.  I installed the latest Java SDK (jdk-8u31-macosx-x64.dmg), as Yosemite seems to no longer ship Java and my bank needed it.

1) For some reason, invoking PI from the Spotlight search bar or double-clicking its icon in the Applications folder does not reliably start PI.  In fact, it seemed to start only every other time!  Or so I thought: it looks like if you try to restart it after its windows have all closed, but before the app has completely shut down, it does not restart.  This is probably not a real issue, and if a window stayed visible until the very end of shutdown the user would have a cue and not be surprised by the apparent failure to restart.  Since I always auto-hide my Dock, it took me a few tries to catch on.
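(If you want to verify the app has fully exited before relaunching, a generic check like this works; the process-name match is my guess, adjust as needed:)

```python
import subprocess

# pgrep returns 0 if any process command line matches "PixInsight".
still_running = subprocess.run(
    ["pgrep", "-f", "PixInsight"], capture_output=True
).returncode == 0
print("still shutting down" if still_running else "safe to relaunch")
```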

2) If I double-click on the background and then open a file from the File menu, the file does not appear on the workspace, but I can select it with the widget at the lower left of the bottom bar.  It then becomes visible, but its name tab has a blue background and the mouse cursor disappears over the image, making PI unusable for many operations on the image.  Cloning it generates another unusable "blue tab" image.  Probably this is just some mode I'm unaware of and have not yet figured out how to toggle, but once you trigger it you seem to be stuck: opening a file with the File > Open dialog box then gives you the same blue-labeled unusable image, and you are doomed to shut down PI.  Now, if you wait until PI has totally shut down, restart it, remember to NEVER drag an image over its window (this will be very hard to remember, because that is how I always use it), and NEVER double-click on the background to open a file, but always proceed from the File > Open dialog on the menu bar, then it looks like you are all right.  I am mystified as to why the latter two paths behave differently.

4
Thanks, that sounds perfect.  I probably would have just used a frame from the first night as the reference, so thanks for the save <G>

5
Sorry, not sure if it matters, but I just noticed I forgot to specify that night one's images were 10 min.  So I'm looking to combine some frames at 10 min with others at 15 min.

6
I will try to figure that out (newbie here), but on the surface (again, newbie) this seems like an incorrect approach to my problem.  I increased the exposure on the second night because it was clear I had plenty more headroom.  In fact, on night 2 the max value was 0.89 -- nothing was blown out.  The purpose of HDR, at a basic level, is to replace blown-out areas with non-blown-out data from shorter exposures.  That is not my use case at all.

7
I collected one set of Ha frames one night and another set of 15 min frames the next night.  I have flats for each night.  If I calibrate and register each set (registering them all against frame 1 of set 1) and then toss all the resulting frames into ImageIntegration, does it just figure out how to combine them appropriately?  Or do I do something like this: run ImageIntegration on set one, run LinearFit over each frame in set two using the integrated set-one frame as the reference, and then integrate all the frames together (calibrated & registered from set one; calibrated, registered, and fitted from set two)?
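To make sure I understand the LinearFit step, here is my toy sketch of the concept (my own illustration in Python, certainly not PI's actual implementation, which fits robustly with outlier rejection):

```python
import numpy as np

def linear_fit(target, reference):
    # Fit reference ≈ a*target + b by least squares, then rescale the
    # target frame onto the reference frame's flux scale, so frames with
    # different exposures integrate on a common footing.
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b
```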
Richard

8
Well, I tried bumping open files to 1024, then to your value of 4864.  That prevented the crash, but it errored out with "Unable to open FITS file ...." on the 228th file.  So I guess I'll have to look elsewhere.
The message was:
PCL FITS Format Support: Unable to open FITS file:
/Users/rgb/AP wip/M81 - 2015-01-17/bias/B_M81__0s_2015-01-18_04-25-31.fit
CFITSIO error message stack:
01 : failed to find or open the following file: (ffopen)
02 : /Users/rgb/AP wip/M81 - 2015-01-17/bias/B_M81__0s_2015-01-18_04-25-31.fit

Of course I checked: the file is there and can be opened.

Thanks for looking at this.  It looks like pfile may be dead on: OS X may have some other open-file limitation, so raising ulimit -n may not be helping.
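For reference, a quick way to see the limit a process actually runs under (generic Python, nothing PI-specific; on OS X the shell's ulimit -n and the kernel-wide kern.maxfiles cap can each differ from this):

```python
import resource

# The per-process soft/hard limits on open file descriptors, as seen
# from inside the process itself -- the numbers that actually matter.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process open-file limit:", soft, "(soft) /", hard, "(hard)")
```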

Richard

9
Thanks, I'll try that.

10
OK, but I'm surprised on two fronts: 1) I could swear I integrated 400 bias frames a few weeks ago, and 2) your answer seems to imply that PI keeps all the files open while processing them, which seems unlikely to me (but I'm just a newbie and fully expect the flaw is in my expectations).  I reran it on exactly 200 files (a number I picked out of a hat) and it worked.  Are you sure that is what is happening here?  And if so, could better error handling prevent PI from crashing?

Do you have a suggestion for how to combine the data piecewise?  I can think of things to do, but I'm likely to make some incorrect assumption.
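For example, the naive scheme I have in mind looks like this (my own sketch; it reproduces a plain average exactly, but per-chunk pixel rejection would not match a single rejection pass over all frames):

```python
import numpy as np

def combine_chunk_means(chunks):
    # chunks: list of (mean_image, frame_count) pairs, one per batch
    # integrated separately. Weighting each batch mean by its frame
    # count reproduces the grand average over all frames.
    total = sum(n for _, n in chunks)
    return sum(mean * (n / total) for mean, n in chunks)
```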

Thanks,
Richard

BTW: I do run Linux file servers in my house, so I guess I can always do the work there.  I believe I read that the image files are stored in a machine-independent (or at least byte-order-designated) form, so you can swap the files around between systems.

11
This crash is repeatable on my machine with my data with an intervening reboot.
After the reboot I tried integrating 4 frames which worked fine.
By eye and memory it died initially after reading about 70-80% of the files.
I then retried it using the last 200 files (including the file it was reading when it crashed)
That worked properly
Out of memory or stack kind of issue?
Let me know if you need any more information
I have attached a document with the OSX crash dump info for both crashes
Richard

My parameters for ImageIntegration were as follows:

Combination: Average
Normalization: No normalization
Weights: Don't care (all weights = 1)
Scale estimator: Median Absolute Deviation from the median (MAD)
Only integrate: checked

Pixel Rejection (1):
  Rejection algorithm: Winsorized Sigma Clipping
  Normalization: No normalization
  All boxes checked

Pixel Rejection (2):
  Sigma low: 4.0
  Sigma high: 3.0
  Range low: 0.0
  Range high: 0.98
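For reference, the gist of the Winsorized rejection selected above (a simplified Python sketch of the general technique, not PI's exact implementation, which also includes a variance-correction step):

```python
import numpy as np

def winsorized_sigma_clip(stack, s_low=4.0, s_high=3.0, iters=5):
    # stack: the values of one pixel across all frames.
    # Instead of discarding extremes while estimating the statistics,
    # winsorize them (clamp to the clipping bounds) so the sigma
    # estimate stays robust; then reject what still falls outside.
    x = stack.astype(float)
    for _ in range(iters):
        m, s = np.median(x), np.std(x)
        x = np.clip(x, m - s_low * s, m + s_high * s)
    m, s = np.median(x), np.std(x)
    keep = (stack >= m - s_low * s) & (stack <= m + s_high * s)
    return stack[keep]  # surviving samples, e.g. for averaging
```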

12
General / Re: Maximizing SSDs
« on: 2014 December 20 17:05:13 »
I cannot speak to PI, but I can give you a few pointers in general.  Since I don't know what OS you are running I can't really say anything specific, but all of what I am about to mention is trivial in Linux using built-in software services, and I'm pretty sure there are off-the-shelf boxes that can make these arrays for you with little fuss for connection to other types of machines.  On the Mac you can use Thunderbolt to achieve very high transfer rates in and out of your box, and USB3 follows not all that far behind.  I don't know anything about the Windows world, but I assume all of these things can be done there too -- I just do not know whether it's built into the OS, as it is in Linux, or whether you need to buy special software to create and manage these arrays.

To get maximum speed you stripe n SSDs of capacity c: each write/read is broken up into stripes spread across the n drives and written/read in parallel, so you get approximately n times the throughput of a single SSD and n*c the storage.  This is RAID 0.  The thing to be aware of is that any drive failure kills the whole array.  When you want high availability and high speed, the usual approach is a stripe of mirrors, RAID 1+0.  The minimal configuration uses four drives of the same size organized as two pairs: the pairs are mirrored (RAID 1) and the mirrors are striped (RAID 0).  Now any single drive can fail and the array continues unscathed and undegraded in speed.  So you get twice the speed and 2c of capacity, but at the cost of 4c of drive space.  There are numerous other RAID levels, but they trade speed for space and so are probably not what you are talking about.

My home server, which is optimized more for space and reliability than for speed, is built on ZFS running on Linux; ZFS differs from traditional RAID in subtle and important ways, but it can be thought about in RAID terms.  I run two stripes (like RAID 0) of six discs each in the ZFS analogue of RAID 6, so each of the two stripes can suffer a total failure of any two drives without losing any data.  In other words, I can lose any 2 of my 12 discs, and many different combinations of 3 or 4 discs, without data loss.  The cost for this is trading time for space: I have 12c of storage physically, but trade 4c of that away for peace of mind.  This configuration requires calculating two parity syndromes for each block in each stripe, and writing a block of user data touches all 12 disks.

BTW, I use bare Samsung EVO 1TB (not Pro) drives for all my temporary storage needs -- just pop one into a Thunderbolt or USB3 adapter and I get great performance out of my laptop.

If I understand correctly, there are two basic parameters distinguishing SSDs: what percentage of cells is set aside to replace cells as they wear out, and which technology is used -- SLC, MLC, or TLC, storing 1, 2, or 3 bits per cell respectively.  The top-end SSDs use SLC and set aside a high percentage of cells.  I believe the Samsung 840s, which I use, use the "worst" of the technologies from the perspective of endurance, TLC.  Common estimates put P/E cycles at 1K for TLC, 3K for MLC, and 100K for SLC.  The 840 Pros use MLC, so you do buy endurance with the Pros.  If you look at http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand you will see that even with a "crappy" 1TB TLC 840 non-Pro you should be able to write ~100GiB day in and day out for 23 years before exhausting its lifetime.  So I think I'm good with non-Pros.
In my case the frequency of writes is low, so there is no need to pay a premium and/or give up storage to use the Pros over non-Pros.  I will confess that I have been using non-Pro SSDs as caches in my ZFS configuration for years without issues -- but the load on my server is just me.  I'm sure there are workloads where it is crucial to have the SLC devices; it is not in the least clear to me that PI will put the kind of load on your SSDs to justify the extra cost.  But I bet the PI wizards can clue us in.  While PI sloshes a lot of data around, that is generally mediated by a lot of computation about what to store, and a lot of the time we spend scratching our heads trying to decide what function to apply next <G>  To give you a sense of SSD lifetime with one real-world example, I just looked at one of the two 120GB SSDs (mirrored, RAID 1) that hold the swap, root, and user partitions and the ZFS cache for my main file server.  This SSD has 14198 hours = 1.62 years of power-on time; it has sunk 21TiB of data and sourced 13TiB (36GiB/day).  It has had one reallocation event and estimates that it has 99% of its life left.  So for my workloads cheap SSDs seem fine.  Now, this SSD is MLC; if I had a 128GiB TLC installed, a back-of-the-envelope calculation would put its life at about 9 years (see the sketch below).
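The arithmetic behind that estimate, for anyone who wants to plug in their own numbers (using my assumed ~1K P/E cycles for TLC):

```python
# Idealized SSD endurance: assumes perfect wear leveling and ignores
# write amplification, so real-world life will be somewhat shorter.
def years_of_life(capacity_gib, pe_cycles, gib_per_day):
    return capacity_gib * pe_cycles / gib_per_day / 365.0

print(years_of_life(1000, 1000, 100))  # ~27: 1TB TLC at 100GiB/day
print(years_of_life(128, 1000, 36))    # ~9.7: the 128GiB case above
```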

13
General / Re: Problem with "Check for Updates"
« on: 2014 December 18 18:11:04 »
Happy to help.  I'm an AP newbie so can't help there, but I'm an old hand at computers ;-)

14
Thanks Mike,
Hadn't thought to look under Scripts for noise analysis (still feeling my way around PI).  As you suggested, running NoiseEvaluation on the mean and median versions of the integrated bias showed the mean version to have σk = 3.066e-05 vs. σk = 3.807e-05 for the median version.  Although I confess my brain just started to hurt when I wondered whether lower noise in the image representing the noise floor is actually better or worse <G>
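(For anyone comparing stacks without the script handy, here is a rough stand-in for a quick noise estimate -- my own approximation, not the multiscale MRS estimator NoiseEvaluation uses:)

```python
import numpy as np

def quick_noise_sigma(img):
    # Estimate Gaussian noise from horizontal first differences:
    # MAD/0.6745 robustly estimates the sigma of the differences, and
    # dividing by sqrt(2) converts that to per-pixel sigma (each
    # difference combines the noise of two pixels).
    d = np.diff(img, axis=1).ravel()
    return float(np.median(np.abs(d - np.median(d))) / 0.6745 / np.sqrt(2))
```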
Thanks again
Richard

15
General / Re: Problem with "Check for Updates"
« on: 2014 December 18 13:53:51 »
Any chance you have a firewall blocking the access?  You might try toggling your firewall off for an instant, hitting Check for Updates, then re-engaging the firewall.  You don't want your pants down for too long <G>, but this will at least rule out one possible cause of PI not being able to reach the update site.
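A quick way to rule out basic reachability first (hostname and port here are my guesses; use whatever host the updates dialog actually reports):

```python
import socket

# Try to open a plain TCP connection to the suspected update host.
try:
    socket.create_connection(("pixinsight.com", 80), timeout=5).close()
    print("host reachable -- firewall probably not the culprit")
except OSError as err:
    print("blocked or unreachable:", err)
```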
