Hi Kevin,
Well, I believe (but would be happy to stand corrected) that your INTEGER-based FITS-format image, if it is a Bias image, may well be quite close to the truth.
The header reports that the camera was a 16-bit source, and that the data in the frame had a maximum value of 411 ADU and a minimum value of 159 ADU (giving an estimated mean value of around 285 ADU, assuming an even and balanced distribution - which is probably not the case).
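Just to show what I mean, here's a rough Python/numpy sketch of pulling those statistics from the pixel data. The array here is a fake stand-in for your frame (in real life you'd read the data with something like astropy.io.fits, and "bias.fits" would be your own file):

```python
import numpy as np

# Stand-in for the pixel array you'd normally get from a FITS reader,
# e.g.  data = astropy.io.fits.open("bias.fits")[0].data
# ("bias.fits" is a placeholder name).  Here we fake a bias frame
# spanning the values Kevin's header reported: 159 to 411 ADU.
rng = np.random.default_rng(0)
data = rng.integers(159, 412, size=(100, 100), dtype=np.int16)
data.flat[0], data.flat[1] = 159, 411   # pin the reported extremes

lo, hi = int(data.min()), int(data.max())
midpoint = (lo + hi) / 2
print(lo, hi, midpoint)   # 159 411 285.0
```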
Now, you can assume that, for all intents and purposes here, a Bias frame really represents the 'Black Point' for your data. In other words, you can't really get a 'darker' or 'blacker' image. What you now need is a super-saturated image - one where you really can't get it much 'brighter' or 'whiter' - and this would let you get the 'White Point' for your camera. (** see later)
Given that you already have a Float value for your Black Point (the lower of the two values in the Float32 image), you just need to get the White Point in the same way as you did for the Int16 image.
Remember, in both cases the full range is 65536 separate values - from 00000 to 65535 for your Int16 data, and some as-yet-undefined range of the same 65536 'buckets' for your Float32 data. And, therein lies the problem.
You could now estimate the 'size of the buckets' for the Float32 data, but (as far as I can see) this would be a 'guess-timate' at best, and totally erroneous at worst. For example, let us say that you assume (we are scientists, we shouldn't be making 'assumptions') that both Int16 and Float32 share the same ZERO point - there is absolutely no reason why they would have to, and no evidence whatsoever that they do.
This would suggest that the bucket-size for the Int16 data is 1 ADU - which is nearly always the case for Int data. However, approaching the Float32 data the same way gives us ~1.692 Float32 units per Int16 bucket (269 / 159, the ratio of the two Black Point values). This would mean that you should get a value of ~110,875.4 for your Float32 White Point (65536 x 269 / 159).
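That arithmetic, written out as a sketch (and resting on the same dubious shared-zero-point assumption - the 159 and 269 are the two bias minima from above):

```python
# 'Bucket size' estimate, assuming (dubiously!) that the Int16 and
# Float32 data share the same zero point.
int16_black = 159.0      # minimum ADU count in the Int16 bias frame
float32_black = 269.0    # minimum value in the Float32 bias frame

scale = float32_black / int16_black   # ~1.692 Float32 units per ADU
white_point = 65536 * scale           # ~110875.4 estimated White Point
print(round(scale, 3), round(white_point, 1))   # 1.692 110875.4
```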
However - another 'assumption' has been made (did you spot it?), and that is that the saturated White Point (the highest ADU count that a camera can give - the full-well ADU) is, in fact, going to be 65535 for an Int16 camera. Unfortunately, it isn't! It might, for example, be 45678 ADU (picking a totally random number) - the ADU count that you would always get once you shine a bright enough light source at the imager for long enough. So, you could get a reading of 45678 ADU, but you would never be able to get up to 65535 ADU.
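This is why the super-saturated frame matters: the real White Point is whatever the over-exposed frame tops out at, not the format's theoretical maximum. A toy illustration (the 45678 here is just my made-up full-well number from above, and the 'frame' is synthetic):

```python
import numpy as np

FULL_WELL = 45678   # hypothetical clip level - my made-up number

# Fake a grossly over-exposed frame: the 'true' signal would reach
# well past 65535, but the camera clips everything at full-well.
rng = np.random.default_rng(1)
true_signal = rng.integers(60000, 70000, size=(100, 100))
frame = np.minimum(true_signal, FULL_WELL)

print(int(frame.max()))   # 45678 - you never see 65535
```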
Does any of this make sense? Have I made any glaring omissions or mistakes? I hope it's helping someone - when I had to get my head round the weird method used by Meade, I was 'on my own' (save for the Mike Weasner Mighty ETX site and help from others setting off down the path of astroimaging) - so I feel your pain!!!