Hi Juan,
Thanks for coming back so quickly - I can't run the test just now, but will look at the results this evening.
However, looking at your three-stage suggestion:-
1.) Using the 'manual edit' of the INI file has worked fine - all of my FITS files are handled in the [0, 65535] range, and ALL of them (including BiasOffsets, etc.) are correctly rescaled to the [0, 1] range - without the 'median' of the dataset being shifted (as was previously the case). So - no problems here.
2.) I am not clear in my mind how this process is going to work. I will already have a file on HDD that has the data stored in 32-bit Float. The Min Value will always be 0.0 and the Max Value will always be 65535.0 (obviously, not every image will necessarily attain either of these two 'limits', but these are the 16-bit 'limits' of the CCD digitisation process, with the data then 'translated' into a 32-bit Float range, even though this is '100% wasteful' in terms of disk space).
If PI is then going to 'translate' the 32-bit Float [0.0, 65535.0] range into the 16-bit Unsigned Integer [0, 65535] range, then I assume that PI must 'open' each image under the rescale rule implemented in step (1) above, and then make the conversion before saving the image in the new, integer, format.
Does this mean that PI takes the (internally converted) [0.0, 1.0] range and 'multiplies' it by '65535' to create the Integer [0, 65535] range before saving it?
Does this also mean that, when PI re-reads the 16-bit Unsigned Integer [0, 65535] format, it does NOT rescale the data, it just applies a simple translation to the internal (working) [0.0, 1.0] range - so ImageIntegration doesn't 'shift the median' (which is the 'effect' that I am unable to overcome at present)?
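If I have understood correctly, the round trip would look something like the sketch below. This is only my own guess at the arithmetic PI would perform - the function names are mine, not PI's actual API:

```python
import numpy as np

def float_to_uint16(data, src_max=65535.0):
    """Rescale 32-bit Float [0, src_max] data to the internal [0, 1]
    range (the step (1) rescale rule), then multiply back out to
    16-bit Unsigned Integer [0, 65535] for saving."""
    internal = data / src_max
    return np.round(internal * 65535.0).astype(np.uint16)

def uint16_to_internal(data):
    """Re-reading the uint16 file: a simple, fixed translation to
    [0, 1] - no per-image black/white rescale involved."""
    return data.astype(np.float32) / 65535.0

# Values stored as 32-bit Float on the [0, 65535] scale...
pixels = np.array([0.0, 32767.0, 65535.0], dtype=np.float32)
stored = float_to_uint16(pixels)        # saved as [0, 32767, 65535]
recovered = uint16_to_internal(stored)  # read back as [0.0, ~0.5, 1.0]
```

The key point, if this is right, is that both directions use the FIXED factor 65535, so the data's median is never shifted.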
===========
If the process you describe does solve the problem, then I think that we need to consider which CCDs are affected, and which capture software is involved.
As I have said before - there is NOTHING WRONG with the way that Meade's Envisage software actually stores its FITS data. Yes, I agree that the 32-bit Float format is perhaps NOT the most efficient method of storing [0, 65535] 16-bit Unsigned Integer data - but at least Envisage can actually 'get that process right' (the programmers have NEVER addressed the 'fatal' bug associated with the "FITSINT" storage format that they offer - if you use THAT format, all bets are OFF, because it simply does NOT work).
What I just do not understand is why, when PI sees 'my' 32-bit Float file, with data having been stored 'from' a 16-bit unsigned integer 'source', it just doesn't accept the data for what it is. OK, yes, I have to tell PI that the data is 'from' a [0.0, 65535.0] range, and that it should use THAT range to rescale the results to a [0.0, 1.0] range. If that was all that happened, then I would be happy.
And, as I have said, that is certainly what happens if I use PI to 'open' any of my images with these requests in place.
However, if I use ImageIntegration on data (that presumably gets translated to the [0.0, 1.0] range), why is there then this EXTRA step, where the incoming data ALSO gets "ReScaled"?
And, by "ReScaled" I mean that the image Black and White points are examined, such that the 'new' [0.0, 1.0] range actually becomes the image [Black, White] range.
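In other words, I believe the operation is something like the following (my own sketch of what the "ReScale" appears to do; the function name is mine):

```python
import numpy as np

def black_white_rescale(image):
    """What I understand by 'ReScale': the image's own minimum
    (Black point) and maximum (White point) are stretched to fill
    the full [0.0, 1.0] range."""
    black = image.min()
    white = image.max()
    return (image - black) / (white - black)

# An image whose data occupies only [0.2, 0.6] internally...
img = np.array([0.2, 0.4, 0.6], dtype=np.float32)
out = black_white_rescale(img)   # stretched to [0.0, 0.5, 1.0]
```

Note that, unlike the fixed divide-by-65535 translation, this stretch depends on each image's OWN statistics - which is exactly why the median shifts.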
I accept that this is actually a WORTHWHILE step to implement - PROVIDING THE DATA HAS ALREADY BEEN CALIBRATED.
But, you CANNOT apply this [Black, White] rescale step to any 'calibration' data, or even to un-calibrated Lights. Not in my mind anyway - it means that all pending 'data subtraction' steps (Darks from Lights, etc.) will have a meaningless result.
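A toy example of why this is so (assuming the per-image min/max stretch described above):

```python
import numpy as np

def black_white_rescale(image):
    """Per-image stretch of [min, max] onto [0, 1] - my assumed
    model of the 'ReScale' step."""
    return (image - image.min()) / (image.max() - image.min())

# An un-calibrated Light and its matching Dark, on the SAME ADU scale:
light = np.array([100.0, 500.0, 900.0])
dark  = np.array([ 50.0,  60.0,  70.0])

# Correct calibration: subtract while both frames share a common scale.
good = light - dark   # -> [50., 440., 830.]

# Rescaling each frame to its OWN [Black, White] first destroys the
# common scale - both frames become [0.0, 0.5, 1.0], so the
# subtraction yields nothing but zeros:
bad = black_white_rescale(light) - black_white_rescale(dark)
```

Once each frame has been stretched independently, the ADU relationship between Light and Dark is gone - which is why the rescale only makes sense AFTER calibration.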
And, I therefore do not see why this problem is associated ONLY with 'my' (DSI) data - surely this is fundamental to other CCD data as well?
My basic question still stands - I absolutely KNOW that the [Black, White] "ReScale" function call MUST be implemented 'somewhere' in the ImageIntegration process - because I see the result in the output image. So, the question is "Where (or When) does it take place?"
And, why will it therefore NOT take place if I have pre-converted my source data set from 32-bit Float to 16-bit Unsigned Integer, as you suggest?
Curiouser and curiouser...
Cheers,