Dear members,
any idea how the real bit depth of a subframe could be calculated, e.g. using PixelMath or a self-written script?
Specifically, I want to determine the exact number of distinct values present in a mono or RGB subframe (per channel), because
1) I strongly believe it is not always necessary to save an image in 32-bit floating point, and
2) we are having a nice discussion in our local astro community about gain settings.
For example, a 16-bit image can store 65536 distinct values, but perhaps only 30587 of them actually occur in the subframe.
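Outside of PixInsight, counting the distinct values per channel is straightforward, e.g. in Python with NumPy. A minimal sketch (the function name and the sample data are my own illustration, not an existing tool): count the unique values and take the base-2 logarithm to get the number of bits actually needed.

```python
import numpy as np

def effective_bit_depth(channel):
    """Return the number of distinct pixel values in one channel
    and the number of bits needed to encode that many levels."""
    n_unique = np.unique(channel).size          # distinct values actually used
    bits = int(np.ceil(np.log2(n_unique)))      # bits required for n_unique levels
    return n_unique, bits

# hypothetical mono channel: 16-bit container, but only 3 distinct values
channel = np.array([[0, 10], [10, 65535]], dtype=np.uint16)
n, bits = effective_bit_depth(channel)
print(n, bits)  # 3 distinct values, which fit in 2 bits
```

For an RGB frame stored as a (height, width, 3) array, the same function can be applied to each `frame[:, :, c]` slice separately.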
In the HDRComposition process, for example, an initial quantization is reported at the console; what is the calculation behind it?
Thanks in advance and best regards,
Alex