Author Topic: Internal processing bit depth  (Read 4244 times)

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Internal processing bit depth
« on: 2011 February 20 17:25:34 »
If I open a 16-bit integer image in PixInsight, do the processes (say Histogram, ACDNR, etc.) operate at that bit depth internally for the life of the image, unless it is resampled to something else?
Same question applies to other bit depths.

I do understand there are a few processes that include options for internal processing, such as PixelMath's "Use 64-bit working images"... Those, when explicitly set to operate at a different bit depth, don't apply to my question.

Thanks!

« Last Edit: 2011 February 20 19:30:10 by RBA »

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: Internal processing bit depth
« Reply #1 on: 2011 February 20 19:20:55 »
It depends on the process. Many of them do some calculations with float or double variables and then go back to the image's bit depth to update the pixel values. Others just use a template and work directly at the image's current bit depth.
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: Internal processing bit depth
« Reply #2 on: 2011 February 20 20:11:22 »
Got it, thanks. That's what I thought would be the case...

The reason I asked is that I've been playing with some stuff... Duplicating the processing on an image at both float32 and int16, and despite some temporary histogram combing and whatnot on the 16-bit image, in the end the resulting 8-bit JPEGs were almost identical, impossible to tell apart by eye.

Makes sense?





Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Internal processing bit depth
« Reply #3 on: 2011 February 21 01:56:42 »
Internally, most processes work with real or complex-valued pixels in either 32-bit or 64-bit floating point format, irrespective of the target image's data format. For 32-bit integer images, all internal calculations are always carried out in 64-bit floating point. The goal is to minimize roundoff and truncation errors, even at the cost of degrading performance. There are exceptions such as LUT-based transformations (histogram and curves for example) working on 8-bit and 16-bit integer images, and temporary working images used for structure or edge detection.
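
As a rough illustration of why intermediate results are kept in floating point (a hypothetical numpy sketch, not PCL code and not what any particular PixInsight process does): apply the same sequence of mild stretches twice, once keeping intermediate results in 64-bit floating point and once requantizing to a 16-bit integer grid after every step.

Code:
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)                    # hypothetical linear pixel values in [0, 1]

def q16(a):                                  # truncate back to a 16-bit integer grid
    return np.round(a * 65535) / 65535

# Five mild midtones stretches applied in sequence
float_result = x.copy()
int16_result = q16(x)
for _ in range(5):
    float_result = float_result ** 0.8       # intermediate results kept in float64
    int16_result = q16(int16_result ** 0.8)  # intermediate results requantized to 16 bits

reference = x ** (0.8 ** 5)                  # the exact end-to-end transformation
print("float64 pipeline, max error:", np.abs(float_result - reference).max())  # tiny
print("16-bit  pipeline, max error:", np.abs(int16_result - reference).max())  # much larger,
# dominated by truncation of the faintest values at each requantization step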

Quote
Duplicating the processing on an image at both float32 and int16, and despite some temporary histogram combing and whatnot on the 16-bit image, in the end the resulting 8-bit JPEGs were almost identical, impossible to tell apart by eye.

Makes sense?

Absolutely. This happens because 16 bits per sample are just sufficient for the particular image data that you've used and the procedures you've applied. By 'image data' I refer to the integrated (aka stacked) linear image(s), not to the individual frames. The whole data set that you calibrate and integrate is what determines an initial range of numerical values. For this reason it is very important to obtain the results of the calibration and integration processes in 32-bit floating point format.

Accuracy is critical during the linear phase. This is because when the image is linear, most of the information contents are supported by a narrow range of data values. The more actions you perform on the linear image, the more important numerical accuracy becomes. Once the image is nonlinear, we are closer to the final information representation, the information contents are supported by a wider range of values, and hence accuracy is less critical in general. It depends on the original data and the applied processes. Usually 16 bits are sufficient for most intensity transformations (histograms, if very aggressive transformations are not used, curves, moderate saturation increments, etc.), and also for algorithms such as unsharp mask, convolutions, etc., applied following 'classical' processing workflows. When more complex processes are applied, such as dynamic range compression, noise reduction, multiscale processing, etc., 16 bits can easily become insufficient, even if the original integrated data can be represented with 16 bits per sample.
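
To put rough numbers on the narrow-range argument (an illustrative calculation with hypothetical values, not measurements from a real image): suppose the linear data of interest occupies only 1% of the normalized [0,1] range.

Code:
import numpy as np

lo, hi = 0.01, 0.02            # hypothetical narrow linear range holding most of the signal

# Distinct 16-bit integer levels available inside that range
levels_16 = int(round((hi - lo) * 65535))
print("16-bit levels in the range:", levels_16)          # ~655

# Approximate number of distinct 32-bit float values inside the same range
spacing_32 = np.spacing(np.float32(lo))                  # ULP of float32 near 0.01
print("approx. float32 values in the range:", int((hi - lo) / spacing_32))  # millions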

The problem with insufficient numerical accuracy is posterization. Posterization occurs when a process generates more intermediate values (between existing pixel values) than the employed data format can represent. For example, noise reduction may generate extremely smooth gradients. Those gradients are 3D functions requiring a large number of discrete values to be represented accurately. If too few discrete values are available, the representation of the gradients may produce visible areas of constant brightness. The problem tends to accumulate and get worse as successive processing steps are applied.
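
A minimal sketch of the posterization mechanism (hypothetical values, numpy rather than PixInsight): stretch a very faint, smooth gradient that was stored with 16-bit precision and count how many distinct output levels survive.

Code:
import numpy as np

# A faint, very smooth gradient, as it would be stored in a 16-bit integer image
grad = np.linspace(0.0100, 0.0102, 4096)
grad_16 = np.round(grad * 65535) / 65535         # what the 16-bit file actually holds

# A strong stretch that expands this faint range (stand-in for dynamic range compression)
stretched_float = grad ** 0.25                   # from full-precision data
stretched_16    = grad_16 ** 0.25                # from the 16-bit quantized data

print("distinct output levels, float source :", np.unique(stretched_float).size)  # 4096
print("distinct output levels, 16-bit source:", np.unique(stretched_16).size)     # ~14 -> flat steps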

Judging the importance of a wide numerical range for processing in terms of 'visible differences in the final 8-bit image' is just one evaluation criterion, not the only one. In my opinion, even if the final results may look the same on 8-bit output media, the 32-bit floating point or integer formats are absolutely necessary for anything but trivial work. The 8-bit limitation of most output devices is just today's situation. Keep in mind that your final processed images are information objects that will persist for a long time, at least until you can repeat an image of the same region of the sky.

 
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline sleshin

  • PixInsight Old Hand
  • ****
  • Posts: 431
Re: Internal processing bit depth
« Reply #4 on: 2011 February 21 07:25:29 »
Along these lines, if one opens a 16-bit non-linear TIFF image in PI, say an image worked on and saved in Photoshop, should it be converted to 32-bit FP in PI before further processing? If not, and further processing in PI is done on the 16-bit image, should that processed image now be saved as 32-bit FP? Or does it matter?

Steve
Steve Leshin

Stargazer Observatory
Sedona, Arizona

Offline georg.viehoever

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2132
Re: Internal processing bit depth
« Reply #5 on: 2011 February 21 07:49:28 »
Hi Steve,

32-bit floats (single precision) give you 23 bits of mantissa (significand), meaning they have at least 23 bits of accuracy even if you don't adjust the exponent of the float number; see http://en.wikipedia.org/wiki/Floating_point#Internal_representation . That means that 16-bit integer images fit into 32-bit floats with no loss of information at all. The only thing you pay for is the 2x factor in memory consumption, and even speed is usually not reduced by this.
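
A quick check of the lossless claim (a numpy sketch, assuming the usual normalization of 16-bit samples to [0, 1]): round-trip every possible 16-bit sample value through 32-bit floating point and verify that all values stay distinct.

Code:
import numpy as np

# Every possible 16-bit integer sample value, normalized to [0, 1]
samples = np.arange(65536, dtype=np.float64) / 65535.0

# Round-trip through 32-bit floating point
as_f32 = samples.astype(np.float32)
back   = as_f32.astype(np.float64)

# With 23+1 effective mantissa bits, all 65536 values remain distinct
print("distinct float32 values:", np.unique(as_f32).size)        # 65536
print("max round-trip error   :", np.abs(back - samples).max())  # far below one 16-bit step (~1.5e-5)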

I always convert integer images to float as the first step in processing. You never lose precision, and you gain a lot of headroom and dynamic range for all subsequent processing.

Georg
Georg (6 inch Newton, unmodified Canon EOS40D+80D, unguided EQ5 mount)

Offline RBA

  • PixInsight Guru
  • ****
  • Posts: 511
    • DeepSkyColors
Re: Internal processing bit depth
« Reply #6 on: 2011 February 21 13:15:27 »
Quote
Judging the importance of a wide numerical range for processing in terms of 'visible differences in the final 8-bit image' is just one evaluation criterion, not the only one.

Absolutely. I have no intention of forcing my images down to int16 after delinearization just because I know the differences in the final 8-bit JPEG, viewed on 6-8 bit monitors, might be hard to tell apart.

It was a cloudy Sunday afternoon, I was fighting a cold, and there wasn't much else interesting to do  ;)