Internally, most processes work with real- or complex-valued pixels in either 32-bit or 64-bit floating point format, irrespective of the target image's data format. For 32-bit integer images, all internal calculations are always carried out in 64-bit floating point. The goal is to minimize roundoff and truncation errors, even at the cost of degrading performance. There are exceptions, such as LUT-based transformations (histogram and curves, for example) applied to 8-bit and 16-bit integer images, and temporary working images used for structure or edge detection.
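To make the idea concrete, here is a minimal sketch in Python/NumPy of that general scheme: promote pixel data to a floating point working format, do all the arithmetic there, and requantize only once at the end. This is my own illustration, not actual PixInsight code, and the exact dtype mapping for 8-bit and 16-bit inputs is an assumption.

```python
import numpy as np

def working_dtype(image: np.ndarray) -> np.dtype:
    """Pick an internal working format for a given pixel data type.
    The mapping below is illustrative, not PixInsight's actual rule."""
    if image.dtype == np.uint32:
        return np.dtype(np.float64)      # 32-bit integers always go to 64-bit float
    if image.dtype in (np.uint8, np.uint16):
        return np.dtype(np.float32)      # assumed: 32-bit float is enough here
    return image.dtype                   # float32/float64 images stay as they are

def apply_in_float(image: np.ndarray, transform) -> np.ndarray:
    """Apply `transform` in floating point, then return the original format."""
    src = image.dtype
    if np.issubdtype(src, np.integer):
        scale = np.iinfo(src).max
        work = image.astype(working_dtype(image)) / scale     # normalize to [0, 1]
        out = np.clip(transform(work), 0.0, 1.0)
        return np.rint(out * scale).astype(src)               # requantize only once
    return transform(image.astype(working_dtype(image)))

# Example: a mild midtones stretch applied to a 16-bit image.
img16 = (np.random.default_rng(0).random((256, 256)) * 65535).astype(np.uint16)
stretched = apply_in_float(img16, lambda x: x ** 0.5)
```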
I duplicated the processing of an image in both 32-bit floating point and 16-bit integer format. Despite some temporary histogram combing and the like on the 16-bit image, the resulting 8-bit JPEGs were in the end almost identical, impossible to tell apart by eye.
Makes sense?
Absolutely. This happens because 16 bits per sample are just sufficient for the particular image data you've used and the procedures you've applied. By 'image data' I refer to the integrated (i.e. stacked) linear image(s), not to the individual frames. The whole data set that you calibrate and integrate is what determines the initial range of numerical values. For this reason it is very important to obtain the results of the calibration and integration processes in 32-bit floating point format.
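As a rough illustration of why this matters (entirely synthetic numbers, not real data): averaging many 16-bit frames produces intermediate values that a 16-bit integer result simply cannot hold, while a 32-bit floating point result preserves them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 50
# Simulated faint signal around ~120 ADU plus noise, stored as 16-bit integer frames.
frames = np.clip(rng.normal(120.0, 10.0, size=(n_frames, 256, 256)), 0, 65535)
frames = frames.astype(np.uint16)

stack_f32 = frames.mean(axis=0, dtype=np.float64).astype(np.float32)
stack_i16 = np.rint(frames.mean(axis=0)).astype(np.uint16)  # rounded back to 16 bits

print("distinct values in 32-bit float stack  :", np.unique(stack_f32).size)
print("distinct values in 16-bit integer stack:", np.unique(stack_i16).size)
```

The floating point stack retains thousands of distinct levels around the mean signal; the 16-bit version collapses them into a handful of integer steps before any processing has even started.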
Accuracy is critical during the linear phase. This is because when the image is linear, most of the information contents are supported by a narrow range of data values. The more operations you perform on the linear image, the more important numerical accuracy becomes. Once the image is nonlinear, we are closer to the final information representation, the information contents are supported by a wider range of values, and hence accuracy is less critical in general. It depends on the original data and the applied processes. Usually 16 bits are sufficient for most intensity transformations (histograms, as long as very aggressive transformations are not used; curves; moderate saturation increments; etc.), and also for algorithms such as unsharp masking, convolutions, etc., applied in 'classical' processing workflows. When more complex processes are applied, such as dynamic range compression, noise reduction, multiscale processing, etc., 16 bits may easily become insufficient, even if the original integrated data can be represented with 16 bits per sample.
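The following sketch, again with synthetic data, shows what 'a narrow range of data values' means in practice: in the linear image nearly all of the signal occupies a small fraction of the available 16-bit codes, and only after a nonlinear stretch does it spread over a wider range.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical linear image, normalized to [0, 1]: sky background near 0.01
# with faint nebulosity-scale noise, plus a few bright stars near saturation.
linear = np.clip(rng.normal(0.01, 0.005, size=(1024, 1024)), 0, 1)
linear[0, :200] = rng.uniform(0.5, 1.0, 200)   # a handful of bright pixels

lo, hi = np.percentile(linear, [0.5, 99.5])
codes_linear = int(round((hi - lo) * 65535))
print(f"99% of the linear data spans only ~{codes_linear} of 65536 possible 16-bit codes")

# After a nonlinear stretch, the same information spreads over a much wider range.
stretched = linear ** 0.25
lo_s, hi_s = np.percentile(stretched, [0.5, 99.5])
print(f"after stretching it spans ~{int(round((hi_s - lo_s) * 65535))} codes")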
The problem with insufficient numerical accuracy is posterization. Posterization occurs when a process generates more intermediate values (between existing pixel values) than the employed data format can represent. For example, noise reduction may generate extremely smooth gradients. Those gradients are three-dimensional functions (brightness as a function of two spatial coordinates) that require a large number of discrete values to be represented accurately. If too few discrete values are available, the representation of the gradients may produce visible areas of constant brightness. The problem tends to accumulate and get worse as successive processing steps are applied.
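Here is a minimal, tool-independent demonstration of the effect: a very smooth, low-amplitude gradient of the kind a strong noise reduction can produce in a dark background collapses into a few flat bands when forced into too few discrete levels.

```python
import numpy as np

# A smooth gradient spanning only 0.1% of the normalized [0, 1] dynamic range.
x = np.linspace(0.0, 0.001, 2000, dtype=np.float64)

as_float32 = x.astype(np.float32)                  # plenty of intermediate values survive
as_uint16  = np.rint(x * 65535).astype(np.uint16)  # only ~67 discrete codes remain
as_uint8   = np.rint(x * 255).astype(np.uint8)     # essentially a single flat value

print("distinct levels, float32:", np.unique(as_float32).size)   # 2000
print("distinct levels, uint16 :", np.unique(as_uint16).size)    # ~67
print("distinct levels, uint8  :", np.unique(as_uint8).size)     # 1
```

Each lost intermediate level becomes a visible step of constant brightness, and every further operation applied to the quantized data can only widen those steps.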
Judging the importance of a wide numerical range for processing in terms of 'visible differences in the final 8-bit image' is just one evaluation criterion, not the only one. In my opinion, even if the final results may look the same on 8-bit output media, the 32-bit floating point or integer format is absolutely necessary for anything but trivial work. The 8-bit limitation of most output devices is just where the technology stands today. Consider that your final processed images are information objects that will persist for a long time, at least for as long as you can't repeat an image of the same region of the sky.