Hi Tyrell,
Here are side-by-side comparisons between the linear images with automatic STFs enabled and the same stretch applied with HT:
With your 32-bit floating point XISF image:
https://pixinsight.com/forum-images/20190729/stf/01.png
With your 16-bit FITS image:
https://pixinsight.com/forum-images/20190729/stf/02.png
Both screen representations are essentially identical in both cases, as expected.
As for the (usually small) visible differences between a linear image with STF enabled and the same nonlinear transformation applied with HT to the same image, there are three causes in current PixInsight versions:
- For reduction screen zoom ratios (1:2, 1:3, ..., 1:100), STF is always applied to subsampled data, that is, to the pixel data interpolated at the screen representation scale. However, when the same transformation is applied via HT, it is evaluated pixel by pixel on the actual image data. This leads to small differences because the input pixel values differ in each case; see the first sketch after this list.
- For reduction screen zoom ratios under 1:2 (1:3, ..., 1:100), pixel data are interpolated using a very fast but inaccurate sparse image subsampling algorithm. Sparse subsampling becomes less accurate as the zoom ratio decreases (hence you may observe more significant differences for zoom ratios under 1:5 or so), but it is extremely fast for screen visualization; the second sketch after this list illustrates this. This default behavior can be disabled with Preferences > Miscellaneous Image Window Settings > Use fast screen renditions. However, disabling this option is not recommended because it may degrade the performance of the user interface considerably, especially if you don't have a fast machine (CPU benchmark indexes above 12000) and work with large images.
- STF is always applied through lookup tables (LUTs) at 16-bit or 24-bit resolution (depending on the 24-bit STF LUT setting for each image), while HT is always applied directly to the actual pixel data, including 32-bit integer and 32/64-bit floating point formats. This may cause visible differences for 32-bit integer and 32/64-bit floating point images as a result of roundoff and truncation errors. Normally these errors are negligible and undetectable visually, except for HDR images, where they can show up as posterization artifacts (and 24-bit STF LUTs solve posterization problems in virtually all practical cases); see the third sketch after this list.
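If it helps to see the first point in code, here is a minimal sketch in plain Python/NumPy, not the actual PixInsight rendering code. The midtones transfer function is the standard one used by STF and HT; the block-average reduction is only a stand-in for the real screen-scale interpolation. Because the stretch is nonlinear, reducing first and stretching afterwards (the STF preview path) does not give exactly the same values as stretching the actual pixels first and reducing afterwards (the HT path):

import numpy as np

def mtf(x, m):
    # Standard midtones transfer function with midtones balance m, for x in [0, 1].
    return (m - 1) * x / ((2 * m - 1) * x - m)

def block_reduce(img, k):
    # Naive k x k block average, standing in for screen-scale interpolation.
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(1)
linear = rng.random((512, 512)) ** 4          # synthetic dark linear image
m = 0.01                                      # aggressive midtones balance, as an auto-STF might pick

stf_path = mtf(block_reduce(linear, 4), m)    # reduce first, then stretch (STF preview at 1:4)
ht_path = block_reduce(mtf(linear, m), 4)     # stretch actual pixels, then reduce (HT + display)

print("maximum difference:", np.abs(stf_path - ht_path).max())

The difference is small but nonzero, which is essentially what you are comparing between the two renditions.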
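The second point can be illustrated the same way. Here the "sparse" path simply keeps one pixel per k x k block, which is only an assumption for illustration (the actual fast subsampling algorithm is more elaborate), while the reference path averages every pixel in the block. On a smooth synthetic image, the sparse result drifts further from the accurate one as the reduction factor grows:

import numpy as np

def sparse_reduce(img, k):
    # Keep a single sample per k x k block (fast, inaccurate).
    return img[::k, ::k]

def average_reduce(img, k):
    # Average every pixel in each k x k block (slow, accurate).
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

n = 1200
y, x = np.mgrid[0:n, 0:n] / n
img = (x + y) / 2                             # smooth synthetic gradient in [0, 1]

for k in (2, 5, 10, 20):                      # zoom ratios 1:2, 1:5, 1:10, 1:20
    a = average_reduce(img, k)
    s = sparse_reduce(img, k)[:a.shape[0], :a.shape[1]]
    print(f"1:{k}  mean absolute error = {np.abs(a - s).mean():.5f}")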
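Finally, the third point, sketched with the same MTF. The 16-bit and 24-bit table sizes mirror the STF LUT resolutions mentioned above, but the lookup scheme used here (round the input to the nearest table entry) is a simplification of what the screen renderer really does. With an aggressive stretch of dark floating point data, the 16-bit table collapses many distinct input values onto far fewer output levels, which is the posterization effect, and its errors are orders of magnitude larger than with the 24-bit table:

import numpy as np

def mtf(x, m):
    return (m - 1) * x / ((2 * m - 1) * x - m)

def apply_lut(x, m, bits):
    # Precompute the MTF on a 2**bits-entry table and index it with rounded inputs.
    n = (1 << bits) - 1
    table = mtf(np.arange(n + 1, dtype=np.float64) / n, m)
    return table[np.round(x * n).astype(np.int64)]

rng = np.random.default_rng(3)
x = rng.random(1_000_000) ** 6                # dark, HDR-like linear floating point values
m = 0.001                                     # very aggressive midtones balance

direct = mtf(x, m)                            # HT-like: evaluated on the actual pixel values
for bits in (16, 24):
    lut = apply_lut(x, m, bits)
    print(f"{bits}-bit LUT  max error = {np.abs(lut - direct).max():.1e}, "
          f"distinct output levels = {np.unique(lut).size}")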
Sorry for the slightly technical explanation, but there is no other way to describe this efficiently. Representing large images on the screen is not a trivial task, especially for an application with the performance of PixInsight, where you can work with huge linear images in five pixel formats under a fully color-managed environment. This requires a well-balanced, carefully designed image navigation and visualization interface, where screen representation accuracy has to be sacrificed to some extent for the sake of performance and usability.