Given the very nature of astroimaging, an argument could be made that every image that involves more than just stars is HDR.
I like this idea. We usually distinguish between HDR images and HDR problems. The former are images where two or more exposures of the same subject, taken with different durations, are combined to cover a wide range of tonal values. The concept of an HDR problem is more general: it applies whenever a brightness range (not necessarily a very large one) is represented by a large set of numerical values in a single image. HDR problems happen all the time in astrophotography; this image is a nice example.
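For intuition, here is a toy sketch of a two-exposure combination. This is only an illustration of the general idea, not PixInsight's actual HDRComposition algorithm: pixels saturated in the long exposure are replaced by the short exposure rescaled by the exposure-time ratio, so the result spans a tonal range neither exposure covers alone.

```python
def hdr_combine(long_exp, short_exp, ratio, sat=0.98):
    """Toy two-exposure HDR combination (illustrative only).

    long_exp, short_exp: lists of pixel values normalized to [0, 1].
    ratio: exposure-time ratio t_long / t_short.
    Pixels at or above the saturation threshold in the long exposure
    are replaced by the rescaled short-exposure values.
    """
    return [s * ratio if l >= sat else l
            for l, s in zip(long_exp, short_exp)]

# A saturated pixel (1.0) is replaced by its rescaled short-exposure
# value, extending the output range beyond [0, 1].
print(hdr_combine([0.5, 1.0], [0.0078125, 0.03125], ratio=64))
```

Note that the combined values exceed 1.0, which is exactly why the result must be stored in a high-precision format rather than a conventional integer range.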
As for the need for 64-bit images, from the end user's perspective they are mainly used to store large HDR compositions. See this post for a good example, where the HDR composition occupies about 2^30 discrete sample values (more than 2^24, which is the capacity of the 32-bit floating point format). In this case the 32-bit unsigned integer format, also supported in PixInsight, would suffice. However, working with floating point data can be more convenient, especially taking into account that most internal operations work with real and complex numbers. For this reason the HDRComposition tool generates 64-bit floating point images by default.
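The arithmetic behind those figures can be checked directly: a 32-bit IEEE 754 float has a 24-bit significand, so it can represent only about 2^24 consecutive integer levels exactly, while the 53-bit significand of a 64-bit float covers 2^30 levels with room to spare. A quick sketch using only the Python standard library:

```python
import struct

def to_f32(x):
    # Round a Python float (64-bit) to the nearest 32-bit float
    # via a pack/unpack round trip.
    return struct.unpack('f', struct.pack('f', x))[0]

# 24-bit significand: integers above 2**24 can no longer all be
# represented, so adjacent sample values start to collapse.
assert to_f32(2**24) == to_f32(2**24 + 1)

# At 2**30 the float32 spacing is 128, so ~2**30 discrete HDR
# levels cannot be told apart in a 32-bit float image.
assert to_f32(2**30) == to_f32(2**30 + 1)

# A 64-bit float (53-bit significand) keeps them all distinct.
assert float(2**30) != float(2**30 + 1)
```

The same reasoning shows why 32-bit unsigned integers (2^32 distinct values) would also hold 2^30 levels, as noted above, at the cost of less convenient arithmetic.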
The extended 64-bit precision can also be necessary to work with images numerically, as pure data objects rather than as representable images, but this isn't something most users have to care about.