NOTE: The HDRComposition script has been superseded by the new HDRComposition tool. More information here:
http://pixinsight.com/forum/index.php?topic=2320.0
==================
Managing high dynamic ranges has never been so easy. We present here an HDR image compositing tool that, starting from raw calibrated data, allows us to achieve the result below in just a few mouse clicks:
This image is composed of exposures ranging from 2.5 seconds to 30 minutes, acquired with a Canon 300D camera on FC-100 and 80ED telescopes by José Luis Lamadrid and me.
This script is based on the algorithm I published on this forum:
http://pixinsight.com/forum/viewtopic.php?t=422
http://pixinsight.com/forum/viewtopic.php?t=423
The JavaScript implementation of this algorithm was created by Oriol Lehmkuhl, and it is distributed with version 1.3 of PixInsight.
The HDR Composition Algorithm
The algorithm composes linear high dynamic range images. This means that linearity is preserved over the whole dynamic range of the objects. Having a linear HDR image is thus equivalent to having a digital sensor with a photosite well depth of many millions of electrons.
To compose the HDR image, each individual image is multiplied by a factor to match the flux of the other images in the stack. The final result is like a fragmented pyramid, where shorter exposures are superimposed over the saturated areas of the longer ones:
To find the fitting factor between images, the algorithm compares the illumination differences in the same image regions between successive image pairs. In the example above, the 30-minute exposure has been multiplied by 0.004 to match the illumination level of the 2.5-second exposure.
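As a rough illustration of the idea (not the script's actual code), the fitting factor can be sketched as the median ratio of pixel values over regions that are well exposed in the long frame and unsaturated in the short one. The `fit_factor` helper and the 1-D pixel lists below are hypothetical simplifications:

```python
def fit_factor(short_img, long_img, lo=0.2, hi=0.8):
    """Median ratio short/long over pixels that are well exposed
    (inside [lo, hi]) in the long exposure and unsaturated in the
    short one.  Hypothetical helper; the script's real statistics
    may differ."""
    ratios = sorted(s / l for s, l in zip(short_img, long_img)
                    if lo <= l <= hi and 0.0 < s < 1.0)
    return ratios[len(ratios) // 2]  # median is robust to outliers

# toy 1-D "images": the long exposure collects 250x more flux
short = [0.001, 0.002, 0.003, 0.0028]
long_ = [0.25, 0.50, 0.75, 0.70]
k = fit_factor(short, long_)
print(round(k, 6))  # 0.004 -- multiply the long exposure by this factor
```

Scaling the long exposure by this factor brings both frames onto the same linear flux scale, which is what preserves linearity across the whole composition.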
The Script Interface
The script interface is based on rather simple concepts. You have three main sections:
The image selection interface, where you can select the images to combine.
The parameters to control the HDR composition algorithm.
Post-processing options to compress the dynamic range of the resulting HDR image.
First we must tell the script which images we want to combine. This is done by simply clicking the Add button and selecting the desired files. The script will automatically determine which images have shorter and longer exposures.
The images being combined must be properly calibrated and registered. Keep in mind that this script doesn't perform the calibration and registration work; it only does the HDR composition and dynamic range compression.
The algorithm parameters are divided into three sections. The first parameters are related to the intensity maps that we use to compare illumination levels between images. We need to define the selection limits for pixels in two illumination ranges. One illumination range must be higher than the other, so we name the two ranges highlights and lowlights. The default parameters will usually do a good job; they set the highlight illumination limits to [0.6, 0.8] and the lowlight ones to [0.2, 0.4], both intervals being expressed in the [0, 1] range. Keep in mind that we must know exactly what the difference in illumination between the highlights and the lowlights is, so it is good practice to set the upper lowlight limit below the lower highlight limit.
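The range selection can be sketched as follows; `select_range` is a hypothetical helper working on a flat list of pixel values, not the script's real intensity maps:

```python
def select_range(pixels, lo, hi):
    """Indices of pixels whose value falls inside [lo, hi]."""
    return [i for i, v in enumerate(pixels) if lo <= v <= hi]

pixels = [0.1, 0.25, 0.35, 0.65, 0.75, 0.95]
highlights = select_range(pixels, 0.6, 0.8)  # default highlight range
lowlights  = select_range(pixels, 0.2, 0.4)  # default lowlight range
# non-overlapping ranges guarantee each pixel feeds at most one map,
# so the illumination difference between the two maps is well defined
assert set(highlights).isdisjoint(lowlights)
print(highlights, lowlights)  # [3, 4] [1, 2]
```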
As we have said, the algorithm superimposes the shorter exposures over the saturated areas of the longer ones. To set the illumination threshold of the longer exposure above which the shorter exposure is superimposed, the script builds a binary mask. Usually the default value of 0.8 works fine. As always, accuracy is limited by noise, so the two images will never fit perfectly; the Mask smoothness parameter smooths the joints between image pairs. The algorithm is usually very accurate, limited only by noise and sensor linearity, so smoothing the mask over only 2-3 pixels will work in most cases. The Mask growth parameter expands the mask to cover blooming correctly, which is crucial when working with non-ABG CCD sensors.
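A 1-D sketch of the mask construction, under the simplifying assumptions that smoothing is omitted and mask growth is a plain dilation (`hdr_mask` is a hypothetical helper, not the script's code):

```python
def hdr_mask(long_img, threshold=0.8, growth=2):
    """Binary mask marking where the shorter exposure replaces the
    longer one: pixels at or above `threshold`, dilated by `growth`
    pixels to cover blooming around the saturated region."""
    mask = [1 if v >= threshold else 0 for v in long_img]
    grown = mask[:]
    for i, m in enumerate(mask):
        if m:  # dilate: also flag the `growth` neighbours on each side
            for j in range(max(0, i - growth), min(len(mask), i + growth + 1)):
                grown[j] = 1
    return grown

row = [0.2, 0.5, 0.7, 0.95, 1.0, 0.9, 0.6, 0.3]
print(hdr_mask(row))  # [0, 1, 1, 1, 1, 1, 1, 1]
```

The real script then smooths this mask so the transition between exposures is gradual rather than a hard edge.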
We have three output image options. For images with a very large dynamic range (such as the M42 image presented at the beginning), working in the 64-bit floating point format is advisable. You also have the option to retrieve the calculated masks, for your convenience. Finally, there is an option to neutralize the sky background; this works very well for astronomical images, but usually makes no sense when composing daylight HDR images.
The third section performs the dynamic range compression through two basic steps:
An image histogram modification to raise the illumination of the darkest areas.
A HDR Wavelet Transform to compress the dynamic range of the HDR image.
Since the composition result is a linear HDR image, we need a very small value for the midtones balance. The M42 example image has a midtones balance of 0.0002 applied.
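For reference, the midtones balance m acts through the midtones transfer function used by PixInsight's histogram tools, which fixes 0 and 1 and maps the balance value itself to 0.5. A minimal sketch:

```python
def mtf(x, m):
    """Midtones transfer function: fixes 0 -> 0 and 1 -> 1 and
    maps the midtones balance m to 0.5."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# a tiny balance such as 0.0002 lifts faint linear values strongly:
print(round(mtf(0.0002, 0.0002), 6))  # 0.5 -- the balance value maps to mid grey
```

This is why a linear HDR image, whose interesting signal sits very close to zero, needs such a small balance value to become visible.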
For a quick dynamic range compression, the script lets us control two main parameters of the HDRWaveletTransform tool: the number of wavelet layers to be flattened and the number of algorithm iterations. When we activate the post-processing options, the script produces two images: the original linear HDR composition and the processed image. These post-processing options let you quickly check whether everything went well, so you can then fine-tune the results on the original HDR composition with PixInsight's tools.
Examples
Here we present three additional examples of results obtained with this script. The first one is the Cat's Eye Nebula, with data acquired by Romano Corradi through the Nordic Optical Telescope at the Roque de los Muchachos Observatory in the Canary Islands (Spain).
These data can be downloaded here:
http://www.ing.iac.es/~rcorradi/HALOES/.
The result is the combination of various exposures through O-III (teal), H-alpha (orange) and N-II (red) narrow-band filters with the ALFOSC camera:
O-III: 6x5 min, 1x2 min and 1x30 sec.
H-alpha: 3x10 min, 1x100 sec.
N-II: 3x10 min, 1x1 min.
Applying two iterations of the HDRWaveletTransform process with three layers, we can have the nebula structure exposed throughout its whole dynamic range:
To see this image at full resolution:
http://forum-images.pixinsight.com/legacy/HDRscript/ngc6543not2.jpg
(Note the bright and large haloes caused by the interference filters located near the telescope's focal plane.)
Below you can see how the algorithm recovered the blooming areas. As the shorter exposures have a lower signal-to-noise ratio, these areas show a higher noise contribution. But the scaling of the exposures has been done perfectly, since no artificial illumination step can be seen in the result:
The second example is an image of NGC 7331 taken by Vicent Peris with the 3.5-meter Zeiss telescope at Calar Alto. The image was acquired through Johnson B and V filters and an SDSS r' filter. For each filter, ten-minute and one-minute exposures were acquired.
Here are the original stretched images:
And a crop of the nucleus in the original linear images, where we can see the bloomed galaxy core:
This is the result after composing the HDR image and post-processing it with HDRWaveletTransform and a moderate histogram stretch:
The last example is a "daylight" photo (it's a street at night) taken by Rogelio Bernal with a Canon 40D DSLR. Here is the original sequence of linear images:
Here you can see the composed HDR image with raised midtones:
And the processed HDR image:
With the help of HDRWaveletTransform and color saturation curves, we can recover the entire tonal range of the scene.
Conclusion
This script gives a one-minute solution to one of the hardest problems in astronomical image processing. Most importantly, this solution is based on simple but efficient algorithms, with absolute respect for the acquired data. Human intervention is reduced to a minimum, and the HDR composition process yields an image of the object "as is". We no longer need "expert" but dubious copy-and-paste procedures: you get your real data in a few mouse clicks. Let's concentrate on extracting all the beauty from our images.
Acknowledgements
Special thanks to Rogelio Bernal, who let us experiment with his raw data, and to Juan Conejero, who kindly revised the script's source code.
Vicent.