We are going to combine these two images of Comet Holmes, taken by my friend José Luis Lamadrid and me:
30 × 15 s:
26 × 1 min:
The RGB channels of these two DSLR images have been rescaled to achieve the proper color balance, with a ratio of 1:2.22:2.86.
As you can see, the inner coma is completely saturated in the one-minute exposure, so we are going to use the 15-second exposure to recover the information lost to the camera's limited dynamic range.
The first step is to calculate the fitting factor between the two images. To do this, we need to know the illumination of three regions of the images: two differently illuminated zones of the comet, and the sky background level.
We create two small previews in the one-minute image; these will be the illumination references for the comet. It's important to avoid very highly illuminated pixels (due to possible non-linearity of the sensor, especially on ABG-enabled ones) and saturated stars. For a better view of the image, we can adjust the ScreenTransferFunction, as pictured below:
We must take care to place the low-illumination preview in an area with sufficient signal, because this region will be much noisier in the short exposure. In this case, the mean values for the previews in this image are:
_1min_high preview:
R: 0.2489940927
G: 0.3614411545
B: 0.4342339171
_1min_low preview:
R: 0.1453883686
G: 0.2159542766
B: 0.2605371721
Now we must define one preview over a sky background region. To see this region well, we apply a rather aggressive STF:
OK, now we have defined the three regions we need, but we must compare them with the corresponding regions of the fifteen-second exposure. Just drag and drop the preview selector (the vertical tab with the preview identifier) onto the view selector tray of the other image to duplicate the previews:
Convert these previews into independent images by dragging them onto the application background. We can rename the identifiers of the new images, as seen below, and iconize them, because we won't need to look at these images anymore:
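Before opening PixelMath, the per-channel fitting factor can be checked by hand. Below is a minimal Python sketch using the one-minute preview means listed above; the fifteen-second preview means are hypothetical placeholders (not measurements from this session), chosen only to illustrate the arithmetic. Note that the background medians cancel out of each difference:

```python
# Per-channel fitting factor, following the tutorial's formula:
#   k = ((Avg(high15) - Med(bg15)) - (Avg(low15) - Med(bg15)))
#     / ((Avg(high1)  - Med(bg1))  - (Avg(low1)  - Med(bg1)))
# which reduces to (high15 - low15) / (high1 - low1).

# One-minute preview means (from the tutorial):
high_1min = {"R": 0.2489940927, "G": 0.3614411545, "B": 0.4342339171}
low_1min  = {"R": 0.1453883686, "G": 0.2159542766, "B": 0.2605371721}

# Fifteen-second preview means (hypothetical placeholders):
high_15s = {"R": 0.0625, "G": 0.0905, "B": 0.1090}
low_15s  = {"R": 0.0365, "G": 0.0540, "B": 0.0655}

for ch in "RGB":
    k = (high_15s[ch] - low_15s[ch]) / (high_1min[ch] - low_1min[ch])
    print(f"{ch}: {k:.3f}")
```

With these placeholder values, each channel comes out near the 15 s / 60 s exposure ratio of 0.25, which is what we would expect for a linear sensor.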
Now for some fun with the maths. We will directly scale the one-minute exposure to fit the fifteen-second one. Obviously, we will use the PixelMath module. The equation we have to write, according to the identifiers we're using, is below:
((_1min-Med(_1min_bg))*((Avg(_15sec_high)-Med(_15sec_bg))-(Avg(_15sec_low)-Med(_15sec_bg)))/((Avg(_1min_high)-Med(_1min_bg))-(Avg(_1min_low)-Med(_1min_bg))))+0.05
This equation will multiply the one minute image by the fitting factor. Some notes on the equation:
For the comet regions, we calculate the average pixel value (the Avg function), because we want to know the total amount of light the camera is detecting. For the background region, however, we calculate the median (the Med function) to avoid measurement errors caused by noise and stars in the area.
In the equation, we apply the fitting factor to the background-subtracted image, and then add a small pedestal (here, 0.05) to preserve all the information in the faintest areas of the image.
Of course, we must deactivate the “Rescale result” option!
We will send the result to a new image, named _1min_rescale:
The resulting image, below, is very dark, as we are multiplying it by roughly 0.25, and the median values of its RGB channels sit at 0.05:
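The whole rescaling step can be sketched in numpy. This is only an illustration of the arithmetic, not the PixelMath implementation; the background median and fitting factor below are hypothetical round numbers:

```python
import numpy as np

# Numpy sketch of the PixelMath rescale; all values are hypothetical.
rng = np.random.default_rng(0)
one_min = rng.random((8, 8, 3))   # stand-in for the _1min image, values in [0, 1)

bg_1min  = 0.02    # Med(_1min_bg), hypothetical
k        = 0.25    # fitting factor from the previous step
pedestal = 0.05    # preserves faint signal after background subtraction

one_min_rescaled = (one_min - bg_1min) * k + pedestal
# Note: the result is NOT renormalized afterwards, matching the
# disabled "Rescale result" option in PixelMath.
```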
At this point, we are ready to join the two exposures. Or are we? To cover the saturated area of the longer exposure with information from the short one, we can apply a maximum operation. But doing this over the whole image is a very bad idea, because the fifteen-second exposure has much more noise in the dim areas than the one-minute exposure does. So we need a mask!
We only need to recover information in the areas where at least one of the three RGB channels is saturated. The first step in making the mask is to calculate a black-and-white image where each pixel is the maximum of its RGB values. The equation in PixelMath is rather simple:
Max($target[0],$target[1],$target[2])
The output of the PixelMath instance will be a grayscale image, with the “HDR_Mask” identifier. We must apply this calculation to the original one minute exposure:
This is the resulting image:
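The Max($target[0],$target[1],$target[2]) expression has a direct equivalent in any array language. A tiny numpy sketch, on a made-up one-row, two-pixel RGB image:

```python
import numpy as np

# Per-pixel maximum over the R, G, B channels (last axis),
# equivalent to PixelMath's Max($T[0], $T[1], $T[2]).
rgb = np.array([[[0.2, 0.8, 0.5],
                 [1.0, 0.3, 0.1]]])     # shape (1, 2, 3): one row, two pixels
hdr_mask = rgb.max(axis=-1)             # grayscale result, shape (1, 2)
print(hdr_mask)                         # first pixel -> 0.8, second -> 1.0
```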
Once we have the desired B/W image, we must decide where the illumination limit lies, above which we will superpose the short-exposure image. This can be accomplished with a curve transform. In this case, the limit will be at a pixel value of 0.7, with a transition zone of ±0.05. This transition is important to mitigate any small error in the fitting factor. Given the threshold nature of this mask, I think it's better to build the curve with linear interpolation:
After applying the curve transform, we have this image:
It's convenient to make the mask a bit smoother. This is easily done with the À Trous Wavelet tool, disabling the first layers:
This is our final mask:
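The tutorial does the smoothing with the À Trous Wavelet tool by disabling the first (small-scale) layers. As a rough stand-in for that, a separable box blur also softens the hard edges of a thresholded mask; this sketch is an approximation, not the wavelet algorithm itself:

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude stand-in for suppressing the first wavelet layers:
    a separable box blur that softens hard mask edges."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = np.pad(img, radius, mode="edge")
    # Blur along rows, then along columns; "valid" restores the original size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

mask = np.zeros((7, 7))
mask[:, 4:] = 1.0                 # hard 0/1 edge, like the thresholded mask
smooth = box_blur(mask)           # the edge now ramps gradually from 0 to 1
```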
Finally, we can activate the mask on the rescaled one-minute exposure and superpose the fifteen-second image over it. To do this, we simply substitute the one-minute exposure with the fifteen-second one; it's important to subtract the background level of the fifteen-second image and add the same pedestal (0.05) so that it fits the other image:
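The masked substitution can be sketched as a per-pixel blend: where the mask is 1, the (background-subtracted, pedestal-added) fifteen-second frame wins; where it is 0, the rescaled one-minute frame is kept. All the data and background values below are hypothetical:

```python
import numpy as np

# Hypothetical stand-ins for the three images involved in the final step.
rng = np.random.default_rng(1)
one_min_rescaled = rng.random((8, 8, 3)) * 0.2 + 0.05
fifteen_sec      = rng.random((8, 8, 3))
mask             = rng.random((8, 8, 1))   # HDR_Mask, broadcast over RGB

bg_15s   = 0.03    # Med(_15sec_bg), hypothetical
pedestal = 0.05    # same pedestal used on the rescaled one-minute image

short_fit = fifteen_sec - bg_15s + pedestal
hdr = one_min_rescaled * (1 - mask) + short_fit * mask
```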
This is our final result:
If we raise the midtones of the image, we can better appreciate the whole dynamic range: