Author Topic: Compositing HDR Images, First Part. -- English version

vicent_peris
http://www.astrofoto.es/
« on: 2007 November 10 05:16:44 »
Hi all,


here we are going to explain, in a simple way, my technique for photographing objects whose dynamic range is too large to be accommodated by the camera.

In the field, the techniques are the same as always: we have to make several exposures of different lengths to cover the entire dynamic range of the object, from the brighter parts to the fainter ones. Of course, the deeper we want to go, the larger the dynamic range to capture will be.

Where this technique differs from others is in the way the high dynamic range image is composed. We are going to synthesize an image similar to the one that would be produced by an ideal camera. This camera would have pixels with a gigantic electron well (on the order of millions of electrons), and thus virtually no saturation point. We would be able to make long exposures of the scene to capture the faintest parts without saturating the brighter ones, and the output image would use a data range larger than 16 bits. Through this technique, I've obtained images with a real dynamic range as high as 24 bits of gray levels per channel.

As you may be guessing, we are going to compose the HDR image in a completely linear way. Imagine the object being photographed as a pyramid. The height of the pyramid represents the dynamic range of the object, and we truncate it horizontally into various sections because our camera is not able to pick up the entire pyramid. We know the total height of the pyramid, but we don't know, a priori, the height of each section. This is the parameter we are going to determine, and it is the key to properly reconstructing our pyramid.

This is the basic principle. Since our camera doesn't have a sufficiently large dynamic range, we are going to superpose the shorter exposures over the longer ones, but only where the longer exposure is saturated. Since each exposure is a different section of the pyramid, we have to rescale the longer image to fit it to the shorter one.
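
To make the idea concrete, here is a minimal sketch in Python with NumPy of this superposition step (my own illustration, not the exact implementation). It assumes two already calibrated and registered linear images scaled to [0, 1], a fitting factor F between them (we will see below how to obtain it), and a hypothetical saturation threshold:

import numpy as np

def compose_hdr(long_exp, short_exp, F, sat_level=0.95):
    # long_exp, short_exp: calibrated, registered, linear images in [0, 1].
    # F: fitting factor between the exposures (illumination of long / short).
    # sat_level: value above which the long exposure is considered saturated (assumed).
    long_rescaled = long_exp / F          # rescale the longer image to fit the shorter one
    saturated = long_exp >= sat_level     # pixels burned out in the long exposure
    return np.where(saturated, short_exp, long_rescaled)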

Imagine we have a light source with a given intensity, and imagine we make two photographs of this source with different durations, E1 and E2, using a one-pixel camera. If the signal recorded during E1 is S1, then the signal recorded during E2 is given by:

S2 = S1*(E2/E1)

In more practical terms:

We take two exposures of the light source: one of 1 second (E1) and another of 3 seconds (E2). If during the first exposure the light produces a signal of 100 electrons, the signal for the second exposure will be 300 electrons:

S2 = 100*(3/1) = 300 e-

To have two identical one-pixel images, we must therefore divide S2, once digitized, by a factor of three, so that we have the same numerical value in the pixel of both images. This proportion, as we have seen above, is the key to joining both images.
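
The same arithmetic, written as a few lines of Python just to check the numbers from the example above:

E1, E2 = 1.0, 3.0        # exposure lengths in seconds
S1 = 100.0               # electrons recorded during E1
S2 = S1 * (E2 / E1)      # 300 electrons recorded during E2
print(S2 / (E2 / E1))    # 100.0 -- dividing by the exposure ratio recovers S1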

To fit both exposures, we need a reference point. In daylight photography we are in an almost ideal scenario: our reference point is simply the exposure length. If we make 1/60 s and 1/30 s exposures, once image calibration has been done (especially the subtraction of the bias frame), we just divide the 1/30 s image by a factor of two.
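
As a sketch (again in Python with NumPy, with hypothetical variable names), the daylight case reduces to this:

import numpy as np

def fit_by_exposure_time(img_long, img_short, bias, t_long=1/30, t_short=1/60):
    # Subtract the bias frame from both images, then rescale the longer exposure
    # by the ratio of exposure times so that it matches the shorter one.
    long_cal = img_long.astype(np.float64) - bias
    short_cal = img_short.astype(np.float64) - bias
    long_rescaled = long_cal * (t_short / t_long)    # here: divide the 1/30 s image by two
    return long_rescaled, short_cal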

But, again, there are several factors that make this operation useless in astrophotography. Atmospheric extinction, sky transparency and sky background illumination can vary between exposures. Therefore, we need to calculate the proportions between exposures directly from the objects photographed.

The idea comes from a technique used to measure the linearity of an image sensor. Adapted to the problem of HDR images, the idea is to measure the difference of illumination between two regions of one image, and compare it to the difference observed in the same regions of the other exposure. This gives us the proportion between the illuminations of the two images. The equation that gives the fitting factor, F, is:

F = (Image1_region1 - Image1_region2) / (Image2_region1 - Image2_region2)
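
A sketch of this measurement in Python with NumPy (the use of region medians is my assumption; the equation only asks for the illumination of each region):

import numpy as np

def fitting_factor(image1, image2, region1, region2):
    # region1 and region2 are slices or boolean masks selecting the same
    # two areas in both registered images.
    i1_r1 = np.median(image1[region1])
    i1_r2 = np.median(image1[region2])
    i2_r1 = np.median(image2[region1])
    i2_r2 = np.median(image2[region2])
    return (i1_r1 - i1_r2) / (i2_r1 - i2_r2)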

This will correct for atmospheric extinction and sky transparency, but it will not correct for different sky background levels. So we have to modify the equation to take the sky brightness into account: we must subtract the sky background level of each image from every measured region:

F = ((Image1_region1 - Image1_bg) - (Image1_region2 - Image1_bg)) / ((Image2_region1 - Image2_bg) - (Image2_region2 - Image2_bg))
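
And the corresponding sketch with the background subtraction added (bg_region is a hypothetical region of pure sky background in both images; medians are again my assumption):

import numpy as np

def fitting_factor_bg(image1, image2, region1, region2, bg_region):
    bg1 = np.median(image1[bg_region])    # sky background level of image 1
    bg2 = np.median(image2[bg_region])    # sky background level of image 2
    num = (np.median(image1[region1]) - bg1) - (np.median(image1[region2]) - bg1)
    den = (np.median(image2[region1]) - bg2) - (np.median(image2[region2]) - bg2)
    return num / den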

This last equation corrects for all the factors that differentiate our astrophotos from a daylight picture (excluding, of course, random noise). Since it's necessary to look at the problem from a practical point of view, we will continue very soon in another post.