Author Topic: Compositing HDR Images, Second Part. -- English version  (Read 11898 times)

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Compositing HDR Images, Second Part. -- English version
« on: 2007 November 10 08:56:17 »
We are going to join these two images of Comet Holmes, taken by my friend José Luis Lamadrid and me:

30x15”:



26x1':




The RGB channels of these two DSLR images have been rescaled to have the proper color balance, with a proportion of 1:2.22:2.86.

As you can see, the one minute exposure has the inner coma completely saturated, and we are going to use the 15 second exposure to recover information lost by the camera's limitations.

The first step is to calculate the fitting factor between the two images. To do this, we need to know the illumination of three regions of the images: two differently illuminated zones of the comet, and the sky background level.

We create two small previews in the one minute image that will serve as the illumination references for the comet. It's important to avoid overly bright pixels (due to possible non-linearity of the sensor, especially with ABG-enabled ones) and saturated stars. For a better view of the image, we can adjust the ScreenTransferFunction, as pictured below:



We must take care to put the low-illumination preview in an area with sufficient signal, because this preview will have a much larger amount of noise in the short exposure image. In this case, the mean values for the previews in this image are:

_1min_high preview:

R: 0.2489940927
G: 0.3614411545
B: 0.4342339171

_1min_low preview:

R: 0.1453883686
G: 0.2159542766
B: 0.2605371721

Now, we must define one preview over a sky background region. To see this region clearly, we apply a rather aggressive STF:



OK, now we have defined the three regions we need, but we must compare them with the information in the fifteen second exposure. Just drag and drop the preview selector (the vertical tab with the preview identifier) to the view selector tray of the other image to duplicate the previews:



Convert these previews to independent images by dragging them over the background of the application. We can rename the identifiers of the new images, as seen below, and iconize them, because we won't need to look at these images anymore:



Now for the fun part: the maths. We will directly scale the one minute exposure to fit it to the fifteen second one. Obviously, we will use the PixelMath module. The equation we have to write, according to the identifiers we're using, is below:

((_1min-Med(_1min_bg))*((Avg(_15sec_high)-Med(_15sec_bg))-(Avg(_15sec_low)-Med(_15sec_bg)))/((Avg(_1min_high)-Med(_1min_bg))-(Avg(_1min_low)-Med(_1min_bg))))+0.05

This equation will multiply the one minute image by the fitting factor. Some notes on the equation:

For the comet regions, we calculate the average (Avg function) pixel value, because we want to know the total amount of light the camera is detecting. But for the background region, we calculate the median (Med function) value to prevent measurement errors due to noise and stars in the area.
In the equation, we apply the fitting factor to the background-subtracted image, and afterwards we add a small pedestal (here 0.05) to preserve all the information in the faintest areas of the image.
Of course, we must deactivate the “Rescale result” option!
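For reference, the arithmetic of the fitting factor can be sketched outside PixInsight. The following Python/NumPy snippet is only an illustration of the PixelMath expression above, not PixInsight code; the _1min statistics are the G-channel means measured earlier, while the 15 second statistics and both background medians are hypothetical values:

```python
import numpy as np

def rescale_long_exposure(img_1min, avg_high_1min, avg_low_1min, med_bg_1min,
                          avg_high_15s, avg_low_15s, med_bg_15s, pedestal=0.05):
    """Scale the background-subtracted long exposure to the short-exposure
    photometric scale, then add a small pedestal (the PixelMath equation)."""
    factor = (((avg_high_15s - med_bg_15s) - (avg_low_15s - med_bg_15s))
              / ((avg_high_1min - med_bg_1min) - (avg_low_1min - med_bg_1min)))
    return (img_1min - med_bg_1min) * factor + pedestal

# _1min values: measured G-channel means; 15 s values and backgrounds: made up.
img = np.array([0.10, 0.36, 0.90])
out = rescale_long_exposure(img, 0.3614, 0.2160, 0.0200,
                                 0.0950, 0.0580, 0.0050)
```

Note that the background medians cancel inside each difference, so the factor is simply the ratio of the comet's illumination range in the two exposures; with these numbers it comes out near 0.25, as you would expect from a 15 s / 60 s exposure ratio.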


We will send the result to a new image, named _1min_rescale:



The resulting image, below, is very dark, as we are multiplying it by roughly 0.25, and has the median values of its RGB channels at 0.05:



At this point, we are ready to join the two exposures. Or are we? To cover the saturated area of the longer exposure with the information from the short one, we can do a maximum operation. But doing this on the whole image is a very bad idea, because the fifteen second exposure has a lot more noise in the less illuminated areas than the one minute exposure. So we need a mask!

We only need to recover the information in the areas where at least one of the three RGB channels is saturated. The first step in making the mask is to calculate a black and white image where each pixel is the maximum of its RGB values. The equation in PixelMath is rather simple:

Max($target[0],$target[1],$target[2])

The output of the PixelMath instance will be a grayscale image, with the “HDR_Mask” identifier. We must apply this calculation to the original one minute exposure:



This is the resulting image:



Once we have the desired B/W image, we must decide where the illumination limit is, above which we will superpose the short exposure image. This can be accomplished with a curve transform. In this case, the limit will be at a pixel value of 0.7, with a transition zone of ±0.05. This transition is important to mitigate any small error in the fitting factor. Due to the threshold nature of this mask, I think it's better to make the curve with a linear interpolation:
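To illustrate the two mask steps together (the channel maximum and the linear threshold curve), here is a hedged Python/NumPy sketch; the 0.7 limit and ±0.05 transition are the values used above, while the pixel values are made up:

```python
import numpy as np

def hdr_mask(rgb, limit=0.7, transition=0.05):
    """Max of the RGB channels, then a linear ramp from 0 at limit-transition
    to 1 at limit+transition (the linearly interpolated curve above)."""
    m = np.max(rgb, axis=-1)   # equivalent of Max($target[0],$target[1],$target[2])
    return np.clip((m - (limit - transition)) / (2.0 * transition), 0.0, 1.0)

pix = np.array([[0.20, 0.30, 0.50],   # well below the limit -> mask 0
                [0.60, 0.70, 0.68],   # exactly at the limit -> mask 0.5
                [0.90, 0.95, 1.00]])  # saturated            -> mask 1
mask = hdr_mask(pix)
```

Pixels below 0.65 are fully protected, pixels above 0.75 fully exposed to the substitution, with a linear blend in between.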



After applying the curve transform, we have this image:



It's convenient to make the mask a bit smoother. This is easily done with the À Trous Wavelet tool, disabling the first layers:
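Disabling the first wavelet layers is essentially a low-pass filter on the mask. As a rough stand-in (a simple box blur, not the à trous transform itself), this Python/NumPy sketch conveys the idea:

```python
import numpy as np

def box_smooth(mask, radius=1):
    """Crude stand-in for suppressing the finest wavelet layers:
    a separable box blur of the mask, edge-padded."""
    k = 2 * radius + 1
    padded = np.pad(mask, radius, mode='edge')
    # horizontal pass, then vertical pass
    out = np.stack([padded[:, i:i + mask.shape[1]] for i in range(k)]).mean(axis=0)
    out = np.stack([out[i:i + mask.shape[0], :] for i in range(k)]).mean(axis=0)
    return out

step = np.zeros((6, 6))
step[:, 3:] = 1.0            # a hard mask edge
smooth = box_smooth(step)    # the edge becomes a gradual transition
```

The hard 0/1 edge is spread over a few pixels, which is what the smoothed mask needs to hide any residual seam between the two exposures.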



This is our final mask:



Finally, we can activate the mask on the rescaled one minute exposure and superpose the fifteen second image over it. To do this, we simply substitute the one minute exposure with the fifteen second one; it's important to subtract the background level of the fifteen second image and add the same pedestal (0.05) to fit it to the other image:
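Written out explicitly, the masked substitution blends the two images like this. This is an illustrative Python/NumPy sketch with hypothetical image values; in PixInsight the active mask performs this blend implicitly when the PixelMath expression is applied:

```python
import numpy as np

def superpose(long_scaled, short, med_bg_short, mask, pedestal=0.05):
    """Replace the saturated areas of the rescaled long exposure with the
    background-subtracted, pedestal-shifted short exposure, blended by mask."""
    short_fit = short - med_bg_short + pedestal   # _15sec - Med(_15sec_bg) + 0.05
    return mask * short_fit + (1.0 - mask) * long_scaled

long_img  = np.array([0.10, 0.30, 0.30])   # rescaled 1-minute data (hypothetical)
short_img = np.array([0.08, 0.28, 0.31])   # 15-second data (hypothetical)
m         = np.array([0.00, 0.50, 1.00])   # the HDR mask
result = superpose(long_img, short_img, 0.02, m)
```

Where the mask is 0 the long exposure survives untouched; where it is 1 the short-exposure data replaces it completely.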




This is our final result:



If we raise the midtones of the image, we can better see the whole dynamic range:


Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Compositing HDR Images, Second Part. -- English version
« Reply #1 on: 2007 November 10 14:15:26 »
Vicent,

Excellent tutorial!

The equation you used to combine the images, samples, and backgrounds looks a little different from what you explained in part 1.  I'm trying to figure out what the equation should look like when combining three separate images.  Could you give an example of the equation using any number of images and possibly show me what it would look like for three images?

Thanks,

Wade

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Compositing HDR Images, Second Part. -- English version
« Reply #2 on: 2007 November 10 16:34:54 »
Hi Wade,

thank you.


The difference between the two equations is that the second one is the first (the fitting factor) multiplied by the long exposure image, plus a small pedestal to avoid clipping in the shadows.

For combining three images, you have to repeat this process... Imagine you have 1', 4' and 16' exposures. First you calculate the fitting factor between the 1' and 4' exposures and combine them. Afterwards, you calculate the fitting factor between the combined 1'-4' image and the 16' one. Roughly, you will divide the 4' exposure by 4 and the 16' one by 16.

Of course, you must make a second mask for the saturated areas of the 16' image. This can seem complicated, but once you get some practice you will do the work more and more quickly.
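The repeated process can be summarized schematically. In this Python/NumPy sketch the fitting factor is approximated by the exposure-time ratio and the smooth mask by a hard saturation test, so it is only an outline of the real procedure described in the tutorial, not a replacement for it:

```python
import numpy as np

def combine_exposures(images):
    """images: list of (exposure_minutes, array), in any order.
    Returns a composite on the scale of the shortest exposure."""
    images = sorted(images, key=lambda p: p[0])     # shortest first
    base_t, result = images[0]
    for t, img in images[1:]:
        scaled = img * (base_t / t)                 # stand-in fitting factor
        saturated = img >= 1.0                      # where the longer exposure clips
        # keep shorter-exposure data only where the longer exposure saturates
        result = np.where(saturated, result, scaled)
    return result

imgs = [(1.0,  np.array([0.02, 0.96])),
        (4.0,  np.array([0.08, 1.00])),
        (16.0, np.array([0.32, 1.00]))]
hdr = combine_exposures(imgs)
```

The faint pixel ends up coming from the deepest unsaturated exposure (best signal-to-noise), while the bright pixel keeps the data from the shortest one.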

Sometimes, if the dynamic range is very high, you must consider doing the operations in 64 bits per channel, because after combining all the exposures you will end up applying a midtones transfer function to the whole image of perhaps 0.00001.

I know this is a tedious processing technique, but I'm sure you will rediscover astrophotography. ;-)


Good luck.
Vicent.

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
Compositing HDR Images, Second Part. -- English version
« Reply #3 on: 2007 November 10 21:59:47 »
Vicent,

> Finally, we can activate the mask on the one minute scaled exposure and superpose the fifteen second
> image over it. To do this, we simply must substitute the one minute exposure with the fifteen second one;
> it's important to subtract the background level of the fifteen second image and add a pedestal (here of
> 0.05), to fit it to the other image

After reading this sentence, it seems as though we are missing something in the PixelMath expression.  I don't see any mention of the 1-min exposure in the equation.  Every time I execute your example, I get the original short exposure returned plus the pedestal.  Did you forget to mention one of the processing steps?

Thanks,

Wade

Offline David Serrano

  • PTeam Member
  • PixInsight Guru
  • ****
  • Posts: 503
Compositing HDR Images, Second Part. -- English version
« Reply #4 on: 2007 November 11 01:53:05 »
Quote from: "vicent_peris"
I know this is a tedious processing technique


Hmm... doesn't this ring any bell? ;)
--
 David Serrano

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Compositing HDR Images, Second Part. -- English version
« Reply #5 on: 2007 November 11 02:05:25 »
Riiiiiiinggggg!!! Script Power!  :lol:
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Compositing HDR Images, Second Part. -- English version
« Reply #6 on: 2007 November 11 02:17:24 »
Quote from: "twade"
Vicent,

> Finally, we can activate the mask on the one minute scaled exposure and superpose the fifteen second
> image over it. To do this, we simply must substitute the one minute exposure with the fifteen second one;
> it's important to subtract the background level of the fifteen second image and add a pedestal (here of
> 0.05), to fit it to the other image

After reading this sentence, it seems as though we are missing something in the PixelMath expression.  I don't see any mention of the 1-min exposure in the equation.  Every time I execute your example, I get the original short exposure returned plus the pedestal.  Did you forget to mention one of the processing steps?

Thanks,

Wade


Wade,

are you talking about this expression?:

_15sec-Med(_15sec_bg)+0.05


Only the 15 second image should appear in this expression, because we are simply going to substitute the 1 minute scaled image with the 15 second image, but only in the saturated areas! So, first you must activate the mask over the 1 minute scaled image, and then apply the PixelMath instance to this image.

Of course, you think the result you are viewing is the short exposure. :-) And it is! But, if you raise the midtones level, you will find the longer exposure information in your new image. ;-)


Regards,
Vicent.

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Compositing HDR Images, Second Part. -- English version
« Reply #7 on: 2007 November 11 07:05:56 »
Hello,

I have edited the text to correct some errors in it.


Best regards,
Vicent.

Offline David Serrano

  • PTeam Member
  • PixInsight Guru
  • ****
  • Posts: 503
Compositing HDR Images, Second Part. -- English version
« Reply #8 on: 2007 November 11 09:11:57 »
I've read this again, slowly, from a scripter's POV ;^). I think I'm leaving at least one thing up to the user, maybe two. This is the workflow I'm planning:

1.- Have the user open all images involved.
2.- Let them select the "high", "low" and "bg" previews on one of the images. This is the main thing I don't know how to implement. It seems very hard to me to find the proper areas to work on, even in a simple image like this. Imagine a M42 one ;).
3.- Here begins the work of the script. The interface is quite simple: just a list of textboxes where the user will enter the images and the exposure time of each one. The time will only be used to sort the images, although maybe it isn't absolutely required (the script could calculate the average of each image to discover it).
4.- Replicate the previews on all images.
5.- Calculate their statistics: averages of all "high" and "low" previews, and medians of all "bg" ones.
6.- Calculate fitting factors for each image and multiply them.
7.- Build mask from the longest exposure image. This is why it's convenient to sort the images, since we could work painlessly with a sorted array.
8.- Apply curves and ATW to it. That 0.7 bugs me... it's the "maybe second" thing to be left to the user.
9.- Use as mask and put the next (second longest exposure) image on top of this (longest).
10.- Discard mask. Return to step 7 as many times as necessary (using a different image each time, in decreasing order of exposure time).
11.- Issue a Blue Screen of Death to commemorate the success of the operation :).

Suggestions? Improvements? Flames?
--
 David Serrano

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Compositing HDR Images, Second Part. -- English version
« Reply #9 on: 2007 November 11 09:50:38 »
Quote from: "David Serrano"
I've read this again, slowly, from a scripter's POV ;^). I think I'm leaving at least one thing up to the user, maybe two. This is the workflow I'm planning:

1.- Have the user open all images involved.
2.- Let them select the "high", "low" and "bg" previews on one of the images. This is the main thing I don't know how to implement. It seems very hard to me to find the proper areas to work on, even in a simple image like this. Imagine a M42 one ;).
3.- Here begins the work of the script. The interface is quite simple: just a list of textboxes where the user will enter the images and the exposure time of each one. The time will only be used to sort the images, although maybe it isn't absolutely required (the script could calculate the average of each image to discover it).
4.- Replicate the previews on all images.
5.- Calculate their statistics: averages of all "high" and "low" previews, and medians of all "bg" ones.
6.- Calculate fitting factors for each image and multiply them.
7.- Build mask from the longest exposure image. This is why it's convenient to sort the images, since we could work painlessly with a sorted array.
8.- Apply curves and ATW to it. That 0.7 bugs me... it's the "maybe second" thing to be left to the user.
9.- Use as mask and put the next (second longest exposure) image on top of this (longest).
10.- Discard mask. Return to step 7 as many times as necessary (using a different image each time, in decreasing order of exposure time).
11.- Issue a Blue Screen of Death to commemorate the success of the operation :).

Suggestions? Improvements? Flames?




Hello, Scriptator,

I think the best option is to let the user define the illumination and background previews. Perhaps in the future we could do this automatically, but for the moment this would be great. The other parameter, the threshold level of the mask, should be defined by the user too, but you need another parameter: the amplitude of the transition zone. And perhaps the level of smoothing should be an option as well.

One problem... perhaps it's better, for the moment, to write the script to work with only two images. There is a small detail:

Imagine you have 40', 10' and 3' exposures. To combine all the images, you must define the image regions to join the 40' and 10' exposures. BUT you will need *different* image regions to join the combined 40'-10' image and the 3' exposure.

Of course, you can define three regions in the 40' image for joining the 40' and 10' ones, and two additional ones for comparing the illumination level between the 10' and 3' images.

We'll see... It's up to you, as you're the Scriptator...

I think it's easy to run the script two times: the first for the 40' and 10' exposures, and the second for the 40'-10' combination and the 3' exposure.


What do you think?
Vicent.