When HDRComposition (HDRC) fails to build a composite image, it is for one of two reasons:
(1) The images are not correctly aligned, which causes the LinearFit step inside HDRC to fail,
or:
(2) One of the images cannot add more data to the "HDR pyramid". In other words, one of the images does not provide new low-exposure data to cover the previously combined long-exposure data. This happens, for example, if you try to build an HDR composition from images acquired with the same exposure time and camera sensitivity.
Assuming that (1) doesn't apply, and that (2) isn't happening because you're combining a set of similarly exposed images, then you probably need to adjust the binarizing threshold parameter. Try reducing it.
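To see what the binarizing threshold controls, here is a toy sketch (not HDRC's actual code): the threshold selects which pixels of the already-combined long-exposure image are considered bright enough to be replaced by short-exposure data. Lowering it marks more pixels for replacement. The array shapes and threshold values below are made up for illustration.

```python
import numpy as np

# Toy long-exposure frame with normalized [0, 1] pixel values.
rng = np.random.default_rng(0)
image = rng.random((100, 100))

def saturation_mask(img, threshold):
    """Binarize: pixels above the threshold become candidates for
    replacement by short-exposure data."""
    return img > threshold

# A lower threshold marks more pixels, so more of the short exposure
# contributes to the composite.
n_high = saturation_mask(image, 0.8).sum()
n_low = saturation_mask(image, 0.5).sum()
print(n_high, n_low)
```

With a uniform random frame, the 0.5 threshold marks roughly 50% of the pixels while 0.8 marks roughly 20%, which is why reducing the threshold can rescue a composition where too few pixels qualify.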
Edit: Actually, HDRComposition is giving you a clue about what's happening:
y0 = +0.001178 + 0.883969·x0
y1 = +0.000447 + 0.898127·x1
y2 = +0.000641 + 0.876338·x2
These linear fit slopes are close to 0.9, which means the second image in the set is very similar to the first one (only about 10% of the second image is being used in the final HDR composite). The third image will then probably yield coefficients very close to 1, which triggers condition (2) above.
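The reasoning above can be sketched numerically. This is a toy example, not HDRComposition's internals: fit y = a + b·x between two frames of the same scene by ordinary least squares. When the second frame's exposure is nearly equal to the first, the slope b comes out near 1 and the frame contributes almost no new dynamic-range data; a genuinely shorter exposure produces a slope well below 1. The exposure factors and noise level are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random(10_000)  # "true" scene flux, normalized [0, 1]

def fit_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    b, a = np.polyfit(x, y, 1)  # degree-1 fit: [slope, intercept]
    return b

long_exp = scene                                            # reference frame
similar = 0.9 * scene + rng.normal(0, 0.002, scene.size)    # nearly same exposure
shorter = 0.25 * scene + rng.normal(0, 0.002, scene.size)   # 4x shorter exposure

print(round(fit_slope(long_exp, similar), 2))  # slope near 0.9, like the log above
print(round(fit_slope(long_exp, shorter), 2))  # slope well below 1: real new data
```

A slope of ~0.9, as in the coefficients HDRC printed, says the two frames differ by only ~10% in signal, so the tool has almost nothing new to fold into the pyramid.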