Hi Jose (if you prefer Spanish, let me know).
Regarding ATrousWaveletTransform, I was surprised to see that if I split an image into low-frequency (64-pixel layer) and high-frequency (32-pixel layer and up) components and then add these two components using PixelMath, the result is a different image from the original. I am not talking about rounding errors; I am talking about visible differences.
First, some theoretical background: the à trous wavelet transform is what is called a redundant transform. This means that each scale layer has the same dimensions as the original image, instead of being a decimated set of wavelet coefficients. This way, the inverse à trous transform is simply the sum of all the layers.
Now, internally each layer is a new floating-point image, with positive and negative values. If you write a PCL process module or a PJSR script, separating small scales and large scales into two different floating-point images with ATWT and then adding them together should yield the original image as the result.
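To see why the reconstruction is exact, here is a minimal sketch of the redundant decomposition in Python with NumPy/SciPy (illustrative only: the function name, the layer count and the B3-spline kernel are my assumptions, not PCL/PJSR code):

    import numpy as np
    from scipy.ndimage import convolve1d

    # 1-D B3-spline scaling kernel, the usual choice for the a trous algorithm.
    B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

    def atrous_decompose(image, n_layers):
        """Return [w1, ..., wN, residual]; every layer has the image's dimensions."""
        smooth = image.astype(np.float64)
        layers = []
        for j in range(n_layers):
            # Dilate the kernel by inserting 2**j - 1 zeros between taps ("a trous").
            kernel = np.zeros((len(B3) - 1) * 2**j + 1)
            kernel[::2**j] = B3
            next_smooth = convolve1d(convolve1d(smooth, kernel, axis=0, mode='reflect'),
                                     kernel, axis=1, mode='reflect')
            layers.append(smooth - next_smooth)   # detail at scale 2**j; can be negative
            smooth = next_smooth
        layers.append(smooth)                     # large-scale residual
        return layers

    img = np.random.rand(64, 64)
    layers = atrous_decompose(img, 5)
    # The inverse transform is just the sum of the layers: exact up to float rounding.
    assert np.allclose(sum(layers), img)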
Now, with the ATWT process "frontend", something different happens. This tool has been designed for image enhancement, not for mathematical handling. It tries to output data that has a visual meaning (i.e., in the 0 to 1 range); everything outside this range is clipped, truncated. So, if you disable some layers, you'll get a lot of zeros (and maybe some ones) that represent data loss. Because of this, simply adding the results of disabling some layers won't work.
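A hypothetical single-pixel example makes the data loss obvious:

    # One pixel: the small-scale layer is negative, the large-scale one is not.
    small, large = -0.2, 0.7
    original = small + large                    # 0.5, the true pixel value
    clipped_small = min(max(small, 0.0), 1.0)   # truncated to 0.0
    clipped_large = min(max(large, 0.0), 1.0)   # stays 0.7
    print(clipped_small + clipped_large)        # 0.7, not 0.5: data was lost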
BTW, if you preview a particular layer, you'll see that it has an offset, to properly represent negative values on the screen. This way, 0.5 on screen is in fact 0, so a layer value of -0.1, for instance, is displayed as 0.4.

The workaround: if you want to get only the first 4 layers in one image, for example, you must do the opposite: disable them and create a large-scale image, since data loss is much less likely there. Do that to a clone of the original image. Now, using PixelMath, subtract the large-scale image from the original one. If you want to keep all the data intact, but don't care too much about pixel value shifts or scaling (additive and multiplicative effects), then rescale the image. If you want to have full control over the data, apply your own offset and amplification, with the rescale option disabled.

A third option is to create two high-frequency (small-scale) images, one with the bright features and another with the dark features. Just don't rescale, and invert the order of the operand images. Then, to reconstruct the original image, do Large + Small_bright - Small_dark, and you don't have to worry about shifts or other factors.
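For instance, here is a NumPy sketch of why that third option reconstructs exactly (the image names and the stand-in large-scale image are hypothetical; np.clip models the tool's truncation to the [0, 1] range):

    import numpy as np

    rng = np.random.default_rng(1)
    original = rng.random((32, 32))
    large = 0.9 * original + 0.05                       # stand-in large-scale image, in [0, 1]

    # The two truncated small-scale images; inverting the operand order
    # captures the dark features as positive values.
    small_bright = np.clip(original - large, 0.0, 1.0)  # bright features
    small_dark   = np.clip(large - original, 0.0, 1.0)  # dark features

    # Each signed pixel survives in exactly one of the two truncated images,
    # so Large + Small_bright - Small_dark recovers the original exactly.
    reconstructed = large + small_bright - small_dark
    assert np.allclose(reconstructed, original)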
Hope this helps.