Hi David,
As Carlos says, all is perfectly normal. If you perform a wavelet decomposition with just one detail layer, the first layer and the residual layer should be identical, no matter the scaling sequence (ss in your code), as long as you don't change the scaling function (k in your code).
This is logical: the first wavelet layer contains only structures with characteristic sizes less than or equal to one pixel, and these structures are invariant with respect to the scaling sequence.
The true fun starts with two or more layers. Then the scaling sequence determines the distance in pixels between two successive wavelet scales.
Now, I must caution you against using linear scaling sequences (that is, when ss > 0 in your code), especially with large values, say more than 6 to 10 pixels. This is allowed in our implementation, and it works correctly in the sense that you can always perform an inverse transform (wavelet reconstruction) to recover the original, untransformed image. However, if you modify or suppress a wavelet layer obtained with linear scaling, you can get artifacts in the reconstructed image. You can verify this very easily with the standard ATrousWaveletTransform process. The artifacts manifest as increased brightness following regular horizontal and vertical patterns.
The reason for this behavior is that the à trous discrete wavelet transform algorithm uses a dyadic scaling sequence (1, 2, 4, ..., 2^n) to decompose the image into wavelet layers. If we use a different sequence, we are not performing a true à trous transform. As long as we don't modify any wavelet layer, the algorithm is perfectly reversible with any scaling sequence: each detail layer is just the difference between two successive smoothed versions of the image, so summing all detail layers plus the residual recovers the original exactly. However, if we modify a single wavelet coefficient with a non-dyadic sequence, the reconstruction algorithm no longer works as a true inverse transform, but as an approximation. These approximations work pretty well as long as we don't depart too much from the true dyadic sequence.
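To make the telescoping-sum argument concrete, here is a minimal 1-D sketch in standard C++. This is not PCL code: the B3-spline kernel {1,4,6,4,1}/16 and the mirror boundary conditions are assumptions for illustration only. One detail layer is produced per dilation step, and the sum of all detail layers plus the residual reproduces the original signal exactly, whether the step sequence is dyadic or not:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One à trous ("with holes") smoothing step: convolve with the B3-spline
// kernel {1,4,6,4,1}/16, sampling neighbors at a distance of 'step' pixels.
// Edges use mirror (reflection) boundary conditions.
std::vector<double> Smooth( const std::vector<double>& c, int step )
{
   static const double k[5] = { 1.0/16, 4.0/16, 6.0/16, 4.0/16, 1.0/16 };
   int n = int( c.size() );
   std::vector<double> out( n );
   for ( int i = 0; i < n; ++i )
   {
      double s = 0;
      for ( int j = -2; j <= 2; ++j )
      {
         int p = i + j*step;
         while ( p < 0 || p >= n )            // mirror reflection at both edges
            p = (p < 0) ? -p : 2*(n - 1) - p;
         s += k[j+2]*c[p];
      }
      out[i] = s;
   }
   return out;
}

// Decompose a 1-D signal into detail layers plus a residual, one layer per
// dilation step: d_j = c_{j-1} - c_j, where c_j is the smoothed approximation
// at scale j. The last element returned is the residual layer. A dyadic
// sequence uses steps {1, 2, 4, ..., 2^(n-1)}; any other sequence still
// reconstructs exactly, since c0 = d1 + d2 + ... + dn + residual by
// construction.
std::vector<std::vector<double>> Decompose( const std::vector<double>& c0,
                                            const std::vector<int>& steps )
{
   std::vector<std::vector<double>> layers;
   std::vector<double> c = c0;
   for ( int s : steps )
   {
      std::vector<double> cs = Smooth( c, s );
      std::vector<double> d( c.size() );
      for ( std::size_t i = 0; i < c.size(); ++i )
         d[i] = c[i] - cs[i];
      layers.push_back( d );
      c = cs;
   }
   layers.push_back( c ); // residual (large-scale) layer
   return layers;
}
```

Note also that the first detail layer depends only on the first dilation step, which is always one pixel; that is why a one-layer decomposition is identical for any scaling sequence. The approximation errors discussed above only appear once you alter a layer and reconstruct with non-dyadic steps.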
The linear scaling feature in our implementation (which AFAIK is unique) is useful to process high-resolution planetary images, because it allows us to perform decompositions into scales of, for example, 1, 2, 3, 4, ... or 1, 3, 5, 7, 9, ... pixels, which allow for a better separation of small structures. With these sequences, or even somewhat larger ones, there is no problem, or just negligible errors, because they are reasonably close to a dyadic sequence, so the approximations are quite good. However, with larger increments (80 pixels is way too large), you're likely to get artifacts if you modify some wavelet scales.
Now your second question. The wavelet detail layers (that is, all layers except the final residual layer) are not true images. They contain wavelet difference coefficients, which can take positive, zero and negative values. To show a detail layer as an image, you must first rescale all coefficients to the normalized [0,1] range. The easiest (although not necessarily optimal) way to achieve this is to rescale the layer automatically with the Image.rescale() method. In general, when you visualize a wavelet layer in this way, you see a weird image with very low contrast. Another way to display wavelet layers is to assign false colors to several ranges of coefficients. For example, you can display positive, zero and negative coefficients using proportional brightness levels of different colors.
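For reference, here is what such an automatic rescaling amounts to, as a standard C++ sketch. RescaleToUnitRange is a hypothetical helper written for this post; it mimics Image.rescale() in spirit (a linear mapping of the minimum to 0 and the maximum to 1), not any actual PCL or PJSR routine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Linearly map arbitrary wavelet coefficients to the normalized [0,1] range.
// After rescaling, negative coefficients fall below the level that zero maps
// to, and positive coefficients fall above it.
std::vector<double> RescaleToUnitRange( const std::vector<double>& d )
{
   std::vector<double> out( d.size() );
   if ( d.empty() )
      return out;
   auto mm = std::minmax_element( d.begin(), d.end() );
   double lo = *mm.first, hi = *mm.second;
   if ( hi > lo )   // if all coefficients are equal, leave the result at zero
      for ( std::size_t i = 0; i < d.size(); ++i )
         out[i] = (d[i] - lo)/(hi - lo);
   return out;
}
```

A false-color display would instead classify each coefficient as negative, zero or positive before assigning a color, rather than mapping all values onto a single gray ramp.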
<jokes>
#include <cstdio>       // puts()
#include <pcl/Thread.h>

class MyProblemGenerator : public Thread
{
   virtual void Run()
   {
      while ( true ) puts( "problems" );
   }
};

MyProblemGenerator* noProblem = new MyProblemGenerator;
noProblem->Start();
Sleep( 1.0 );
noProblem->Kill(); // I can kill your problems :)
delete noProblem, noProblem = 0;
<disclaimer>
Do not use Thread::Kill() for serious purposes. Don't kill your threads. They don't deserve that.
</disclaimer>
</jokes>