How is MultiscaleMedianTransform related to Deconvolution? Does it achieve its result by assuming some PSF (point spread function)? Or is it only useful for a different kind of blurring?
MMT does not use a PSF or any similar information about the image. It is a mathematical construct similar to a wavelet transform or a Fourier transform. Like the wavelet transform, the MMT decomposes the image into a set of detail layers plus a residual layer, following a scaling sequence that, unlike the WT's, can be arbitrary or even irregular (the WT is only defined for a dyadic sequence).
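The decomposition described above can be sketched in a few lines. This is only an illustrative toy, not the actual implementation: it uses plain square median windows and an example (non-dyadic) scale sequence, whereas the real tool uses the circular-like structures discussed below.

```python
import numpy as np
from scipy import ndimage

def mmt_decompose(image, scales=(1, 2, 4, 8)):
    """Decompose an image into detail layers plus a residual using
    successive median filters (a sketch of the multiscale median
    transform; the scale sequence is illustrative and, unlike a
    wavelet transform's, need not be dyadic)."""
    layers = []
    smoothed = image.astype(float)
    for s in scales:
        # Median filter with a (2s+1)x(2s+1) square window; the real
        # implementation uses circle-approximating structures instead.
        next_smoothed = ndimage.median_filter(smoothed, size=2 * s + 1)
        layers.append(smoothed - next_smoothed)  # detail layer at scale s
        smoothed = next_smoothed
    layers.append(smoothed)  # residual (large-scale) layer
    return layers

# The transform is redundant: summing all layers recovers the image exactly.
img = np.random.rand(64, 64)
layers = mmt_decompose(img)
assert np.allclose(sum(layers), img)
```

The exact reconstruction by simple summation is what makes per-layer manipulation (sharpening, denoising) straightforward.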
MMT is useful for two types of tasks:
- Image sharpening. This is achieved by multiplying all coefficients in one or more detail layers by a constant k > 1. The MMT is well suited to sharpening at small scales; it does not work correctly at medium and large scales due to accumulated changes in the morphology of large-scale structures. Two of MMT's properties make it particularly well suited to this task: it does not generate ringing artifacts, and it is able to isolate small-scale structures within a single layer. Note that the wavelet transform has neither of these properties.
- Image denoising. This is achieved by thresholding the coefficients in one or more layers and multiplying nonsignificant coefficients by a constant 0 <= k < 1. The MMT is very good at this task due to the second property I mentioned above: its ability to isolate small-scale structures within a single layer. In our implementation we have added a local adaptive noise reduction filter to remove small-scale noise structures that cannot be selected by thresholding (outliers).
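Both layer operations can be sketched together. This is a hypothetical helper, not the tool's actual interface: `bias` and `threshold` are illustrative names for the per-layer constants (k > 1 for sharpening, a significance threshold plus 0 <= k_noise < 1 for denoising), and the adaptive outlier filter mentioned above is not modeled.

```python
import numpy as np

def process_layers(layers, bias=None, threshold=None, k_noise=0.0):
    """Sharpen and/or denoise a list of detail layers (sketch).
    bias: dict mapping layer index -> k > 1 to amplify that layer.
    threshold: dict mapping layer index -> amplitude below which a
    coefficient is considered nonsignificant and scaled by k_noise."""
    out = [layer.copy() for layer in layers]
    if bias:
        for i, k in bias.items():
            out[i] *= k                       # sharpening: k > 1 boosts detail
    if threshold:
        for i, t in threshold.items():
            noise = np.abs(out[i]) < t        # nonsignificant coefficients
            out[i][noise] *= k_noise          # 0 <= k_noise < 1 attenuates noise
    return out

# Reconstruction is simply the sum of the processed layers:
# result = sum(process_layers(layers, bias={0: 1.5}, threshold={0: 0.01}))
```

With k_noise = 0 the nonsignificant coefficients are removed outright; intermediate values give a gentler noise reduction.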
Do you have a reference on the WWW that helps to understand what is going on in this module?
Fortunately the second reference that I gave is fully available online:
http://www.multiresolution.com/cupbook.html

On the page above you can download free PDF versions of the first two books by Starck et al. (they are preprint versions, but the differences from the printed books are purely cosmetic). Download the first book (Image Processing and Data Analysis: The Multiscale Approach) and jump to section 1.5 (page 45), where the multiresolution median transform (which we prefer to call multiscale rather than multiresolution) is described. In theory it is a very simple algorithm, but a useful implementation is not.
In our implementation we have introduced several changes:
* The median filter cannot be implemented with a square structuring element. Such a naive implementation has very little practical applicability. The reason is simple: with a square structure we generate artifacts around every image structure that does not look like a perfect square. The ideal structure for any kind of image, and especially for most astronomical images, is a perfect circle. However, this poses two problems:
1. We cannot represent a circle accurately with small kernels of 3, 5, or 7 pixels; only at 11 or 13 pixels can we start achieving reasonable renditions of circular structures. Our solution has been to use multiway structuring elements. A multiway structure consists of two or more substructures, each applying a morphological operator with a specific spatial orientation and shape. The result is the same operator applied to the partial results from each substructure. After some experimentation we have been able to approximate circles quite well with just two ways, even for 3x3 filters. We still have to explore this in more depth, though, and I hope future versions of this tool will be much better in this regard.
2. Arbitrarily shaped, multiway structuring elements cannot, to our knowledge, be applied with accelerated median filtering algorithms. We are restricted to naive O(n^2*N) nonseparable implementations to apply median filters with accurate control over the shapes of the structures.
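A two-way median filter along the lines of point #1 can be sketched as follows. The two 3x3 substructures shown (a plus and a diagonal cross) are my own illustrative guess at circle-approximating shapes; the actual substructures used by the tool are not given in the text.

```python
import numpy as np
from scipy import ndimage

def multiway_median(image, footprints):
    """Multiway median filter (sketch): apply a median filter with each
    substructure (footprint), then combine the partial results with the
    same operator, i.e. a pixelwise median across the partial results.
    Note that with only two ways, the pixelwise median of two partial
    results reduces to their average."""
    partial = [ndimage.median_filter(image, footprint=fp) for fp in footprints]
    return np.median(np.stack(partial), axis=0)

# Two hypothetical 3x3 substructures whose combination is more
# isotropic (circle-like) than a single 3x3 square window.
plus = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=bool)
cross = np.array([[1, 0, 1],
                  [0, 1, 0],
                  [1, 0, 1]], dtype=bool)

img = np.random.rand(32, 32)
filtered = multiway_median(img, [plus, cross])
```

Because each substructure is an arbitrary boolean footprint, `median_filter` must gather and sort the covered samples per pixel, which is exactly the O(n^2*N) cost point #2 refers to.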
* Note that problem #2 above leads to an impractical implementation even for moderate filter sizes of about 15x15 elements. For 25x25 and larger filters, the rapid growth of calculation times makes the task unmanageable. For this reason we have modified the algorithm to include decimation: the image is successively downsampled at growing scales so that the size of the median filter stays constant. However, this too is a naive approach if applied indiscriminately: if we reduce the image to compute small scales (say, below 8 pixels), interpolation errors and interpolation-induced ringing ruin the result completely. In our implementation we don't reduce the image until the required median filter has more than 11 elements. For structures up to 11x11 we use accurate multiway median filters; from 13 elements upward we downsample the image proportionally so that a 9x9 structure suffices, apply the median filter, and upsample the filtered image to its original dimensions, so the transform remains redundant to a high degree. To reduce interpolation errors we use Mitchell-Netravali cubic filter interpolation for size reduction and cubic spline interpolation for upsampling. Interpolation contaminates the algorithm with a small amount of convolution (= ringing), but this only happens at relatively large scales and its effect is generally negligible. Again, we'll improve this part of our implementation in future versions.
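The decimation scheme can be sketched like this. The thresholds follow the text (direct filtering up to 11 elements, a 9x9 working filter beyond that), but the interpolation calls are simple scipy stand-ins: `ndimage.zoom` with cubic splines is used for both directions here, whereas the actual implementation uses a Mitchell-Netravali filter for reduction.

```python
import numpy as np
from scipy import ndimage

MAX_DIRECT = 11  # largest filter applied at full resolution
WORK_SIZE = 9    # fixed filter size used on the decimated image

def median_at_scale(image, diameter):
    """Median-filter `image` at the given structure diameter, decimating
    at large scales so the filter size stays bounded (sketch)."""
    if diameter <= MAX_DIRECT:
        # Small scales: filter at full resolution (square window here;
        # the real tool uses multiway circle-approximating structures).
        return ndimage.median_filter(image, size=diameter)
    # Large scales: downsample so a WORK_SIZE filter covers the
    # requested scale, filter, then upsample back to original size.
    factor = WORK_SIZE / diameter
    small = ndimage.zoom(image, factor, order=3)
    filtered = ndimage.median_filter(small, size=WORK_SIZE)
    # Per-axis zoom factors restore the exact original dimensions.
    back = np.array(image.shape) / np.array(filtered.shape)
    return ndimage.zoom(filtered, back, order=3)

img = np.random.rand(64, 64)
out = median_at_scale(img, 25)  # decimated path
assert out.shape == img.shape
```

Since the per-pixel cost of the naive median grows with the square of the filter diameter, capping the working filter at 9x9 keeps each layer's cost roughly proportional to the (decimated) image size.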
You said you want to know...