multiscale linear vs median?

DrJimSok

Well-known member
I'm wondering if there isn't some documentation regarding what MLT and MMT are doing under the hood. I understand wavelet transforms at a reasonable level, so that's not the problem, but once a multi-resolution transformation is accomplished, we end up with an image of coefficients... positives and negatives... then what is linear about MLT and what is median about MMT? Is there a definition of which wavelet basis is being used in these? Maybe not so important, but it would be nice to know.

So even w/o a complete exposition of the algorithm(s)... then a more heuristic understanding of the linear vs median nature of these tools might help me understand where one is likely better than the other, how/when to apply, and stuff like that...

Jim
 
Very good question. I have to admit I have used these somewhat randomly. Explanations would be great.
 
The blurbs Juan wrote back when these tools were released are brief, but at least it's something:


rob
 
I found these tutorials about MMT:

https://www.pixinsight.com/tutorials/mmt-noise-reduction/ (adaptation from https://pixinsight.com/forum/index....diantransform-noise-reduction-example-1.3427/ ) and

Bernd
 
OK... this has all been very good input. I've read through most of the posts about MLT and MMT and think I'm beginning to understand.

Both start w/ a wavelet decomposition of the image, using some basis / family of wavelets (I don't know enough about the various bases to understand their relative merits)... and it probably doesn't matter at this level what the choices are... they just work... which ain't bad...

This produces an image of wavelet coefficients, which indicate how much any given wavelet contributes to the image at and around that pixel. The coefficients can be positive or negative and probably distribute nicely around zero... but maybe there's a bias in one direction or the other based on image properties...??
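To make the decomposition concrete, here's a quick NumPy/SciPy sketch of the à trous ("starlet") algorithm with the B3-spline kernel — the textbook multiscale transform in astronomical image processing. This is the standard algorithm, not a claim about PI's actual internals:

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_layers(img, n_scales=3):
    """A trous (starlet) decomposition: each layer holds the detail
    coefficients at one dyadic scale; the residual is the large-scale
    background. img == sum(layers) + residual, exactly."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline kernel
    layers, smooth = [], img.astype(float)
    for j in range(n_scales):
        # Dilate the kernel with zeros ("holes") to double its reach each scale
        k = np.zeros(4 * 2**j + 1)
        k[::2**j] = h
        s = convolve1d(convolve1d(smooth, k, axis=0, mode='reflect'),
                       k, axis=1, mode='reflect')  # separable 2-D smoothing
        layers.append(smooth - s)                  # detail at scale j
        smooth = s
    return layers, smooth

img = np.random.default_rng(0).normal(size=(64, 64))
layers, residual = starlet_layers(img)
print(np.allclose(sum(layers) + residual, img))  # True: exact reconstruction
```

Each `layers[j]` is exactly the "image of coefficients" described above — signed values at dyadic scale 2^j — and summing the layers plus the residual reproduces the input by construction.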

ASSUMPTION: legitimate structure in the image is represented in the transform coefficients by larger absolute magnitudes... if this isn't correct, then my brain is in trouble... can someone confirm??

So in either method (MLT or MMT), the coefficients that are below the threshold value (which is given in MAD units, which is good) are replaced by calculated values (I assume the calculation only uses 'neighboring' coefficients that are above the MAD threshold value), and this is where the two methods diverge...
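As a sketch of how MAD-based thresholding typically works in multiscale denoising — the 3-sigma cutoff and the hard "damp or zero" rule below are my guesses at a plausible scheme, not PI's documented internals. Note that in the standard formulation, sub-threshold coefficients are attenuated or zeroed rather than recomputed from their neighbors:

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise estimate: MAD scaled to Gaussian sigma."""
    return 1.4826 * np.median(np.abs(coeffs - np.median(coeffs)))

def threshold_layer(coeffs, k=3.0, amount=1.0):
    """Attenuate coefficients whose magnitude is below k*sigma.
    amount=1 removes them entirely; amount<1 only damps them
    (a guess at what the 'amount' slider does)."""
    sigma = mad_sigma(coeffs)
    weak = np.abs(coeffs) < k * sigma
    out = coeffs.copy()
    out[weak] *= (1.0 - amount)
    return out

rng = np.random.default_rng(1)
layer = rng.normal(0, 1, (128, 128))  # pure-noise coefficients
layer[40, 40] = 50.0                  # one "real structure" coefficient
cleaned = threshold_layer(layer, k=3.0, amount=1.0)
print(cleaned[40, 40] == 50.0)               # True: strong coefficient kept
print(np.count_nonzero(cleaned) < layer.size)  # True: weak ones zeroed
```

This also illustrates the ASSUMPTION above: real structure survives precisely because its coefficients have large absolute magnitude relative to the MAD-estimated noise.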

MLT seems to use a convolution around that 'bad' coefficient location (pixel) in the image, so it uses some convolution kernel calculated over the local area... convolutions can seem complex, formally being an integration over the function and the kernel, but they are linear, so that conv(A + B) = conv(A) + conv(B) and conv(c * A) = c * conv(A)... now I didn't know this, but these linear denoising or structure enhancements allow the Gibbs phenomenon, so may be subject to significant ringing artifacts... hmmm... we learn something new every day...
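The two linearity identities are easy to verify numerically; a minimal check with an arbitrary box-blur kernel:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(2)
A = rng.normal(size=(32, 32))
B = rng.normal(size=(32, 32))
K = np.ones((3, 3)) / 9.0  # any kernel works; box blur for simplicity

conv = lambda x: convolve(x, K, mode='reflect')

# The two defining properties of a linear operator:
print(np.allclose(conv(A + B), conv(A) + conv(B)))  # superposition: True
print(np.allclose(conv(2.5 * A), 2.5 * conv(A)))    # homogeneity: True
```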

MMT seems then to use a morphological median around that 'bad' coefficient location... this is a non-linear function... so it's 'not' subject to the Gibbs phenomenon... hmmm... no ringing !?? ... now I don't know if ANY AND ALL non-linear transforms (the median is just the one used in MMT) avoid ringing, but that's a more mathematical discussion... interesting but probably not useful in the present discussion.
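A small 1-D demo of the difference: a linear kernel with negative lobes overshoots at a hard edge (Gibbs-style ringing), while a median filter can never produce a value outside the local input range — and it fails the superposition test, confirming it's non-linear:

```python
import numpy as np
from scipy.ndimage import median_filter, convolve1d

step = np.repeat([0.0, 1.0], 32)                 # ideal hard edge
k = np.array([-0.1, -0.2, 1.6, -0.2, -0.1])      # linear kernel w/ negative lobes
linear = convolve1d(step, k, mode='reflect')
med = median_filter(step, size=5, mode='reflect')

print(linear.max() > 1.0)                        # True: overshoot (ringing)
print(med.min() >= 0.0 and med.max() <= 1.0)     # True: median stays in range

# The median is not linear: superposition fails on generic inputs
rng = np.random.default_rng(3)
A, B = rng.normal(size=64), rng.normal(size=64)
print(np.allclose(median_filter(A + B, 5),
                  median_filter(A, 5) + median_filter(B, 5)))  # False
```

The key point is that a rank-order operation like the median only ever selects one of the input values, so it cannot overshoot; any convolution whose kernel has negative lobes can.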

Can anyone on the PI team weigh in on the neighborhood over which the convolution and median are calculated in MLT and MMT respectively? Also, what kernel is used in MLT? (If I know the size for MMT, I assume it's almost certainly a circular structuring element in the morphological operator)...

OK... so to be honest... I'd be willing to bet my descriptions above are closer to the true algorithms than my previous understandings, but I'm sure I'm wrong at some level(s)... If anyone wants to further edgumacate me on this, that'd be great. I'm one of those folks who is EXTREMELY forgetful, so it takes me a LONG time to learn w/o some understanding of what I'm working with... but if I understand the concepts underlying anything, that stuff stays with me forever and I can use those to figure out how to work things...

OK... then... what's the current understanding of the proper roles of the two tools? I kinda thought MLT was preferred for noise reduction, and I've even been using it to enhance structures... but maybe that's not right... maybe I / we should migrate to MMT for these purposes !! What about other uses, like breaking an image into ones containing only certain structure ranges?

Thanx everybody for weighing in on this and helping me understand. Now for the fun stuff... I'm gonna take my latest image and test the two tools and see if the results conform with my new concepts and understandings !!

CYa
Jim
 
Been doing some tests... using the same threshold and amount values for both MLT and MMT (I use amount = 1 for the first layer, then tend to go down to 0.85-ish for layer 2 and still lower for layer 3, then stop at layer 3...). The obvious 'salt n' pepper' noise that can be seen in the original image is nicely removed by MLT... in MMT, the largest of the single-pixel salt n' pepper noise pixels end up not being removed, so they yield isolated 'hot' and 'cold' pixels in the MMT images. I can remove these by cranking up the 'adaptive' control in MMT, which is apparently what that control is for... but it seems weird... like those extra hot / cold salt n' pepper pixels are within the threshold MAD value for MLT (so nicely removed), but not within the threshold for MMT (in which they end up standing out like sore thumbs)... they can't be both within and outside the threshold, so this is weird... clearly something I don't understand... clearly my understanding of the algorithm isn't correct...

It would be very nice to have an option w/ both MLT and MMT to 'see' the coefficient image and/or the MAD image so one could probe MAD values and know how best to set thresholds... maybe even a histogram of these MAD values... as I posted before wrt histograms in HistTrans and CurvesTrans, it would be nice to have a logarithmic scale option so one can better explore the full range of high and low values... linear scale tends to hide the low values unless you blow it way up... but then you lose the high values...

FYI... in both these tests, I used a reasonably nice stacked R-band image, no mask was used, and there were locations of both background and real structure.

Thanx
Jim
 
Jim,

When I tried to explain how MLT/MMT work on a phenomenological level, I created a gradient structure field using the PixelMath expression:

sin( 0.5 * (X()*180/pi())^2) + sin( 0.5 * (Y()*180/pi())^2)

(Create a large image and apply this function to it. You can change the "0.5" coefficient to vary things.)
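For anyone who wants to play with the same test pattern outside PI, here's a NumPy rendition, under the assumption that PixelMath's X() and Y() return normalized coordinates in [0, 1]:

```python
import numpy as np

# Equivalent of: sin(0.5*(X()*180/pi())^2) + sin(0.5*(Y()*180/pi())^2)
h = w = 512
y, x = np.mgrid[0:h, 0:w]
X, Y = x / (w - 1), y / (h - 1)   # PixelMath-style normalized coordinates
c = 0.5                           # vary this coefficient, as suggested above
img = np.sin(c * (X * 180 / np.pi) ** 2) + np.sin(c * (Y * 180 / np.pi) ** 2)

print(img.shape)                  # (512, 512)
print(-2.0 <= img.min() and img.max() <= 2.0)  # True: sum of two sines
```

The quadratic argument makes the spatial frequency increase across the frame, so one image sweeps through many structure scales at once — handy for seeing which MLT/MMT layer picks up what.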

Then you can apply MMT and MLT to this image and see at each scale and brightness the effects.
(This demonstration is in my video tutorial stuff.)

I don't know if this is useful for your examination...but you might find it entertaining?



-adam
 
Been looking more into what MMT might be doing... plus reading some stuff that gives hints or indications but isn't definitive. I now think MMT might perform its 'multiscale' decomposition using morphological operations (tophat seems most obvious; a white tophat is the difference between the original image and its opening, and a black tophat is the difference between the closing and the original image). So it might be that MMT is NOT using wavelets... These tophat transforms tend to leave structures that are both smaller than the structuring element (the small shape called 'Structuring Element' in PI's Morph XForm) and brighter / darker than their surroundings.

I've used morphological stuff a lot in Remote Sensing work to detect various stuff on the Earth's surface and they are very powerful, but I've never used one this way...

I guess if this is the case, then MMT would use a cascaded series of Structuring Elements of dyadically scaled sizes to create images (maybe two of them... one lighter, one darker) along w/ an indication of how much lighter / darker. Then the threshold value allows isolation of small absolute 'MAD' values for further processing (noise reduction or structure enhancement).

Again, noise structures would have small MAD values, whereas true image structures would have larger ones... we can test this, as PixelMath and the morphological transform process can be made to replicate the multiscale decompositions at various scales...
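Here's a sketch of that test: a multiscale decomposition built from median filters of dyadic window sizes — a simplified stand-in for the tophat / structuring-element machinery, my construction rather than PI's code. Note how a single hot pixel lands as a large coefficient in the smallest-scale layer, which may be why MMT preserves salt n' pepper unless the threshold or the adaptive control is raised:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_layers(img, n_scales=3):
    """Multiscale median transform sketch: replace linear smoothing with
    a median filter of dyadic window size; detail layer = difference of
    successive smoothings. img == sum(layers) + residual, exactly."""
    layers, smooth = [], img.astype(float)
    for j in range(n_scales):
        size = 2 * 2**j + 1                      # windows of 3, 5, 9 ...
        s = median_filter(smooth, size=size, mode='reflect')
        layers.append(smooth - s)                # detail at scale j
        smooth = s
    return layers, smooth

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))
img[10, 10] = 100.0                              # a single hot pixel
layers, residual = median_layers(img)
print(np.allclose(sum(layers) + residual, img))  # True: exact reconstruction
print(abs(layers[0][10, 10]) > 50)               # True: hot pixel -> big
                                                 # small-scale coefficient
```

Because the median removes the hot pixel from the smoothed image in one step, the entire spike ends up in the first detail layer as a very large coefficient — well above any reasonable MAD threshold, hence treated as "structure" and kept.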

Jim
 