Hi iceman,
I've run a couple of tests on your raw images in PixInsight.
I'll show just the luminance of the images, since the originals carry no useful color information: the red channel is almost empty. This is probably due to incorrect settings in the video capture software for the webcam. Did you use the factory defaults, or did you adjust the color balance manually?
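If you want to verify this yourself outside PixInsight, here is a minimal sketch in Python/NumPy for checking per-channel statistics and extracting a luminance image. The file names and the Rec. 709 luminance weights are my assumptions for illustration, not necessarily what your capture software or PixInsight uses:

```python
# Minimal sketch: inspect per-channel statistics and extract a luminance image.
# "stack.png" is a hypothetical file name standing in for your stacked frame.
import numpy as np
import imageio.v3 as iio

rgb = iio.imread("stack.png").astype(np.float64) / 255.0   # H x W x 3, scaled to [0,1]

for name, channel in zip("RGB", np.moveaxis(rgb, -1, 0)):
    print(f"{name}: mean={channel.mean():.4f}  max={channel.max():.4f}")
    # A near-zero mean and max in R would confirm the "almost empty" red channel.

# Weighted luminance (Rec. 709 coefficients assumed here).
lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
iio.imwrite("luminance.png", (lum * 255).astype(np.uint8))
```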
Secondly, the original images are poorly sampled for a planetary subject: the image scale, in arcseconds per pixel, is too large (i.e. the sampling is too coarse) for an appropriate rendition of significant planetary features. In other words, you'd need absolutely perfect seeing for one or two pixels to represent a small planetary feature consistently over a significant portion of the frames in your video. So try working at a higher effective focal length, e.g. with a Barlow lens, next time.
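To put numbers on that, the image scale follows directly from the pixel size and the effective focal length. A quick sketch, where the pixel size and focal length are made-up example values and not your actual setup:

```python
# Image scale in arcseconds per pixel:
#   scale = 206.265 * pixel_size_um / focal_length_mm
# (206.265 converts the micron/millimetre ratio to arcseconds.)

def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical example values, not your actual equipment:
pixel = 5.6        # webcam pixel size in microns
focal = 1000.0     # telescope focal length in mm

print(image_scale(pixel, focal))       # ~1.16 "/px at prime focus
print(image_scale(pixel, focal * 2))   # ~0.58 "/px with a 2x Barlow
```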
For the reasons above, don't expect miracles! :roll: Having said that, these are my tests:
1. Original luminance at 100% size (your first stacked image).
2a. A test with wavelets at the original size.
2b. A screenshot showing the parameters for the above image. The default 3x3 Linear Interpolation scaling function was used. I took care to avoid generating artifacts through noise intensification, which is why you see noise reduction and deringing parameters. (There is a rough sketch of this kind of wavelet layering after the list below.)
3a. Since the original is poorly sampled, it seemed a good idea to resample it up prior to wavelet processing (see the resampling sketch after this list). I resampled the image above (1) to 230% of its original size using bicubic interpolation. Why 230% and not just 200%, for example? Because the wavelet layers work at characteristic scales of 1, 2, 4, 8... pixels, so if we just double or halve the size we merely shift image structures from one layer to the next within that powers-of-two scheme. In general, this does not improve our chances of separating the image into different scale levels, whereas a non-integer factor like 2.3 redistributes structures among the layers.
3b. This is my try with wavelets. The 3x3 Small Scale scaling function was used. Note that this time the first three wavelet layers can be discarded; they contain just noise. This scaling function is better suited to splitting the image into many small-scale wavelet layers, which was necessary here because noise was present at many different layers, so a finer separation helped reduce it while sharpening the image at the same time.
3c. Finally, these are the parameters used. Don't forget that the scaling function is 3x3 Small Scale.
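Here is the resampling sketch mentioned in 3a. It is only a stand-in: SciPy's cubic zoom takes the place of PixInsight's bicubic Resample process, and the input array is a placeholder rather than your data:

```python
# Minimal sketch of the 230% upsample prior to wavelet processing.
# SciPy's cubic spline zoom stands in for PixInsight's bicubic Resample;
# "lum" is a placeholder array, not the real luminance frame.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
lum = rng.random((240, 320))          # placeholder luminance image

factor = 2.3                          # 230%, deliberately not a power of two
lum_up = zoom(lum, factor, order=3)   # order=3 -> cubic interpolation

print(lum.shape, "->", lum_up.shape)  # (240, 320) -> (552, 736)
```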
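And here is the wavelet sketch referred to in 2b: a bare-bones à trous decomposition using the separable [1/4, 1/2, 1/4] kernel, which as far as I know is the kernel behind the 3x3 Linear Interpolation scaling function, plus a weighted recombination that can discard or boost individual layers. The layer count and weights below are illustrative assumptions, not the exact parameters from my screenshots, and the noise reduction and deringing steps are left out entirely:

```python
# Minimal sketch of an "a trous" wavelet decomposition and recombination.
import numpy as np
from scipy.ndimage import convolve1d

def atrous_layers(img, n_layers, kernel=(0.25, 0.5, 0.25)):
    """Split img into n_layers detail layers plus a smooth residual."""
    layers = []
    current = img.astype(np.float64)
    for j in range(n_layers):
        # Dilate the kernel with zeros between taps (the "holes") at scale j.
        k = np.zeros(2 ** (j + 1) + 1)
        k[::2 ** j] = kernel
        smoothed = convolve1d(current, k, axis=0, mode="reflect")
        smoothed = convolve1d(smoothed, k, axis=1, mode="reflect")
        layers.append(current - smoothed)   # detail at a scale of 2**j pixels
        current = smoothed
    return layers, current                  # detail layers + smooth residual

def recombine(layers, residual, weights):
    """Weighted sum: 0 removes a layer, >1 sharpens it, 1 leaves it alone."""
    out = residual.copy()
    for layer, w in zip(layers, weights):
        out += w * layer
    return out

# Usage sketch: drop the first (noisiest) layer, boost the next two a bit.
rng = np.random.default_rng(1)
lum_up = rng.random((552, 736))             # placeholder for the resampled image
layers, residual = atrous_layers(lum_up, n_layers=5)
sharpened = recombine(layers, residual, weights=[0.0, 1.5, 1.2, 1.0, 1.0])
```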
Of course, one can go further than these results, if desired. I've tried to extract what I think is actually there, without introducing artifacts due to the noise. The parameters shown are by no means the only possible ones, nor necessarily the best.
Hope this helps.