I tried the neat CosmeticCorrection Module only, since I was in mid-course on some processing when I found this thread and watched Harry's new video, but I have the following question:
How does one determine the "right" number of hot pixels to remove in operating the sliders in the Module?
I assumed it would simply be a matter of using the module to count the hot pixels in the reference MasterDark and then adjusting the slider until it flagged roughly that many. In practice, though, every setting I tried pushed me toward 2.5 to 3 sigma on "autodetect", since that made the visible hot pixels in a sample imaging sub go away (as in the video). That must have flagged far too many false hot pixels, however, because when the RGB was later combined I ended up with a lot of random uni-colored stars, suggesting many of the 'hot pixels' I removed were actually faint, smaller stars (unless I removed too few and an additional batch of hot pixels showed up instead). Any suggestions for a reliable way to gauge those settings? I obviously haven't found one.
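To illustrate why a low sigma threshold over-counts: at 2.5 sigma, ordinary Gaussian read noise alone puts a fraction of a percent of all pixels above the cut, which on a megapixel-class frame is thousands of false "hot" pixels. This is just a sketch of that counting idea, not PixInsight's actual detection logic; the master dark is assumed to be a NumPy array, and the synthetic data below is made up for demonstration:

```python
import numpy as np

def count_hot_pixels(master_dark: np.ndarray, k_sigma: float) -> int:
    """Count pixels more than k_sigma robust sigmas above the frame median.

    Uses the median and a MAD-based sigma estimate so the hot pixels
    themselves don't inflate the threshold.
    """
    med = np.median(master_dark)
    # 1.4826 * MAD approximates the standard deviation for Gaussian noise
    sigma = 1.4826 * np.median(np.abs(master_dark - med))
    return int(np.count_nonzero(master_dark > med + k_sigma * sigma))

# Synthetic "master dark": Gaussian noise plus 50 injected hot pixels
rng = np.random.default_rng(0)
dark = rng.normal(100.0, 5.0, size=(512, 512))
hot_idx = rng.choice(dark.size, size=50, replace=False)
dark.flat[hot_idx] += 500.0  # far above the noise floor

for k in (2.5, 3.0, 5.0, 10.0):
    print(f"{k} sigma -> {count_hot_pixels(dark, k)} pixels flagged")
# At 2.5 sigma, noise alone contributes well over a thousand false
# detections on this 512x512 frame; at 10 sigma only the 50 real
# hot pixels remain.
```

The takeaway is that the "right" count is closer to what a high threshold finds in the master dark itself, and small faint stars in a light sub can easily sit in the 2.5-3 sigma range that noise also occupies.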
I previously used the CosmeticCorrection SCRIPT with its default settings and then hand-edited (i.e., cloned out) the few residual hot pixels (5-20, say) in the L, R, G, and B images before continuing, comparing them against a test RGB to spot the uni-colored stars. The video gave me the idea that the MODULE would do a better job. I know I really should be using dithering to reduce this problem in the first place (that's on deck to happen).
Thanks very much.
-Jeff