Good Afternoon,
I am currently working on a university project that involves writing software to process images taken by an all-sky camera. I am implementing various steps to try to improve image quality by removing background noise while preserving the stars as much as possible. Visually it is easy to judge whether a processing step was successful, but for my report I will need to include some automatically calculated measurement to show that the image quality has improved, i.e. some sort of signal-to-noise ratio.
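To make it a bit more concrete, this rough Python sketch (done outside PixInsight with astropy, so the numbers will not match the Noise Evaluation script exactly, and the file names are just placeholders) shows the kind of before/after comparison I had in mind:

# Rough sketch of the kind of metric I mean: estimate the background noise with a
# sigma-clipped standard deviation and compare it before and after processing.
from astropy.io import fits
from astropy.stats import sigma_clipped_stats

def background_noise(path):
    data = fits.getdata(path).astype(float)
    # mean/median/std of the sky background, with bright pixels (stars) clipped out
    mean, median, std = sigma_clipped_stats(data, sigma=3.0)
    return std

noise_before = background_noise("IMG01758_raw.fits")        # placeholder file names
noise_after  = background_noise("IMG01758_processed.fits")

print(f"background sigma before: {noise_before:.5g}")
print(f"background sigma after:  {noise_after:.5g}")
print(f"noise reduced by factor: {noise_before / noise_after:.2f}")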
I have been playing around with the ‘Noise Evaluation’ script, which does give me a nice figure to compare, but my problem is that I don’t really know whether it is suitable. Typical values are:
σK = 7.947e-004, N = 810826 (56.01%), J = 4
I read in one of the posts that the higher the percentage of the image used for the estimation, the better the accuracy. Is a range of 50 to 60% OK, or is that too low? A typical image can be found here, in case anyone has suggestions on how best to quantify the image quality:
https://mpoelzl.exavault.com/share/view/cr6a-c2nqibcj
The other part of this is that I don’t want to lose any signal data, i.e. stars, when enhancing an image. When I run the FWHM script it does output the number of found stars in the process console:
StarDetector 1.23: Processing view: Converted_IMG01758
851 star(s) found
0.542 s
Is this a good value to use for the star count (the idea being to confirm that the star count remains the same pre- and post-processing)? Is it possible to run StarDetector directly?
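As an additional sanity check I was also thinking of counting stars independently outside PixInsight, for example with photutils’ DAOStarFinder (this is not the PJSR StarDetector, and the FWHM/threshold values below are only guesses for my camera), roughly like this:

# Independent star-count cross-check, just to see whether the number of detected
# stars stays stable before and after processing. File names are placeholders.
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

def count_stars(path, fwhm_px=3.0, nsigma=5.0):
    data = fits.getdata(path).astype(float)
    mean, median, std = sigma_clipped_stats(data, sigma=3.0)
    finder = DAOStarFinder(fwhm=fwhm_px, threshold=nsigma * std)
    sources = finder(data - median)   # astropy Table, or None if nothing is found
    return 0 if sources is None else len(sources)

print("stars before:", count_stars("IMG01758_raw.fits"))
print("stars after: ", count_stars("IMG01758_processed.fits"))

If a cross-check like this gives roughly the same count while the background noise figure drops, I would feel more comfortable quoting the numbers in the report. Does that sound like a reasonable approach?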
Any input would be appreciated!
Thanks,
Mike