NormalizeScaleGradient

If you still have problems, I would recommend contacting @Juan Conejero.

John ....
I have now solved my problems :cool:
I could not log into the system to download the latest version of PixInsight, because I had not told "HQ" that I had changed my email address!

All is good now ... so thanks for your time anyway.
Mike
 
Hi PI script users,
I wanted to share some of my experience with the wonderful new NormalizeScaleGradient (NSG) script developed by John Murphy.

I had 100 x 600 sec raw images of M101 with the poor and varying background typical of my urban Hong Kong site. Imaging altitudes were from about 30 to 55 degrees, taken over several nights. Honestly, I had my doubts anything good could come out of it.

What follows is not comprehensive; the very clearly written documentation file (click at the bottom of the main NSG script dialog box) is helpful for understanding proper usage. Adam Block Studios also has a series of videos (in the Horizons subscription area) that are very helpful.

I Blinked the images and found the best reference image, which did have a big gradient, but flatter than the others. I then selected 7 of the worse images to try out and set up the NSG script. It does not take long to run 7 images, but it takes a long time for 100.

A powerful feature within the script that John Murphy developed is the Contour Plot. This is where you can see the plotted points and the calculated correction contour line. See the attached RGB contour. At the bottom of the dialog box (cut off in my attachment) are sliders where you can adjust the position of the plot. Areas with no dots probably pass through something that was rejected (like a big star or the core of a galaxy).

The default Contour Plot position is through the middle of the plot. If you rerun the script with different smoothness settings, you see how closely the contour follows the data. For my data, I lowered the smoothness from the default 2.0 to the 0.8 shown in order to adequately follow the data points. The better the contour lines follow the data, the more uniform (from image to image) the background normalization, and the less troublesome DBE on the integrated image will be. The lower the smoothness setting, the more computation the program requires: for 100 images, it took over 1 hour on my Dell XPS15 laptop.

After image integration, DBE, and a host of post-calibration steps, I could pull the attached images out of the data. My meager setup is a Zenithstar Z103 APO at 560mm with a QHY168C cooled camera, with only an IR filter in use. My skills in post-processing are not great, but they are getting better as I learn from the Adam Block Studios videos, which tell the story behind the processes and include complete processing examples.

In the Sample Generation Plot you see all the reference data points, and the areas kept clear of points. One impressive feature is the ability to zoom in and out with a scrolling mouse without losing track of what you want to see. Much better than zooming normal PI views. Try it.

I highly recommend you try the NSG script, even if you do not have varying gradients. It does a good job of providing an accurate weighting factor to be used during ImageIntegration.

Thanks, John, for this great script! And thanks, Adam, for your great video series and insights on using it.

Roger
 

Attachments

  • LRGB Crop 1 to 2.jpg (277 KB)
  • NSG Auto plus Manual Rejection stars and areas.jpg (622.9 KB)
  • NSG Countour Plot.jpg (500.2 KB)
  • NSG Photometry Plot.jpg (473.9 KB)
  • Photometry Stars of Selected Image.jpg (536.7 KB)
  • Reference Image Readout.jpg (590.8 KB)
  • LRGB no crop resize.jpg (243.3 KB)
Thank you for your positive feedback!

I am about to release version 1.1, which will have the following improvements:
  • Provide a new 'ImageIntegration' option. If selected, after exiting NormalizeScaleGradient, the ImageIntegration dialog will be displayed. It is initialized with both the normalized images, and the correct settings for integrating fully normalized images. Note that the user still needs to specify the other settings (for example, the 'Rejection algorithm'). The user can either use it directly or save it to a process icon for later use.
  • Allowed more space for the 'Altitude' column to fix possible display problems on MacOS or large screens.
  • Improved the image weight calculation. Thanks to Adam Block for reporting the issue.
  • I have also implemented some user interface changes suggested by Adam Block:
  1. Moved the 'Target images' section above the 'Reference image' section because the 'Set reference' button (Reference section) is used after loading the target images.
  2. Highlighted the reference image in the 'Target images' table by using a green italic font.
  3. When creating a process icon, I now also save the target image selection.
I am currently updating the documentation.
Regards,
John Murphy
 
John

The improvements sound great. I have used the tool twice now on images taken during summer twilight and it works wonders for starry fields (imaging two LDN objects in Vulpecula).
Would it be possible to have an option to create a text or CSV file listing the frames and their NWEIGHTs? It is more a matter of interest and reference, as I use the keyword in the ImageIntegration process in any case. Or maybe, as a final output, all the images and their weights could be listed, highlighting the highest calculated NWEIGHT?
Thank you

Roberto

First result using the tool here: LDN807 plus others

 
Version 1.1 displays the following summary on completion:
Summary.png

It should be possible to copy and paste to create a csv file.
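As a rough sketch of the kind of post-processing Roberto asked about (the summary layout and filenames here are hypothetical, since the real output format may differ), the pasted text could be turned into CSV with a few lines of JavaScript:

```javascript
// Hypothetical sketch: the real NSG summary layout may differ.
// Convert pasted whitespace-separated summary lines (filename NWEIGHT)
// into CSV rows.
function summaryToCsv(summaryText) {
    return summaryText
        .trim()
        .split("\n")
        .map(line => line.trim().split(/\s+/).join(","))
        .join("\n");
}

const pasted = "M101_001_r.xisf 1.000\nM101_002_r.xisf 0.873";
console.log(summaryToCsv(pasted));
// M101_001_r.xisf,1.000
// M101_002_r.xisf,0.873
```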
I will try to finish it and get it released very soon!
John Murphy
 
Excellent script John. Thank you! I have a question about the weight keyword generated by the script.

WBPP writes a weight keyword (WBPPWGHT) to registered files.
NSG writes a different weight keyword (NWEIGHT).

Which should be used when integrating the _nsg files?

Thank you!
 
The noise-evaluated weight is highly dependent on the brightness scale factor. By using star photometry, NSG can measure the scale factor very accurately. I would therefore strongly recommend using the noise-evaluated weight from NSG. After each target file has been normalized (the accurate scale factor has been applied), the script then uses the PixInsight MRS noise evaluation. This produces a very robust result.

If you have a set of images that were affected by clouds, compare the noise from good and bad images. You should find that NSG always assigns a lower weight to the bad images. Compare with other processes that assign a weight. You might be surprised by the results!
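To illustrate that relationship with made-up noise values (a sketch, not the script's actual code path): after normalization, the weight is just the squared noise ratio, so a cloud-affected frame with higher measured noise is down-weighted automatically.

```javascript
// Sketch with hypothetical noise values: after normalization, the
// NSG-style weight of a frame is (refNoise / frameNoise)^2, so noisier
// (e.g. cloud-affected) frames get proportionally lower weights.
function nsgWeight(refNoise, frameNoise) {
    const noiseRatio = refNoise / frameNoise;
    return Math.pow(noiseRatio, 2);
}

const refNoise = 0.0010;                  // reference frame noise (made up)
console.log(nsgWeight(refNoise, 0.0011)); // clear frame: weight close to 1
console.log(nsgWeight(refNoise, 0.0025)); // cloudy frame: much lower weight
```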

I have improved the weight calculation in v1.1 to have an even higher accuracy, which is coming very soon.
John Murphy
 
Where can I find this script? I am using the latest version of PI (1.8.8-8) and I am not able to find it.

Maybe I have to add a new repository to the default list, I had to do it to get the EZ suite.

Thanks,
 
It should be in the SCRIPTS > Batch Processing menu, just above WBPP.
 
I am still testing, but this is likely to be the final version 1.1
Regards, John Murphy

[Updated to v1.1 Beta 2, a very minor change to console output.]
 

Attachments

  • NormalizeScaleGradient.zip (101.5 KB)
Hi, how do I get this version into PixInsight? I've tried downloading it to the scripts folder, but when I start PixInsight I'm still seeing v1.0.

jeff
 
Install:
  1. Unzip the script to a folder of your choice.
  2. In the PixInsight SCRIPTS menu, select 'Feature Scripts...'.
  3. Select 'Add' and navigate to the folder.
  4. Select 'Done'.
The script will now appear in 'SCRIPTS > Batch Processing > NormalizeScaleGradient'.
 
Hi @jmurphy, congratulations on this impressive work!

I have a question regarding the NWEIGHT values computed by the script. As far as I understand, the weights are computed by these lines:

JavaScript:
let noiseRatio = refNoise / noiseMRS[0];
let weight = Math.pow( noiseRatio, 2 );

where refNoise is the noiseMRS of the reference frame. This means that the weight of each image is k/var(), where k = refNoise^2. This reminds me of the weighting function used by ImageIntegration here (formula [13]): if we integrate the NSG-corrected images and no scaling is applied, the weighting formula becomes 1/var() for each frame, which is the same as NSG uses except for a global scaling factor of refNoise^2 (and that has no impact on the final integrated image, since weights are normalized).

I would expect, then, that integrating the NSG-corrected frames using ImageIntegration's noise-based weighting strategy leads to exactly the same result as using the NWEIGHT values (in short, the weights in the two cases should be the same). In the end, once the images have been NSG-corrected, the sub-optimal weighting by noise could be applied without any scaling factor, which looks to me like what the NSG script does.

@ngc1535 and I quickly ran this test, but we found two slightly different weight values for the same frame.
So, my question is: is this unexpected? Or maybe I misunderstood something, and we should expect different weights between the two methods?
 
I like the level of detail you have gone to!
  1. You are correct; I am using exactly the same weight formula as ImageIntegration.
  2. Yes, I also use the same PixInsight method to calculate the noise, noiseMRS. I believe that this PixInsight method does a great job of estimating the noise within an image.
  3. Using noiseMRS on its own is not sufficient. The image (brightness) scale factor is crucial. If an image is scaled by a factor of 2, the measured noise will also increase by the same factor. Since the image weight is proportional to 1 / noise^2, an invalid scale factor will also have a very strong impact on the calculated weight. In NSG I have already applied the scale factor to the normalized images, so I don't need to include the scale factor in the equation (that would apply it twice).
  4. Yes, NSG does produce different weights to ImageIntegration. Under some conditions they can be very different.
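A toy numeric illustration of point 3 (all values invented): if the measured noise still carries a brightness scale factor k, a 1/noise^2 weight ends up wrong by a factor of k^2.

```javascript
// Toy numbers: scaling a frame's brightness by k scales its measured
// noise by k as well, so a weight of (refNoise/noise)^2 computed before
// the scale factor is removed is wrong by a factor of k^2.
const k = 2;                // brightness scale factor between frames (invented)
const refNoise = 0.002;     // reference frame noise (invented)
const trueNoise = 0.002;    // target frame noise at the reference scale

const measuredNoise = k * trueNoise; // noise measured on the scaled frame
const naiveWeight = Math.pow(refNoise / measuredNoise, 2);
const correctWeight = Math.pow(refNoise / trueNoise, 2);

console.log(naiveWeight);   // 0.25: under-weights the frame by k^2
console.log(correctWeight); // 1: correct once the scale is normalized out
```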
So why does NSG produce a different result to ImageIntegration? It is all down to how the scale factor is calculated. NSG uses stellar photometry to calculate the scale factor. I believe that this produces a robust and accurate scale factor.

Without using stellar photometry, accurately calculating the scale factor is surprisingly difficult. For example, consider a pixel in the reference and target image (at the same x,y coordinate). How much of the brightness difference is due to the offset (addition) and how much is due to scale (multiplication)? At a pixel level it is totally ambiguous. To estimate the scale factor, it is necessary to look at many pixels - usually the whole image. Unfortunately, due to gradients, differing star profiles, background sky brightness and different levels of noise, there is no perfect way of doing this. There is nothing wrong with the approach ImageIntegration uses, it is simply trying its best to solve an ill-posed problem.
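A toy example of that ambiguity (invented values; NSG itself uses star photometry rather than a raw pixel fit): a single (reference, target) pair cannot separate offset from scale, but a straight-line fit over many pairs recovers both.

```javascript
// One pixel is ambiguous: ref=100, target=120 fits "offset +20",
// "scale x1.2", or any mix of the two. Many pixels pin down both terms
// via a straight-line fit: target = scale * ref + offset.
const refs    = [50, 80, 100, 150, 200];               // invented pixel values
const targets = refs.map(r => 1.2 * r + 10);           // scale 1.2, offset 10

// Ordinary least-squares line fit.
function fitLine(xs, ys) {
    const n = xs.length;
    const mx = xs.reduce((a, b) => a + b, 0) / n;
    const my = ys.reduce((a, b) => a + b, 0) / n;
    let num = 0, den = 0;
    for (let i = 0; i < n; i++) {
        num += (xs[i] - mx) * (ys[i] - my);
        den += (xs[i] - mx) * (xs[i] - mx);
    }
    const scale = num / den;
    return { scale: scale, offset: my - scale * mx };
}

const fit = fitLine(refs, targets);
console.log(fit.scale.toFixed(2), fit.offset.toFixed(2)); // 1.20 10.00
```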

The errors ImageIntegration's normalization makes in calculating the scale really do matter. They do not cancel out because the error is likely to be different from image to image.

You will not be surprised to learn that Juan Conejero was already aware of these issues. After I had finished NSG, I learnt that he had been planning to write a normalization method based on stellar photometry. I just beat him to it. This is allowing him to get on with other tasks - he always has a lot on his todo list!

Regards, John Murphy
 