NormalizeScaleGradient

I am also thinking about adding an optional 'fast' mode to NSG. This would reduce the number of stars used to determine the scale (while keeping enough for accuracy) and speed up the gradient correction by only correcting the dominant gradient. This would normally be sufficient for narrow band images, and may also often be sufficient for wide band if the field of view is less than 1 degree. Would anyone find this useful?

Regards, John Murphy
John,
Speeding up the NSG processing is good, and it seems this would be applicable to images with minimal variation in gradients between them. But if the user examined his images and then lowered the gradient smoothness from 2 to perhaps 1, it seems more stars would be needed to detect and fix the increased gradient detail, and I think that holds regardless of whether the field is >1 deg or <1 deg. Maybe my understanding of the speed-up method is wrong.

My best practice is to blink the images and visually find the best (least complex) and worst gradient images. The best becomes the reference; then I select the worst, look at the gradient graph, and adjust the gradient smoothness correction for that image. Yes, the NSG run time is longer, but what matters is the result, not the run time.

My gradients are mostly cloud based, and not so much elevation based, so I suspect the <1 degree / >1 degree distinction is less applicable to me. In any case I am in the >1 deg category, so the processing reduction you mention would not apply. Again, my understanding of how the speed-up method would work may be (probably is) wrong.

I am hopeful star photometry can be worked into image integration as the standard for weighting images. Adam Block has clearly and factually shown several times that noise evaluation is tricked by high median backgrounds, and my typical background fits into this category. I always use your NSG script, so really that is a non-issue for me, as NWEIGHT is input into image integration.

John, I have learned more from you about "noise" in the past 6 months, than in the past 5 years of imaging. Thank you for sharing your expertise.

Roger
 
My best practice is to blink the images and visually find the best (least complex) and worst gradient images. The best becomes the reference; then I select the worst, look at the gradient graph, and adjust the gradient smoothness correction for that image. Yes, the NSG run time is longer, but what matters is the result, not the run time.
Yes, that is good practice :)

My gradients are mostly cloud based, and not so much elevation based, so I suspect the <1 degree / >1 degree distinction is less applicable to me. I am in the >1 deg category, so the processing reduction you mention would not apply.
Correct, in these cases, a complex gradient correction will always be necessary.

John, I have learned more from you about "noise" in the past 6 months, than in the past 5 years of imaging. Thank you for sharing your expertise.
That's great to hear. Thanks! :)
 
This is fairly minor. When an NSG configuration is saved as a process icon and then loaded back into the script, the list of target files is not parsed correctly if there is a comma somewhere in a file's path. NSG splits each such file path into two incomplete paths at the comma. I don't know if there's a way to use delimiters to fix this, or maybe use a list of target files rather than stuffing them all into one parameter? Or maybe this is just a limitation of scripting?

John
I now store the target file names individually, as a list. This should solve the issue. The fix will be in the next release of PixInsight (-10), which should arrive very soon.
 
John,
After running the NSG script, with the images sorted from high to low NWEIGHT, is there a way to judge whether including a low-NWEIGHT image will hurt or help the integrated noise level? I know this is not a simple question, because many different situations (combinations) are involved; the number of images and the noise range of the particular set to be integrated will both affect the answer.
I guess I am asking whether images below some NWEIGHT exclusion point/percentage can actually hurt the final integrated image, and if so, how do you judge whether the result was hurt or helped?
I am thinking of doing a series of tests with different numbers/combinations of low/high NWEIGHT images, but I just don't know how to judge the results.
Thanks,
Roger
 
The SNR Max script tends to indicate that including all is better. However, I'm not sure how relevant that script is.
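For what it's worth, the SNR Max indication agrees with the standard inverse-variance stacking result: if weights are proportional to the squared signal to noise ratio (which is how NWEIGHT is described later in this thread), adding a frame can never reduce the stacked SNR, whereas with equal weights a poor frame genuinely drags the stack down. A simplified sketch with made-up numbers, not NSG's or ImageIntegration's actual computation:

```python
import math

def stacked_snr(snrs, weights):
    """SNR of a weighted mean of normalized frames that share the same
    signal (set to 1) but have per-frame noise sigma_i = 1 / snr_i."""
    sigmas = [1.0 / s for s in snrs]
    return sum(weights) / math.sqrt(sum((w * sg) ** 2
                                        for w, sg in zip(weights, sigmas)))

snrs = [10.0, 10.0, 10.0, 3.0]            # three good frames, one poor one
equal       = stacked_snr(snrs, [1.0] * 4)
snr_sq      = stacked_snr(snrs, [s * s for s in snrs])  # SNR^2 weights
poor_culled = stacked_snr(snrs[:3], [s * s for s in snrs[:3]])

print(round(equal, 2), round(snr_sq, 2), round(poor_culled, 2))
# With SNR^2 weights, keeping the poor frame beats culling it;
# with equal weights, the poor frame drags the stack down.
```

In this toy model the SNR^2-weighted stack that includes the weak frame comes out slightly ahead of the stack that excludes it, while the equally-weighted stack falls well behind both.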
 
I took a run at doing the drizzle integration as described back a ways. Even with those settings I could not get many, many images to match stars. The slightest rotation seemed to throw it off. Has anyone gotten it to work with real world data?

Looking forward to when I can use it with Drizzle!!
 
I took a run at doing the drizzle integration as described back a ways. Even with those settings I could not get many, many images to match stars. The slightest rotation seemed to throw it off. Has anyone gotten it to work with real world data?

Looking forward to when I can use it with Drizzle!!
Yes, any significant rotation could easily be enough to make too many star matches fail, because their separation exceeds the maximum 'Photometry Star Search' -> 'Star search radius' value of 20.
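To see how little rotation it takes to exceed a 20 px search radius: a star a distance r from the rotation center moves along a chord of length 2·r·sin(θ/2) when the frame rotates by θ. A quick sanity check with illustrative numbers (not taken from NSG):

```python
import math

def rotation_displacement(radius_px, rotation_deg):
    """Chord length a star moves when the frame rotates by rotation_deg
    about the rotation center, for a star radius_px from that center."""
    theta = math.radians(rotation_deg)
    return 2.0 * radius_px * math.sin(theta / 2.0)

# A star 2000 px from the rotation center, rotated by just 1 degree,
# moves well beyond the 20 px search radius...
print(round(rotation_displacement(2000, 1.0), 1))  # ~34.9 px
# ...while a star 500 px from the center stays within it.
print(round(rotation_displacement(500, 1.0), 1))   # ~8.7 px
```

So on a large sensor even a degree of frame rotation defeats matching for stars far from the rotation center, which fits the behavior described above.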

There are some simple NSG - drizzle solutions that 'appear' to work, but depending on the method, either you don't get the promised extra resolution, or you lose the normalization that NSG applied. So unfortunately you need to wait for the C++ version. Until I finish the C++ version, there is no good way of making it work with drizzle.

I have used NSG on unregistered data, but this was from a single session so the rotations were insignificant. I did of course need to set the 'Photometry Star Search' -> 'Star search radius' value to its maximum of 20. I ran NSG this way to test the effect of registration on the photometry. Most registration methods involve interpolation, and these algorithms might not fully conserve star flux. It did occasionally make a measurable difference, but fortunately this difference was far too small to be a problem.

Regards, John Murphy
 
Indeed, thanks John, will wait. Maybe even patiently. ;)

This is probably worse for those with manual rotators (i.e. human hands involved) night to night, as we are probably more off on rotation than those with electronic rotators, or fixed installations.
 
[Screenshot: 1636740027848.png]

This major new version is included in the new PixInsight 1.8.8-10 release.
I provide this software for free, and dedicate a great deal of time and effort to write, update and support it. To continue to do this, I need your help and support. If you find my software useful, please 'buy me a coffee' (a small donation) at https://ko-fi.com/jmurphy It will be really appreciated. Thanks!

Major changes include:
  • I now automatically save all the settings on exit. If you wish to return to the defaults, use the reset button at the bottom.
  • New 'Image scale' button (Reference image section). This can be used to override the focal length and pixel size stored in the FITS header.
  • Reference Image and Output Directory text boxes are now editable. Previously they were grayed out and could only be modified by using the folder buttons or the 'Set reference' button.
  • 'Auto exit' check box. If selected, the script will exit on completion. This was added to prevent accidentally restarting the process. This could happen because when the script finishes, the script dialog grabs the keyboard focus. The 'Run' button was the last button pressed, so this has keyboard focus...
  • Gradient Correction: Gradient graph. This dialog can now display an image of the reference/target frame:
[Screenshot: 1636741100328.png]

  • First check the image for any ultra bright saturated stars that don't have a red circle around them (ignore medium and dim stars). On very rare occasions, a really bright saturated star can be missed by the star detection algorithm. In these cases, a gradient sample rejection circle should be added manually. This is done in the Sample Generation dialog.
  • Double click on the brightest star. Deselect 'Image' and check the graph. Due to scattered light, bright stars can produce sharp peaks or troughs in the graph that vary too quickly to be modeled accurately. The script automatically rejects gradient samples around these bright stars to prevent this. If the remains of a peak / trough can be seen, you need to use the Sample Generation dialog to increase the size of the rejection circles.
[Screenshot: 1636741673479.png]

Deselect 'Image'. This graph shows the gap where gradient samples that were too close to the bright star have been successfully rejected. The original peak is no longer visible.

[Screenshot: 1636741805965.png]


Select 'Image' to display the image again and double click on the brightest part of the galaxy or nebula. Deselect 'Image' to view the graph. Adjust the 'Gradient Smoothness' until the graph follows the gradient trend but not the noise. If the graph appears to follow the brightness of the galaxy or nebula itself, increase the gradient smoothness until the line follows the gradient trend instead. This will ensure artifacts are not introduced.

It is important to understand that the graphs show the relative gradient between the reference and target image, not the actual gradient. The purpose of this program is not to eliminate the gradient; instead, the corrected target images end up with the same gradient as the reference image. Also, the gradient sample points are not used to determine the background sky brightness; they measure the difference between the two images. For this reason, the samples do not need to avoid nebulae or galaxy spiral arms. For an extended nebula, it is essential that samples do cover the nebula.
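The point that samples measure the *difference* between the two images can be shown with a deliberately simplified, noiseless 1-D sketch. This is illustrative code only, not NSG's actual algorithm (which fits a smooth 2-D surface whose stiffness is set by 'Gradient Smoothness'):

```python
def fit_line(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs     = [0, 1, 2, 3, 4]
ref    = [10 + 0.5 * x for x in xs]   # reference frame: gentle gradient
target = [9 + 2.0 * x for x in xs]    # target frame: steeper gradient
scale  = 1.0                          # photometric scale, assumed already known

# The samples measure the difference between the frames,
# not the sky background of either one:
diff = [r - scale * t for r, t in zip(ref, target)]
m, b = fit_line(xs, diff)

# Adding the smooth fit back gives the target the reference's gradient:
corrected = [scale * t + (m * x + b) for t, x in zip(target, xs)]
print(corrected)  # identical to ref in this noiseless example
```

Because the fit is applied to the difference, any real structure (nebula, spiral arms) present in both frames cancels out of `diff`, which is why the samples are allowed to cover it.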

I think this covers all the major changes!
Regards, John Murphy

Script is now attached to:
 
John, there's much ado about the new PSF Signal weighting in the latest PI release. Does that in any way affect how one would approach using NSG? Is using NSG and weighting on NWEIGHT still the best practice over simply using PSF Weight? Or would one consider using the normalized files from NSG with PSF Weight instead of NWEIGHT? Is PSF Weight just a better replacement for the old noise evaluation, with NSG+NWEIGHT better still?
 
John, there's much ado about the new PSF Signal weighting in the latest PI release. Does that in any way affect how one would approach using NSG? Is using NSG and weighting on NWEIGHT still the best practice over simply using PSF Weight? Or would one consider using the normalized files from NSG with PSF Weight instead of NWEIGHT? Is PSF Weight just a better replacement for the old noise evaluation, with NSG+NWEIGHT better still?
'NSG-NWEIGHT', 'PSF signal' and 'PSF power' all use stellar photometry to measure the astronomical signal, but they use different approaches. Each method will give slightly different results.

Others and I have thoroughly tested the NSG weighting algorithm, and as far as I know it has passed every test really rather well. This includes imaging through thin cloud, light pollution gradients, twilight conditions, moonlight, different exposure times and varying airmass.

I haven't tested the new weighting algorithms yet, but I would expect them to also produce excellent results. If one of these algorithms is shown to equal or exceed NSG in both reliability and accuracy, I will of course update NSG to use it.

So, in summary, I can't tell you which method is best. I think it is a case of try it and see...
 
NSG 1.4.4 is now available as a PixInsight -10 update.
[Screenshot: 1637322488996.png]

I have added a new Images: text box (just below the target image table, on the left hand side). It displays the number of target images. This allows the user to check that they really have loaded all the files they intended to. I decided against adding a column that displays the nth file integer because the table columns are sortable.

To see more of the NSG updates included in the PixInsight -10 release, see message:

I have also updated the code to make use of the excellent new PixInsight feature that allows applications to save the process history in the saved .xisf file.
[Screenshot: 1637322864521.png]
 

Attachments

  • NormalizeScaleGradient.zip
    109.6 KB · Views: 66
Hi John,

I'm not sure how useful this might be, but here goes anyway...

In another thread (about the latest release of PI and its new noise algorithms) you were kind enough to explain their relationship, now and possibly in the future, to NSG. Since then I have processed an image of NGC 6992 and had a chance to see the difference between the PSF Signal Weight and NSG's internal weighting. The answer seems to be: not much! Which I assume is a good thing. :)

As I think I understand it, any subs that NSG thinks are "better" than NSG's reference file get a rating greater than the reference file. So the reference file gets a rating of 100 and anything above that is considered "better". What I did was select a reference file using "SS > Measurements table > PSF Signal Weight" list. As the screenshot below shows the weighting applied by NSG seems pretty much the same as PSF Signal Weight.

[Screenshot: Screenshot 2021-11-17 130229.jpg]


Cheers, Jim
 
Hi John,
Can you give a brief description of how NSG develops the weight of a sub if that sub has fewer photometry stars than other subs?

Thks,
Roger
 
John, there's much ado about the new PSF Signal weighting in the latest PI release. Does that in any way affect how one would approach using NSG? Is using NSG and weighting on NWEIGHT still the best practice over simply using PSF Weight? Or would one consider using the normalized files from NSG with PSF Weight instead of NWEIGHT? Is PSF Weight just a better replacement for the old noise evaluation, with NSG+NWEIGHT better still?
I came to this forum today to ask the same questions. We will need to study. I am sure Adam is buzzing around studying this with some examples from his different data. Hopefully his insightful video will pop up shortly!?

Now if we just had a good way to measure (compare) which of 2 (or more) integrated images of the same raw data is better, each with different parameters selected in NSG or in Image Integration. Perhaps the new PSF Power can do it. Juan described it in the 1.8.8-10 release announcement as:
  • The new methods generate universally comparable weights. This means that, in principle and without the influence of other external factors, one can compare weights calculated for different data sets.
Roger
 
Hi John,
Can you give a brief description of how NSG develops the weight of a sub if that sub has fewer photometry stars than other subs?

Thks,
Roger
  1. NSG detects the stars in both the reference and target images. Some of these detected stars might not be real. Others might be saturated or too close to saturation.
  2. It looks for the brightest value in each image. It assumes this is the saturation value for that camera, and only uses stars whose peak value is below 0.7 times this 'saturation' value. If the frame does not contain any saturated values, this may be an underestimate. However, an underestimate does no harm provided there are enough stars; an error in the other direction would reduce accuracy.
  3. It matches the target stars to the reference stars. It does this in two stages. The initial match uses a search window. It then calculates a rough scale factor between the two images. On the second pass we know approximately what brightness the matched star should have from our rough scale factor. For a successful match the star must be within the search window and be close to the expected brightness. Both parameters are settable within the user interface, but the defaults usually do a good job.
  4. We now have matched pairs of stars; the reference star and the same star in the target image. Steps (2) and (3) have removed the saturated stars and the falsely detected stars.
  5. The star flux is measured using aperture photometry. The aperture is calculated from the shape of both stars (the union of their bounding rectangles). This aperture tracks with the star centers. These two steps ensure that we are measuring exactly the same part of the sky even if the images are distorted or when the star profiles are significantly different.
  6. We can now calculate the scale accurately. This is done by doing a linear fit through all the star pairs (after removing the worst outliers). The scale is the gradient of this line.
  7. NSG calculates the signal to noise ratio. We know the relative astronomical signal between the reference and target images (this only includes the light that actually came from the star/galaxy/nebula. The sky brightness has been subtracted). The formula is simply: (Relative astronomical Signal) / (Relative Noise)
  8. The weight is this signal to noise ratio squared.

For example, if you took a 1 minute and a 4 minute exposure in identical conditions, the signal to noise ratio would be 2 (the square root of 4). The 4 minute exposure is clearly worth 4 × the 1 minute exposure, so the weight clearly needs to be 4 (the square of the signal to noise ratio).
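The steps above can be sketched in a few lines. This is an illustration, not the NSG source: the fit here is forced through the origin and the relative noise is simply assumed (NSG fits a general line after outlier rejection, and estimates the noise from the images themselves):

```python
def slope_through_origin(ref_flux, tgt_flux):
    """Least-squares line through the origin: slope = sum(r*t) / sum(r*r)."""
    return (sum(r * t for r, t in zip(ref_flux, tgt_flux))
            / sum(r * r for r in ref_flux))

# Matched star flux pairs (steps 3-5); the target has 4x the exposure.
ref_flux = [100.0, 250.0, 400.0, 800.0]
tgt_flux = [f * 4.0 for f in ref_flux]

rel_signal = slope_through_origin(ref_flux, tgt_flux)  # step 6: scale = 4
rel_noise  = 2.0      # assumed: shot noise grows as sqrt(signal)
snr_ratio  = rel_signal / rel_noise                    # step 7: = 2
weight     = snr_ratio ** 2                            # step 8: = 4
print(rel_signal, weight)
```

With these numbers the sketch reproduces the 1 minute vs 4 minute example above: relative signal 4, relative noise 2, so the weight comes out as 4.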
 
  1. NSG detects the stars in both the reference and target images. Some of these detected stars might not be real. Others might be saturated or too close to saturation.
  2. It looks for the brightest value in each image. It assumes this is the saturation value for that camera, and only uses stars whose peak value is below 0.7 times this 'saturation' value. If the frame does not contain any saturated values, this may be an underestimate. However, an underestimate does no harm provided there are enough stars; an error in the other direction would reduce accuracy.
  3. It matches the target stars to the reference stars. It does this in two stages. The initial match uses a search window. It then calculates a rough scale factor between the two images. On the second pass we know approximately what brightness the matched star should have from our rough scale factor. For a successful match the star must be within the search window and be close to the expected brightness. Both parameters are settable within the user interface, but the defaults usually do a good job.
  4. We now have matched pairs of stars; the reference star and the same star in the target image. Steps (2) and (3) have removed the saturated stars and the falsely detected stars.
  5. The star flux is measured using aperture photometry. The aperture is calculated from the shape of both stars (the union of their bounding rectangles). This aperture tracks with the star centers. These two steps ensure that we are measuring exactly the same part of the sky even if the images are distorted or when the star profiles are significantly different.
  6. We can now calculate the scale accurately. This is done by doing a linear fit through all the star pairs (after removing the worst outliers). The scale is the gradient of this line.
  7. NSG calculates the signal to noise ratio. We know the relative astronomical signal between the reference and target images (this only includes the light that actually came from the star/galaxy/nebula. The sky brightness has been subtracted). The formula is simply: (Relative astronomical Signal) / (Relative Noise)
  8. The weight is this signal to noise ratio squared.

For example, if you took a 1 minute and a 4 minute exposure in identical conditions, the signal to noise ratio would be 2 (the square root of 4). The 4 minute exposure is clearly worth 4 × the 1 minute exposure, so the weight clearly needs to be 4 (the square of the signal to noise ratio).
So in steps 4, 5, and 6, does it not matter that the same stars may not be matched to the reference in (for example) image A, image B and image C? I think this is my question, as it seems to me that different sets of matches in different images will give different results.

Of course I trust all you are doing, just trying to better understand it.... sorry John, but Adam Block made me this way!?

Roger
 
So in steps 4, 5, and 6, does it not matter that the same stars may not be matched to the reference in (for example) image A, image B and image C? I think this is my question, as it seems to me that different sets of matches in different images will give different results.

Of course I trust all you are doing, just trying to better understand it.... sorry John, but Adam Block made me this way!?

Roger
Let's suppose the reference image is A, and that B and C are target images.
  1. When we compare A with B, we find matching stars that were detected in both A and B and were in the detector's linear range. We could determine the scale from any of these matches, but using a linear fit of all the matches is even better.
  2. When we compare A with C, we follow the same process. The stars used for A - C might not be the same as those used for A - B. For example, a matched star in C could be too bright, so that matched pair is rejected, or the matching star was not detected (it might be too faint or slightly off the field of view). Does it matter if we have a few missing pairs, or a few extra matched pairs (or both)? Provided the detector is linear and we have correctly matched the stars, the result is likely to be just as good.
You can see this for yourself. Look at the photometry graph within NSG. You should be able to see that removing a couple of points will have very little effect on the best fit line.
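A quick numeric check of that claim, using synthetic flux pairs (illustrative values only, not real photometry): dropping a few matched pairs barely moves the fitted slope.

```python
import random

def fitted_slope(pairs):
    """Slope of the ordinary least-squares line through (x, y) pairs."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)  # reproducible 'measurement' scatter
pairs = [(x, 2.0 * x + random.uniform(-5, 5)) for x in range(50, 1050, 20)]

full    = fitted_slope(pairs)       # all 50 star pairs
reduced = fitted_slope(pairs[3:])   # three pairs removed
print(round(full, 3), round(reduced, 3))  # both remain very close to 2.0
```

With 50 well-spread pairs and modest scatter, the slope (i.e. the scale) shifts only in the third decimal place when pairs are dropped, which matches the photometry-graph observation above.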

So what does affect the accuracy? The photometry is being done on registered stars. This is not ideal because almost all registration algorithms do not fully conserve star flux. This will introduce an error. Ideally I would like to measure matching stars in the unregistered images. To do this I would need to determine the transform required to register the images without actually doing the registration. I would then still be able to find the same star in the target frame, having accounted for rotations and translations. If the photometry error due to registration is significant compared to the typical error in the noise estimate, this would produce better results. However, I have no plans to do this in JavaScript!

[Screenshot: 1637665877999.png]

The data plotted in this graph was from a single imaging run, with no meridian flip. By increasing the size of the search window, NSG can run on either the unregistered or registered images and still match the stars correctly. You can see that the registration did reduce the accuracy (the difference between the red and blue lines).
 
Hi John,
You are really, really good at answering questions in an understandable way.
Yes, a few missing points on the linear fit is not going to greatly affect the slope. On one data set I played around with rejecting more and more of the points in the linear fit, and the slope only marginally changed.
The robustness of the NSG script, and its author, is unquestionable!
Best regards,
Roger
 
Thanks for the great script!

Question: Is there a way to call the NSG script within my own PI script and feed in the necessary parameters? Looking to add this to some automated processing.
 