NormalizeScaleGradient

Hi John,
This is a really amazing script! I spent the past week restacking my favorite images, and the difference is actually pretty noticeable!

One thing: I use a DSLR with a telescope, so the focal length in the FITS header for some reason defaults to exactly 50.0001 mm. In the NSG script, I can't change the focal length once it is in the FITS header, so I have to use an ImageContainer plus the FITS header script.

Is there any way to change the script so that I can change the focal length/pixel size of the reference image even if it is already written in the FITS header?

Thanks so much!
Will
 
I just tried this for the first time today after watching Adam Block's Fundamentals videos on it (nice videos, as always!) I got drizzling to work by modifying the file paths in the xdrz files to reference the _r_nsg.xisf files instead of the _r.xisf files, and kept this easy by outputting the NSG files into the same directory. Then I was able to add the xdrz files and enable drizzling on the ImageIntegration process set up by NSG. Does anyone have reservations about this approach?

What I got was pretty similar to local normalization, but my data are fairly clean (6 hours of 3nm narrowband data from a QHY600). I'm trying a few iterations of the NSG configuration to see if I can improve SNR further.
 
One thing: I use a DSLR with a telescope, so the focal length in the FITS header for some reason defaults to exactly 50.0001 mm. In the NSG script, I can't change the focal length once it is in the FITS header, so I have to use an ImageContainer plus the FITS header script.
This happens because the camera expects the lens to report its focal length, and astronomical telescopes don't have this feature. The 50 mm value seems to be used as a default when the information is missing. For future projects, there are two ways to avoid having to alter the metadata afterwards:

1. Use the FITS format instead of the proprietary raw format of the camera.
In the capturing software (e.g. APT, SGP, NINA, etc.), set the output file format to FITS (NINA also supports the XISF file format). Then set the focal length (and other important parameters). These data will be written as metadata to the FITS header of each file. PixInsight will evaluate the FITS header and use this information.

2. Set the focal length (and optionally the aperture) in the RAW Format Preferences
Open the Format Explorer, double-click on "RAW" to open the 'RAW Format Preferences', enable the option 'Force focal length' and set the parameter 'Focal length'. The same holds for aperture.

In this way the information is either stored when the original file is saved (case 1) or corrected when the proprietary raw file is opened in PixInsight (case 2). There is no need to use a script or an ImageContainer to write the information to each individual file afterwards.
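If you want to double-check what actually ended up in a file, a quick look at the header outside PixInsight works too. A minimal sketch with Python and astropy (the keyword names FOCALLEN and XPIXSZ are common conventions, but your capture software may write different ones, and the file name here is made up):

```python
# Minimal sketch: inspect the focal length and pixel size keywords of a
# FITS file. Assumes astropy is installed; FOCALLEN/XPIXSZ are common
# keyword names but not guaranteed - check what your capture software writes.
from astropy.io import fits

header = fits.getheader("Light_001.fits")          # hypothetical file name
print(header.get("FOCALLEN", "keyword missing"))   # focal length in mm
print(header.get("XPIXSZ", "keyword missing"))     # pixel size in microns
```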

Bernd
 
This is a really amazing script! I spent the past week restacking my favorite images, and the difference is actually pretty noticeable!
Great! Good to hear! :)
One thing: I use a DSLR with a telescope, so the focal length in the FITS header for some reason defaults to exactly 50.0001 mm. In the NSG script, I can't change the focal length once it is in the FITS header, so I have to use an ImageContainer plus the FITS header script.
The advice given by Bernd sounds really good.
NSG gets the focal length from the chosen reference image, so an alternative is to just modify this image's FITS header.

Regards, John
 
I just tried this for the first time today after watching Adam Block's Fundamentals videos on it (nice videos, as always!)
Yes, Adam Block's videos are really good :) His free YouTube videos are also well worth watching.

I got drizzling to work by modifying the file paths in the xdrz files to reference the _r_nsg.xisf files instead of the _r.xisf files, and kept this easy by outputting the NSG files into the same directory. Then I was able to add the xdrz files and enable drizzling on the ImageIntegration process set up by NSG. Does anyone have reservations about this approach?
I am currently porting the script to C++ (it will also be renamed PhotometricLocalNormalization). This will allow it to work properly with drizzle.
If you can't wait (the port will take some time), you could try the solution described in an earlier post:
I think this would work, but be warned - it is a bit hacky!

Regards, John Murphy
 
2. Set the focal length (and optionally the aperture) in the RAW Format Preferences
Open the Format Explorer, double-click on "RAW" to open the 'RAW Format Preferences', enable the option 'Force focal length' and set the parameter 'Focal length'. The same holds for aperture.

Thanks so much! That worked beautifully!

William
 
John, would you expect the approach I'm following to do the right thing? Redirecting the xdrz files to the NSG files is pretty easy with sed on the command line, and ImageIntegration updates the elements of that file.
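For reference, a rough Python equivalent of that sed substitution is below. It assumes the registered-frame paths appear as plain text inside the .xdrz files, and the folder name and file suffixes are just the ones I happen to use:

```python
# Rough sketch, equivalent to the sed one-liner: make every .xdrz file
# reference the NSG output (_r_nsg.xisf) instead of the registered frame
# (_r.xisf). Assumes the paths are stored as plain text in the .xdrz files
# and that the NSG files were written to the same directory.
from pathlib import Path

for xdrz in Path("registered").glob("*.xdrz"):       # hypothetical folder
    text = xdrz.read_text()
    xdrz.write_text(text.replace("_r.xisf", "_r_nsg.xisf"))
```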

Also, could you help me understand the advantages of this approach compared to LocalNormalization? From your documentation, the impression I get is:
1) Using photometry instead of median brightness to determine a scaling factor per image grid element
2) Fitting scaling factors to a curve instead of doing each grid element individually (actually this is an assumption about LocalNormalization. I don't know if it actually does that)
3) Computing weight factors using some form of SNR analysis instead of noise. Is this akin to using SNRWeight in SubframeSelector?

I am comparing results to what I get with LocalNormalization and they are pretty close. I see a slight SNR improvement in the integration, and a slight SNR decrease in the drizzled result. Is this an appropriate metric to gauge improvements between the approaches?
 
John, would you expect the approach I'm following to do the right thing? Redirecting the xdrz files to the NSG files is pretty easy with sed on the command line, and ImageIntegration updates the elements of that file.
I think the drizzle process is rather more complicated than that. If you really want to use drizzle with NSG, you will need to follow the instructions I gave (see message #225).

Also, could you help me understand the advantages of this approach compared to LocalNormalization? From your documentation, the impression I get is:
1) Using photometry instead of median brightness to determine a scaling factor per image grid element
2) Fitting scaling factors to a curve instead of doing each grid element individually (actually this is an assumption about LocalNormalization. I don't know if it actually does that)
3) Computing weight factors using some form of SNR analysis instead of noise. Is this akin to using SNRWeight in SubframeSelector?

I am comparing results to what I get with LocalNormalization and they are pretty close. I see a slight SNR improvement in the integration, and a slight SNR decrease in the drizzled result. Is this an appropriate metric to gauge improvements between the approaches?
The main advantages of NSG compared to LocalNormalization are accuracy and robustness.
  • Measuring the scale factor from stellar photometry is very accurate. No other method used to determine the scale factor can compete with this.
  • Once the scale factor is known, the gradient can then be subtracted. The accuracy of this process is very dependent on the accuracy of the scale factor.
  • NSG uses the noise estimate stored in the NOISExx headers. ImageIntegration's 'evaluate noise' option does the same. However, the noise must be scaled before it can be used to create the weights. NSG is much more accurate (again, due to the photometrically determined scale factor).
  • NSG is robust enough to safely use on all images. LocalNormalization must be used with caution; it often introduces artifacts.
 
NormalizeScaleGradient v1.4
[Screenshot: NSG v1.4 dialog]


I have added a new check box: "Weight prefix". If selected, the output files are prefixed with the calculated weight. For example, if an input image named "filename.xisf" results in a weight of 0.894, the output filename would be: "w89_filename_nsg.xisf" This allows the files to be listed in order of weight in a folder browser. I have only included two decimal places to avoid long filenames. The FITS header contains the weight with a higher accuracy.
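For illustration only (this is not the script's actual code), the prefix is formed from the first two decimal places of the weight, roughly like this:

```python
# Illustration only, not the script's actual code: build the two-decimal
# weight prefix described above.
def weight_prefix(weight):
    return "w%02d_" % int(round(weight * 100))       # 0.894 -> "w89_"

print(weight_prefix(0.894) + "filename_nsg.xisf")    # w89_filename_nsg.xisf
```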

Example folder listing:
[Screenshot: example folder listing]


It also makes it easier to decide which images to reject within ImageIntegration:
[Screenshot: ImageIntegration 'Input Images' list]


John Murphy
 
After more testing, I have realized it would be helpful if the NSG corrected images were sorted by weight before adding them to ImageIntegration. I have therefore added a 'Sort by weight' option within NSG's ImageIntegration section:

[Screenshot: NSG's 'Sort by weight' option]


The reference image is always added as the first image.
All other images are then either sorted by weight or by (original) filename.

By checking 'Sort by weight', the images with the highest weights will be added to ImageIntegration first. This makes it easy to disable the images with the lowest weights. Select images at the end of ImageIntegration's 'Input Images' table, and use ImageIntegration's 'Toggle Selected' button.

[Screenshot: ImageIntegration 'Input Images' table]


John Murphy
 
John,
The new sorting is a great time saver. Thanks.

Can you advise if you have done some comparisons of resulting image quality with NWeight vs other image integration options?
Other than the improved gradient, which is a huge improvement, I am having trouble seeing, or measuring, which of my integrated images is best for post-processing. The attached file has the descriptions of the integrations and the quantitative results.

I am surprised the NSG script (Ver 1.3) does not provide a clear winner for my light-polluted data.

Attached are the measurements I did with PI processes & scripts on 4 different integrations of the same calibrated, cosmetically corrected, debayered and registered subs. I used Statistics, DynamicPSF, and Noise Evaluation - CFA Bayer.
If you, or someone else has any comments on this data and the screen shots of the 4 images I would appreciate it.

Roger
 

Attachments

  • DynamicPSF Stars Selected and Results to post.jpg
  • First two Images for Comparison.jpg
  • Second two Images for Comparison.jpg
The new sorting is a great time saver. Thanks.
:)
Can you advise if you have done some comparisons of resulting image quality with NWeight vs other image integration options?
Other than the improved gradient, which is a huge improvement, I am having trouble seeing, or measuring, which of my integrated images is best for post-processing. The attached file has the descriptions of the integrations and the quantitative results.

I am surprised the NSG script (Ver 1.3) does not provide a clear winner for my light-polluted data.

Attached are the measurements I did with PI processes & scripts on 4 different integrations of the same calibrated, cosmetically corrected, debayered and registered subs. I used Statistics, DynamicPSF, and Noise Evaluation - CFA Bayer.
If you, or someone else has any comments on this data and the screen shots of the 4 images I would appreciate it.
As you demonstrate here, there are many ways of calculating image weights, and each method's accuracy depends on different things. Methods range from highly consistent (getting a good answer all the time) to only occasionally doing well, with everything in between. In your test data set, it appears that all the methods you used produced equally good results. If you cannot see the difference in the stacked result, the difference is not significant.

Personally, I would be cautious about reading too much into the noise statistics you have produced. Accurately calculating image noise is a very tricky problem. I think PixInsight's noiseMRS method does a great job, but it will not be perfectly accurate.

I am confident that noiseMRS accuracy is better than 1 part in 2, but is it accurate to 1 part in 10? Or 1 part in 100 (2 decimal places)? Some testing I did suggests that the calculated noise may have a small dependency on the image's gradient. If this is true, then when two images have different gradients, small differences in the noise result could be due either to an actual difference in noise or to the difference in gradient.

Before noiseMRS results can be compared, they need to be scaled. This is also true for most of the other statistics you are looking at. At the very least, you should use NSG to normalize your 4 stacks before comparing the statistics.

Even if you normalize the stacked images, the noise values may still be misleading. A stacked image is created from registered images. Each registered image has been shifted by a fraction of a pixel. This introduces a smoothing effect, which varies depending on the size of the fraction. An integral shift (zero fraction) produces no smoothing. A shift of 0.5 pixels has a large smoothing effect. If a high weight is assigned to a smoothed image, the final noise in the stacked image will appear lower - it is the same as applying a bit of smoothing to the final stacked image. This does not make the stacked image better, but it will fool the noise estimate.
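To put a rough number on that effect, here is a worked example assuming simple bilinear resampling and independent pixel noise (real registration uses higher-order interpolation, so the exact figures differ, but the trend is the same):

```python
# Rough worked example (assumes bilinear resampling and independent pixel
# noise; real interpolation kernels give different numbers): a sub-pixel
# shift of (fx, fy) multiplies the noise standard deviation by
# sqrt(((1-fx)^2 + fx^2) * ((1-fy)^2 + fy^2)).
import math

def noise_factor(fx, fy):
    return math.sqrt(((1 - fx)**2 + fx**2) * ((1 - fy)**2 + fy**2))

print(noise_factor(0.0, 0.0))   # 1.0 - integral shift, no smoothing
print(noise_factor(0.5, 0.5))   # 0.5 - half-pixel shift halves the apparent noise
```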

Note that you can still use the noise estimate of the stacked image as a guide to how effective ImageIntegration's data rejection was, because changing the rejection settings does not alter the image weights or the registration smoothing, so the relative results remain valid.

I have not used the scaled noise evaluation script, but I would have thought that the Bayer CFA Version is not the right version for an image that has already been debayered.

Here is my personal run down of some of the methods used to calculate weights:

Number of stars
The number of detected stars will depend on the signal to noise ratio, but it also depends on:
  • how steady the atmosphere is
  • how good the guiding is
  • focus drift.
If all three of these dependencies remain constant for all images, the number of stars will correlate with the signal to noise ratio. This is more likely to be the case for undersampled images, where stars are typically smaller than a pixel.

I would use this method if I were imaging a star cluster, because it favors the images with the sharpest stars. However, it will not help bring out the detail in faint nebulae. For faint objects, it's shot noise that determines the level of detail (see https://en.wikipedia.org/wiki/Shot_noise).

ImageIntegration Noise evaluation
This calculates the weights from the scaled noise estimate. The noise is calculated from the images before registration or debayer interpolation, which improves the accuracy. It is calculated with noiseMRS and stored in the FITS header (NOISExx).

ImageIntegration has an algorithm that determines the scale factor for each image. The accuracy of the scale factor algorithm is critical, because the final weight depends on the square of the scale.

NormalizeScaleGradient
This also uses the noise estimates stored in the NOISExx FITS headers.
The only significant difference between NSG and ImageIntegration's noise evaluation, is the way the scale factors are calculated. NSG uses stellar photometry. The accuracy of the scale factor is critical, because the final weight depends on the square of the scale.
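As a rough sketch of the weighting idea (not the script's exact code; normalizing relative to the reference frame is my assumption):

```python
# Sketch only: if the target frame must be multiplied by 'scale' to match
# the reference flux, its noise estimate scales by the same factor, and the
# weight (taken here relative to the reference frame) is inversely
# proportional to the square of that scaled noise.
def weight(noise_ref, noise_tgt, scale):
    scaled_noise = scale * noise_tgt
    return (noise_ref / scaled_noise) ** 2

print(weight(noise_ref=1.0e-4, noise_tgt=1.1e-4, scale=1.05))   # ~0.75
```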


After NSG has completed, it displays ImageIntegration. You can change the method used to determine the weights. You should also check that the data rejection settings are what you want.
 
John,
First, thank you for your detailed explanations above. They help me, and I am sure others, with our learning and understanding of noise, integration, and even the effect of sub-pixel star registration on noise calculations.

I just ran NSG Ver 1.3 on the 4 integrated images (of the same 100 raw subs). I did not integrate the 4 images into a doubly integrated image. Here is the NSG Process Console summary:
***************
Summary

Using noise estimates from FITS header: NOISE00 NOISE01 NOISE02

[1], M101_NSG1pt3_integrated_100images, 0s, NWEIGHT: 1.0, Reference

[2], NormIntwNoiseEval_for_Normalization_Integration_100images_registered, 0s, NWEIGHT: 0.965 (0.987,0.952,0.957), Truncated 1.035 to 1.0

[3], RGB_MEDIANWEIGHT_Integration_all_100_registered, 0s, NWEIGHT: 0.930 (0.956,0.901,0.933), Truncated 1.029 to 1.0

[4], RGB_STARSWEIGHT_Integration_all_100_registered, 0s, NWEIGHT: 0.927 (0.955,0.893,0.933), Truncated 1.025 to 1.0
*****************
My conclusions & comments:
  1. Based on the above, I am now convinced your NSG script does give the best result. I think 7% better is significant! It may not be visible in the linear state, but after post-processing steps it may be much more visible.
  2. I would note that using the PI default of Noise Evaluation in ImageIntegration is better than the SubframeSelector values of Median or Number of Stars.
  3. It is my opinion that my raw data was bad enough that some of my images fooled the noise evaluation, but I had 100 images, so if 5 subs had the wrong weighting it is not going to hurt very much. But if I only had 20 subs, and 5 were fooled, then the result would be worse than the NSG script's.
  4. Do the NSG script results support any (general) conclusion that the background noise is higher for images with lower NWEIGHT values?
  5. I love the NSG script name. I hope you do not change it!
Your provided script documentation is very clear, but Adam Block's videos with real examples, and excellent instruction, are well worth the investment!

Roger
 
Is it possible to increase the size of the script window? With long file names it is impossible to pick your reference frame. A horizontal scroll bar would also work.
Thank you
 
My conclusions & comments:

Based on the above, I am now convinced your NSG script does give the best result. I think 7% better is significant! It may not be visible in the linear state, but after post-processing steps it may be much more visible.
Yes, I agree, 7% is significant. :)

It is my opinion that my raw data was bad enough that some of my images fooled the noise evaluation, but I had 100 images, so if 5 subs had the wrong weighting it is not going to hurt very much. But if I only had 20 subs, and 5 were fooled, then the result would be worse than the NSG script's.
Good point! I was being too pessimistic. Provided the stacked image's noise estimate is correctly scaled, it is useful.

Do the NSG script results support any (general) conclusion that the background noise is higher for images with lower NWEIGHT values?
NWEIGHT is inversely proportional to the square of the scaled noise.

Your provided script documentation is very clear, but Adam Block's videos with real examples, and excellent instruction, are well worth the investment!
His videos are very good. Well worth watching. I particularly like that he explains why, instead of just giving a recipe.
 
NWEIGHT is inversely proportional to the square of the scaled noise.
Very interesting and useful.

Is there a way to make the version of the script show up in the Process Console output? If I go back later I cannot determine which version I was running. Maybe I missed it. Perhaps add the version into the script file name, which is in the Console.

Thanks,
Roger
 
John,
I am a little twisted around on the Photometry Stars vs Sample Generation vs Gradient. Please open the attached .jpg screen capture.
1. Are the Photometry Stars (in the attached image) the 'potential' stars for gradient evaluation? Are these then correspondingly removed by all the red-circled stars/areas in Sample Generation?
2. There are points showing in the Horizontal Gradient (in the ellipse) that are within the galaxy. Should I be manually excluding the entire galaxy out to about X = 2700? I think it is best to manually exclude the entire galaxy.

It would be convenient to be able to have the Sample Generation window open at same time Gradient Path is open so we can visually see the location. Of course we can close one and open the other, then go back. Or make screen prints.... Just my thoughts.

Best regards,
Roger
 

Attachments

  • NSG Gradient and Sample Generationl question.jpg
1. Are the Photometry Stars (in the attached image) the 'potential' stars for gradient evaluation? Are these then correspondingly removed by all the red-circled stars/areas in Sample Generation?
The Photometry Stars are only used for stellar photometry to measure star flux. The reference star fluxes are then plotted against the target star fluxes, and a best fit line is fitted to the points. The scale factor is calculated from the slope (gradient) of this best fit line.

Stellar photometry measures both the total flux (inner rectangle; star + background) and the background flux (outer rectangle). The star flux is then calculated from these two measurements. It can therefore be accurately determined even if the star is in front of a galaxy or nebula.

The Photometry Stars are not used to determine the background gradient across the image.
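A minimal sketch of those two steps (Python with numpy, made-up numbers, and none of the refinements the script actually uses):

```python
# Minimal sketch of the two steps above: aperture photometry, then a
# best-fit line whose slope is the scale factor. Made-up numbers; the real
# script is considerably more sophisticated.
import numpy as np

def star_flux(total_flux, inner_area, bg_flux, bg_area):
    # Star flux = (star + background) in the inner rectangle minus the
    # background estimated from the outer rectangle, scaled by area.
    return total_flux - bg_flux * (inner_area / bg_area)

print(star_flux(1500.0, 81, 900.0, 144))     # flux of one star, background removed

# Paired star fluxes measured in the target and reference images.
tgt = np.array([120.0, 450.0, 980.0, 2100.0])
ref = np.array([126.0, 473.0, 1029.0, 2205.0])

slope, intercept = np.polyfit(tgt, ref, 1)   # best-fit line through the points
print(slope)                                 # ~1.05 -> the scale factor
```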

2. There are points showing in the Horizontal Gradient (in the ellipse) that are within the galaxy. Should I be manually excluding the entire galaxy out to about X = 2700? I think it is best to manually exclude the entire galaxy.
The relative image gradient is calculated from the generated sample squares. Bright point sources (stars) can cause problems, so rejection circles are used around the brightest stars. The samples within a rejection circle are not used to calculate the image gradient. These rejection circles only reject samples; they do not affect which stars are used for photometry.

Provided the scale factor (determined from the stellar photometry) is accurate, and a single scale factor is valid for the whole image, then it is actually desirable to have the sample squares cover the nebula or galaxy. Remember, we are not trying to find the background. We are only trying to measure the difference between the reference and target image. Hence samples over the nebula or galaxy help contribute to the relative gradient model.
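As a very rough sketch of that idea (this is not NSG's actual algorithm, just the principle of modelling the relative difference):

```python
# Very rough sketch of the principle only (not NSG's actual algorithm):
# each surviving sample square contributes the difference between the
# reference and the scaled target at its location; a smooth surface fitted
# through those differences is the relative gradient to subtract.
import numpy as np

# (x, y, median) for a few sample squares in reference and target (made up)
ref = np.array([[100, 100, 0.0100], [900, 100, 0.0120], [500, 800, 0.0110]])
tgt = np.array([[100, 100, 0.0110], [900, 100, 0.0100], [500, 800, 0.0090]])
scale = 1.05

offsets = ref[:, 2] - scale * tgt[:, 2]   # relative gradient sampled at (x, y)
print(offsets)
# Fitting a low-order 2-D surface (plane, polynomial or spline) through
# (x, y, offset) gives the relative gradient model across the whole frame.
```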

A single scale factor for the whole image may sound very restrictive, but it is usually valid. The scale factor depends only on the transmission through the atmosphere. It is completely independent of light pollution - for example, the scattered light from the Moon, the Sun (less than 18 degrees below the horizon), or artificial lights. So it is common to have very strong gradients but still only a single scale factor.

However, if you have slow moving, uneven clouds, then it is likely that a single scale factor will not be valid. I think this describes the challenging conditions that are typical for your location. If the gradient graph tracks the brightness of the galaxy or nebula, this usually means there is an error in the scale factor. In this situation, it may well be worth adding one or more manual rejection circle over the region that causes the gradient graph anomaly.

The other possibility is an inaccurate result from the stellar photometry. This can happen if a process has been applied that does not conserve flux. The gradient graphs are very sensitive and can show tiny deviations.

It would be convenient to be able to have the Sample Generation window open at same time Gradient Path is open so we can visually see the location. Of course we can close one and open the other, then go back. Or make screen prints.... Just my thoughts.
Yes it would! Unfortunately JavaScript is single-threaded, and I am not sure if it is possible to display two active dialogs at the same time. I plan to address this issue in the C++ version, which will allow multiple active windows.

Regards, John Murphy
 