NormalizeScaleGradient

Status
Not open for further replies.
Apologies if this is a dumb question, however, I am really confused.

Question regarding the new WBPP script and NSG. The new WBPP post-processing feature allows the channels to be split. Can NSG be applied to the split frames, or do the frames still need to be recombined and registered? Also, with all the new updates to NSG, is LocalNormalization becoming obsolete? If LocalNormalization is still useful, are there any recommended best practices?

Best,

Brian
 
NSG requires registered files. Running the registered R, G and B files through NSG separately has one advantage: you can then specify different parameters for each channel.
 
Hi John,
I have done a comparison of image integration (3 runs) using NWEIGHT, PSF Signal and PSF Power as weighting keywords during integration.
I had 77 color CMOS images of NGC 6888 taken over several nights, with a wide range of quality due to light pollution, air pollution/fog, and some clouds. This message compares only the Red channel from WBPP 2.3.1, which was set up to process and output R, G and B separately. Post-processing was registration only. I then ran NSG 1.4.4 on the red data.
After running NSG, the NWEIGHT values ranged from 1.19 to 0.05. I excluded all images below 0.20 before integration (18 of 77 excluded). I changed only the weighting keyword for each integration run.

Image Integration Process Console results:
NGC 6888 Red channel: comparison of ImageIntegration with different weighting keywords.

Process Console after ImageIntegration   NWEIGHT                       PSF Signal                    PSF Power
Scale estimates                          (1.285395e-03, 2.594523e-03)  (1.311604e-03, 2.584755e-03)  (1.301739e-03, 2.585896e-03)
Location estimates                       1.56E-01                      1.56E-01                      1.56E-01
Noise scaling factors                    6.94E-03                      6.90E-03                      6.92E-03
Scaled noise estimates                   6.58E-02                      7.12E-02                      7.00E-02
SNR estimates                            2.31E+02                      1.97E+02                      2.04E+02
PSF signal weights                       1.35E+03                      1.27E+03                      1.29E+03
PSF power weights                        6.15E+06                      5.51E+06                      5.65E+06
PSF fit counts                           9913                          10083                         10111

Looking at the three integrated images with the same stretch, I can barely see any differences at 3:1 zoom in the background and signal in several areas; I can only say I see something slightly different. From the above data it seems that NWEIGHT gives a better result than the others.

I would appreciate your comments or questions about the results and my methods. Your script is a real lifesaver for me, since it equalizes the backgrounds to match the image with the simplest gradient. Thanks!!!

Roger
 
Thanks for doing that analysis, Roger.

My question is whether the difference in PSF fit counts could be responsible for the differences in the other metrics. If PSF Power calculates statistics on about 200 more stars, are those fainter stars that would pull down some of the other quality metrics? Or could the fact that more stars were fit actually indicate superior weighting?
 
To test NSG's weights, I wanted to use real data in a scenario where the signal to noise ratio and the weight could be accurately predicted. The test results can then be compared with the predictions.

There happens to be one situation where we can predict both the signal to noise ratio and the weight. Provided the sky conditions stay stable, simply change the exposure time: if we expose for twice as long, the signal to noise ratio will be the square root of 2 times higher, and the optimum weight will be 2 times higher.

In my tests, I interleaved the long and short exposures so that I could check if the sky conditions had changed. I made a minor change to NSG so that it also writes its calculated scale factor to a FITS header (NSGS0 for channel zero). This makes it easy to use Blink statistics to output all the relevant headers. See the attached script. The NSGS scale factor should be the same as the exposure ratio. NWEIGHT should also be the same as the exposure ratio, but expect a slightly larger error margin because it also depends on the NOISExx noise estimate.

I am currently in a conversation with Juan to discuss how NWEIGHT, PSF Signal and PSF Power compare. However, he is quite busy at the moment, so it may be some time before he can look into this.
 
This update improves star detection. Previously some bright stars could be missed. This will now happen less often.

It also improves performance for images that have more than 2000 stars:
All reference image stars are detected. A central rectangle that includes 2000 reference stars is then used to limit the target image star detection. Approximately 500 reference-target star pairs are then used to calculate the scale factor. 500 stars is more than enough to get an accurate linear fit, so there is no loss of accuracy.
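The scale-factor fit over matched star pairs can be sketched as a least-squares line through the origin. This is an illustrative reconstruction in Python (NSG itself is a PixInsight JavaScript script, and its fit details may differ):

```python
def fit_scale(ref_fluxes, target_fluxes):
    """Least-squares scale factor k (a line through the origin) such that
    k * target_flux best matches ref_flux over the matched star pairs."""
    num = sum(r * t for r, t in zip(ref_fluxes, target_fluxes))
    den = sum(t * t for t in target_fluxes)
    return num / den

# Matched star-pair fluxes (made-up numbers): the target is half as
# bright as the reference, so the fitted scale factor is 2.
scale = fit_scale([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

With roughly 500 pairs, random photometric errors largely average out, which is why the restricted star set loses no accuracy.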

The scale factor is now written to the header (see the previous message).

This version will be included in the next PixInsight release.
 
Hello

I am using NSG with a set of images from an OSC camera, with split channels and the superpixel method in WBPP.

My question is about the image scale in NSG. Should I still use the actual camera pixel size, or 2x that value?

Thank you for your great work. I am looking forward to the C++ version.
 
It needs the logical pixel size, not the actual camera pixel size.

So, for example, if an image has 2x binning, the pixel size should be 2x the size of the actual camera pixels.

IntegerResample automatically updates the pixel size to be 2x the original. Hopefully the superpixel method does the same thing. If not, you will need to manually override the header entry and multiply it by 2.
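As a concrete sketch (a hypothetical helper, not part of NSG), the logical pixel size is simply the camera pixel size multiplied by the binning factor, with the superpixel method behaving like 2x2 binning:

```python
def logical_pixel_size(camera_pixel_um, binning=1):
    """Logical pixel size in microns after binning. The OSC superpixel
    method combines 2x2 camera pixels, so it counts as binning=2."""
    return camera_pixel_um * binning

# Example: 3.76 um camera pixels with superpixel debayering
size = logical_pixel_size(3.76, binning=2)  # 7.52 um logical pixels
```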
 
Ok, then I have to multiply by 2, as I already do in PCC for the plate solve to work.

Thank you again for your excellent work and support.
 
Happy Christmas / Festive season!
My current contribution to the astronomy community: The latest update for NSG (see attached zip), and up to date documentation (see link). Both of these have been uploaded to PixInsight and will therefore also be available in the next PixInsight release (-12).

Install this documentation to your PixInsight folder. For example, on Windows, the installed files should include:
C:\Program Files\PixInsight\doc\scripts\NormalizeScaleGradient\NormalizeScaleGradient.html
C:\Program Files\PixInsight\doc\scripts\NormalizeScaleGradient\images\...


I would also like to thank the 25 users that have 'bought me a coffee' (made a donation) at

https://ko-fi.com/jmurphy

It is good to know that my hard work is being appreciated. Many thanks, and enjoy Christmas / the festive season!
 
Hi John,

You may already know this but I just explored the relationship between the latest PSF Signal weights and NWEIGHT. PSF Signal Power is more closely correlated with NWEIGHT. This is using data from the Witch Head nebula dataset referenced in this message (with corrections to my interpretation in my followup message there).

John

                                 PSF Signal Weight    PSF Signal Power Weight
Correlation with NWEIGHT         0.9333               0.9826
Variance in NWEIGHT explained    0.8702               0.9652
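The "variance explained" row is just the square of the correlation coefficient (r²). A minimal sketch of that calculation, with the Pearson correlation implemented from its definition (the sample values here are invented):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship gives r = 1 and variance explained r**2 = 1.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.1, 4.2, 6.3, 8.4])
variance_explained = r ** 2
```

For comparison, the table's r = 0.9826 squares to about 0.9655, consistent with its 0.9652 up to rounding.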

PSFSignalWeight_vs_NWEIGHT.png
PSFSignalPowerWeight_vs_NWEIGHT.png
 
@jmurphy @Juan Conejero,

I investigated weighting a dataset by PSF Signal Weight, PSF Power Weight, and NWEIGHT on data that has been normalized by NSG. For comparison I also include weighting by PSF Signal Weight on data that has not been normalized with NSG. Here are the results:

Integration Weight         NSG normalization    PSF Signal Weight    PSF Signal Power Weight
PSF signal weight          no                   1,560.7              24,119,000
PSF signal power weight    yes                  1,605.0              25,132,000
PSF signal weight          yes                  1,612.4              25,015,000
NWEIGHT                    yes                  1,713.1              25,027,000

If the objective is to maximize the PSF Signal Weight of the integrated image, NWEIGHT performs best, by a considerable margin. All three weighting methods produce similar PSF Signal Power Weights in the integrated image.

John
 
Thanks, an interesting analysis of NWEIGHT and PSF weights (PixInsight -12 versions).

The NormalizeScaleGradient NWEIGHT has a single goal: to optimize the signal to noise ratio. (The reason I concentrate on this single measure is explained here: https://pixinsight.com/forum/index.php?threads/subframeselector-evaluation.17696/post-107507 . Note that there are situations where you might want a weight to depend on star profiles; see https://pixinsight.com/forum/index.php?threads/subframeselector-evaluation.17696/post-107511 .)

The NSG algorithm is based on the physics. Provided that the noise is dominated by shot noise, it will work in all situations (the physics does not change).
  • NSG determines the astronomical signal directly from stellar photometry. Measuring the astronomical signal in this way excludes light pollution, which would otherwise invalidate the result.
  • It uses the PixInsight calculated noise estimate, derived from calibrated but unregistered images. It is really important that the noise estimate comes from unregistered images, because registration has a smoothing effect on the noise that is not consistent between images.
  • NWEIGHT is then the square of the signal to noise ratio: ((Astronomical signal)/(PixInsight noise estimate))^2.
You can check that the square of the signal to noise ratio works with the following thought experiment:
  • Take a 1 minute and 4 minute exposure in identical conditions.
  • The relative signal to noise ratio of each image will be the square root of the exposure time (1 and 2). This relationship holds true provided that the noise is dominated by shot noise.
  • We calculate the weight by squaring the signal to noise ratio. We then get weights of 1 and 4. We can see that this is correct. The 4 minute exposure must be worth four times as much as the 1 minute exposure.
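The thought experiment above reduces to a one-line formula. A minimal sketch (illustrative, in Python rather than the script's JavaScript):

```python
import math

def nweight(signal, noise):
    """NWEIGHT as defined above: the squared signal-to-noise ratio."""
    return (signal / noise) ** 2

# Shot-noise dominated: the signal scales with exposure time t and the
# noise with sqrt(t), so a 4-minute exposure gets 4x the weight of 1 minute.
w1 = nweight(1.0, 1.0)             # 1 minute (normalized units)
w4 = nweight(4.0, math.sqrt(4.0))  # 4 minutes
```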
The NWEIGHT accuracy depends upon the PixInsight noise estimate, and the accuracy of the stellar photometry. The stellar photometry is performed on registered images, which has the advantage that we can be sure we are comparing the same stars in the reference and target images. However, the disadvantage is that the registration process will not fully conserve star flux, which will introduce an error. Is this error significant?

The best way to test this is to take exposures of different lengths in identical conditions. From the thought experiment, the relative noise should be proportional to the square root of the exposure time ratio, and the signal proportional to the exposure time ratio. If the signal error is smaller than the noise error, then using registered images is OK. My own tests indicate that it is, but you should perform your own tests. NSG writes the calculated scale factors to the FITS headers NSGS0, NSGS1 and NSGS2; the PixInsight noise estimates are in NOISE00, NOISE01 and NOISE02. You can extract these header values using Blink statistics.
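A sketch of that check, treating each frame's headers as a plain dict of keyword-value pairs (for example, as exported from Blink statistics). The keyword names are those given above; the numeric values are invented:

```python
def check_scale_vs_exposure(ref_header, target_headers):
    """Compare each frame's NSG scale factor (NSGS0) with its exposure
    ratio relative to the reference frame. Returns (expected, measured)
    pairs; the two should agree if the sky conditions were stable."""
    t_ref = ref_header["EXPTIME"]
    results = []
    for hdr in target_headers:
        expected = hdr["EXPTIME"] / t_ref
        results.append((expected, hdr["NSGS0"]))
    return results

ref = {"EXPTIME": 60.0}
targets = [{"EXPTIME": 120.0, "NSGS0": 1.98}]  # invented measurement
report = check_scale_vs_exposure(ref, targets)
```

Here the measured 1.98 is within about 1% of the expected 2.0, which would suggest stable conditions.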

The PSF weights are calculated using a very different strategy and use different algorithms. I would therefore expect the relationship between PSFSignalPower and NWEIGHT to depend on the data set. PixInsight has always tried to provide the user with plenty of choice. Once PixInsight -12 is released, you will be able to choose from 3 different algorithms (PSFSignalWeight, PSFSignalPowerWeight, NWEIGHT), which must be a good thing. :)
 

As I have already said, PSF Signal Weight and PSF Signal Power Weight use different paradigms to estimate the signal-to-noise ratio: PSFSW uses the inverse coefficient of variation paradigm, while PSFSPW uses the ratio of powers paradigm. The two methods are incompatible in their interpretation of the data, so comparing them makes no sense IMO. From John's description, his NSG method also applies the ratio of powers paradigm. Whether one of these methods is more efficient or applicable than the other is open to discussion; my tests with many different data sets seem to show that it depends on the properties of the data being evaluated.

The goal of the new PSF signal weighting methods is to provide comprehensive and accurate image quality estimators tightly integrated with PixInsight's image preprocessing pipeline, from image calibration and demosaicing to image integration and drizzle integration, including all intermediate tools and scripts, such as SubframeSelector, WBPP and others. The new algorithms attempt to be sensitive to a wide variety of image quality variables, as your regression analysis has shown. I disagree with the idea that only SNR is important. While it is evident that SNR is crucial, other metrics cannot be overlooked to achieve an optimal integrated result. This includes PSF dimensions and profiles, which the PSF signal weighting algorithms incorporate in their generated estimates.

I would include LocalNormalization in your tests. The version of this process included in PixInsight 1.8.8-12 has several improvements that make it able to perform very accurate and robust normalizations, including the possibility to separate the additive and multiplicative components of the normalization function much better than previous versions. Finally, I would also include SNR estimates, since they implement the inverse noise variance maximum likelihood estimator, which is robust when applied to an integrated image after proper outlier rejection and normalization.
 
Thank you both for your detailed replies. I admit to having taken a fairly naive, empirical approach to determining which weighting method might produce the best results. I apologize where this "makes no sense".

Here I extend the previous table to include an evaluation of SNR in the integrated image. These SNRs were calculated by SubframeSelector, and the process did give the following warning, so maybe the numbers are invalid: "Warning: Noise estimates are not available in the image metadata and are being calculated from possibly non-raw or uncalibrated data. Image weights can be wrong or inaccurate." I am unsure why it says that noise estimates are not available, since ImageIntegration did evaluate noise and inserted the keywords NOISE00, NOISEL00, and NOISEH00 into each of the images.

When I have a chance I will also evaluate the use of LocalNormalization as an alternative to NSG.

Integration Weight         NSG normalization    PSF Signal Weight    PSF Signal Power Weight    SNR
PSF signal weight          no                   1,560.7              24,119,000                 2634
PSF signal power weight    yes                  1,605.0              25,132,000                 2106
PSF signal weight          yes                  1,612.4              25,015,000                 1958
NWEIGHT                    yes                  1,713.1              25,027,000                 2186
 
John,

If I understand the truncation issue correctly, the safer way to correct it is division, so that there is no danger of clipping the target image at the low end. Do you think it might be useful to add a parameter to the target image section, called "clipping protection" or similar, and for NSG to automatically apply this to the target image as a divisor? This would avoid users having to take a pre-processing step with PixelMath, and could offer a default that is likely to work in most circumstances, so that users don't have to guess at an appropriate value.

Practically speaking, to date I have not bothered to try to avoid truncation, mainly because I don't realize it is necessary until after NSG has run and I don't want to bear the computational expense of running it again. But it seems like my star cores might be slightly better if I started using it routinely. Automating this would certainly make that easy.

Happy New Year!

John
 
NormalizeScaleGradient bug fix:

Previously, if the target files list was populated, the horizontal scroll bar appeared when needed, but if NSG was then closed and reopened, the scroll bar was not displayed, even when it was needed. This is now fixed.

NSG script is now attached to message #340:
 
This is an interesting idea.

At the moment it is up to the user to use PixelMath on the reference image to provide the extra headroom before normalization; for example, multiply the reference image by 0.9 ($T * 0.9). It may be worth always doing this pre-processing step no matter which normalization method is chosen (I suspect all normalization methods can cause truncation; they just don't output any warnings).
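The effect is easy to demonstrate with a toy model (illustrative only; real normalization also involves gradient correction). Scaling up a target whose star cores are near 1.0 truncates them, while applying a 0.9 headroom factor avoids it:

```python
def scale_and_clip(pixels, scale):
    """Scale pixel values and clip to [0, 1], as happens when normalized
    data is written to a bounded image format."""
    return [min(1.0, max(0.0, p * scale)) for p in pixels]

star = [0.50, 0.95, 1.00]               # bright star profile near saturation
clipped = scale_and_clip(star, 1.1)      # core truncated at 1.0
safe = scale_and_clip(star, 1.1 * 0.9)   # 0.9 headroom: no truncation
```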

I could add an extra parameter to do this, but I think I would need to check that the reference image is included in the target list before enabling the field. I will think about it for the next version.
 
Minor bug fix...
The NSG weight prefix was designed to allow the normalized files to be listed in weight order in a directory listing. This only worked for weights between 0.99 and 0.10 because I was not adding leading zeros. This has now been fixed and has been submitted to PixInsight.

The attached script also includes the horizontal scrollbar bug fix.
 