NormalizeScaleGradient

Thank you for this script. It seems like a move in the right direction: characterizing the signal in some way to get the right scale.
In the case of variable thin clouds... would it be reasonable (any benefit) to create a small stack of images that represent the cleanest of the bunch (with whatever natural fixed gradient/flat errors they have) and use this as a reference instead of a single frame?

-adam
 
I have attached NormalizeScaleGradient V0.5

I have modified the tooltip / warning dialog advice on setting the gradient smoothness level. For values less than zero the following warning dialog is displayed:

To remove light pollution gradients, the gradient smoothness should be set to between 0.0 and 4.0.
Lower levels of smoothing can be used to correct complex gradients, but check the normalized files and the 'curvature' files carefully.
If any signs of the galaxy / nebula or bright stars are visible in the 'curvature' images, either increase the gradient smoothness or add rejection circles over the brightest parts of the image. To add rejection circles:

Open the 'Sample Generation' dialog and ctrl click on the image to add a 'Manual Sample Rejection' circle.
In the 'Manual Sample Rejection' section, adjust the circle radius. For elongated objects, several overlapping circles can be used.
Select 'Ignore' to continue or 'Cancel' to return to the main dialog.


Unless errors are found, this is likely to be the last update for a while. I need to write the help document ...
Regards, John Murphy
 
This is really a super impressive script. I have used this on an example data set of widefield images. With a 4 degree FOV, the sky gradient due to airmass/extinction and airglow is present even under photometric conditions, especially when looking low in the sky. On top of that, when using a GEM this gradient rotates relative to the frame as the night progresses, since the sky gradient changes over time, especially when flipping across the meridian.

Regarding the choice of normalization in ImageIntegration: for the *output* normalization (the first one), it certainly makes sense for this to be 'none'.
For the normalization for rejection, I wonder if leaving the default is OK? The normalization done for pixel rejection isn't going to undo the work of your script, and I don't see how there would be a bad interaction if your script does its job; it is just used for the calculation of outliers. I just wonder if there is a reason to keep this in place (but I am not smart enough to know).

-adam
 
I am currently writing the help file. I hope to get the first releasable version finished within the next few days. I intend to provide a detailed description of the concepts behind the script.

The primary purpose of the script is to improve data rejection. The ImageIntegration 'normalization for rejection' runs very quickly, but the default option does not handle gradients. An alternative is to use 'Adaptive normalization'. This does handle gradients and runs quickly, but it will not always produce good results because it is solving an 'ill-defined' problem, with not enough information to provide a single solution. I wrote NormalizeScaleGradient to solve it as a 'well-defined' problem; photometry provides the extra information that does the trick. I would recommend setting the ImageIntegration rejection normalization to none, because even though the gradient will already have been removed, the script calculates the photometric scale more accurately than the fast algorithm built into ImageIntegration.

Why is 'normalization for rejection' so important? Ideally we want to reject all the hot pixels, cosmic ray strikes and satellite trails without rejecting any of the real data. The real data often looks like noise (shot noise due to the random arrival of photons) but we don't want to reject any of it. With more accurate 'normalization for rejection', we can get closer to the ideal.
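
As an illustration, here is a minimal sketch of per-pixel outlier rejection in plain JavaScript. This is not the script's own code and not ImageIntegration's actual rejection algorithm; the scale and offset arrays are made-up stand-ins for each frame's normalization relative to the reference. The more accurate those values are, the tighter the rejection threshold can be without clipping real data.

// Minimal sketch of per-pixel outlier rejection across a stack of sub-frames.
// 'values' holds one pixel position's value in each sub; 'scale' and 'offset'
// are hypothetical per-frame normalization factors.
function rejectOutliers(values, scale, offset, kSigma) {
    // Bring every frame onto the reference frame's flux scale and background first.
    const normalized = values.map((v, i) => v * scale[i] - offset[i]);

    const median = arr => {
        const s = [...arr].sort((a, b) => a - b);
        return s[Math.floor(s.length / 2)];
    };

    // Robust sigma estimate from the median absolute deviation (MAD).
    const m = median(normalized);
    const sigma = 1.4826 * median(normalized.map(v => Math.abs(v - m)));

    // true = keep, false = reject as an outlier (hot pixel, satellite trail, ...).
    return normalized.map(v => Math.abs(v - m) <= kSigma * sigma);
}

// Example: the third sub has a satellite trail at this pixel (0.90).
console.log(rejectOutliers(
    [0.21, 0.20, 0.90, 0.22, 0.19],    // pixel value in each sub
    [1.00, 1.05, 0.98, 1.02, 1.00],    // per-frame scale factors
    [0.00, 0.01, -0.005, 0.00, 0.002], // per-frame background offsets
    3.0                                // rejection threshold in sigmas
));
// -> [ true, true, false, true, true ]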

The secondary purpose (nice to have, but less important) is to reduce the gradient in the stacked image to the gradient in the best image.

Is there a downside to using NormalizeScaleGradient for all images? Provided that the gradient smoothing is not reduced too much from its default value (keep it above 0.0), the only downside will be the extra processing time. The default level of smoothing is 2.0, which is extremely safe but will still remove the vast majority of the gradient.
 
I just tried this script, but when trying to integrate the output from the script I ran into the following error in ImageIntegration:

*** Error:

*** Error: Incremental file reads have been enabled, but they cannot be performed on this file: /Volumes/Astro/ASI 533/Astro captures/Pinwheel galaxy - M101/WBPP/gradCorrected/gradient_B_Light_2021-04-03_1x1_180sec_-10C_001_c_cc_d_r_nsg.xisf

<* failed *>


This was from ImageIntegration, not the NormalizeScaleGradient script, but since ImageIntegration only fails on the files created by the script, I'm guessing that the script does something to the output files that is different from other processes. That is, feeding the output of StarAlignment directly to ImageIntegration works fine, but feeding the output of StarAlignment to NormalizeScaleGradient, and the output of that to ImageIntegration, fails as above. Any clue as to what I might be doing wrong or what setting in PI I need to change?
 
I see the file has a 'gradient_' prefix. These are small images that indicate the gradient that was removed. They should not be input into ImageIntegration! I should probably change the default settings so these files are only created if the user asks for them.
 
Yes to all the above, but I think the problem is mine. For ImageIntegration I loaded all files from the script output directory. I took a look in that directory, and there are a lot of gradient_R*, gradient_G*, and gradient_B* files, each of which is in the 300-400 KB range. Looking at them, they appear to be gradient maps, so I'm guessing that they're intermediate files used in the processing?

I re-ran ImageIntegration using just the Light*_nsg files, which are a much more reasonable 109 MB in size. After selecting those files, ImageIntegration ran fine. That should teach me not to blindly do a "select all" when loading files...
 
Version 0.7
User interface bug fix: 'Auto' values in the main dialog could be displayed incorrectly in some circumstances. This only affected the user interface; the normalization used the correct values.

I have also added a 'Weight keyword' text field which specifies the name of the FITS header entry used to store the normalized image weight. It defaults to NSGWEIGHT.
 
Version 0.8
I have made a very significant improvement to the 'Weight keyword'. It now bases the weight on noise evaluation. My tests indicate accurate results, even when imaging through light clouds.

Several people have mentioned on this forum that the ImageIntegration 'Noise evaluation' can assign high weights to images that were plagued with clouds. I have also experienced this with my own data. The reason for this is:
(1) PixInsight uses an excellent algorithm for measuring noise. I believe this to be working correctly. However, before the returned noise value can be used, it has to be scaled. For example, suppose you take two identical images and multiply one of them by 2. The noise has doubled, but the signal to noise remains unchanged. Hence the need to scale the noise value. ImageIntegration understands this, and it calculates a scale factor for the noise estimate. This scale factor is independent of the normalization settings.
(2) ImageIntegration uses a fast and easy-to-use algorithm to calculate the scale. It is trying to solve an 'ill-posed' problem: it can be ambiguous how much of the correction should be done by scaling and how much by an offset.
(3) If the image has a larger background level (i.e. imaging through light clouds), the measured noise will depend on how this background level is corrected. If the light pollution is subtracted, the noise measurement will be correct. If it is scaled instead, the noise measurement can end up being a significant underestimate. An accurate scale is therefore critical in these situations to determine the noise, and hence the image weight. ImageIntegration usually does extremely well, but it cannot be expected to always get it right in these situations without a much more CPU-intensive algorithm. A toy numerical sketch of this follows below.
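
The sketch below uses made-up numbers and is not ImageIntegration's actual algorithm; it simply shows that correcting a cloud-brightened background by subtraction leaves the measured noise intact, while correcting it with a scale factor divides the measured noise by that factor.

// Toy numbers for point (3); this is not ImageIntegration's actual algorithm.
// The reference frame has a median background of 0.10 and noise sigma 0.01.
// A cloudy frame has the same signal and noise, plus an extra sky offset of 0.10.
const refMedian = 0.10, sigma = 0.01, skyOffset = 0.10;

// Correct fix: subtract the extra offset. Subtraction leaves the noise unchanged.
const noiseAfterSubtraction = sigma;                      // 0.010

// Wrong fix: divide by the factor that matches the two medians
// ((0.10 + 0.10) / 0.10 = 2). The noise is divided by the same factor.
const scaleFactor = (refMedian + skyOffset) / refMedian;  // 2
const noiseAfterScaling = sigma / scaleFactor;            // 0.005

// The scaled frame appears to have half the noise, so a noise-based weight
// would give the cloudy frame roughly twice the weight it deserves.
console.log(noiseAfterSubtraction, noiseAfterScaling);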

NormalizeScaleGradient calculates the scale very accurately (at the expense of much CPU ...), and can therefore save an accurate MRS noise based weight keyword in the FITS header.

I use PixInsight's MRS noise evaluation method:
"Estimation of the standard deviation of the Gaussian noise from the multiresolution support."

To use the NWEIGHT keyword in ImageIntegration, set:
Weights: 'FITS keyword'
Weight keyword: 'NWEIGHT'

I have also added a summary at the end of the batch run:
[Attached screenshot: batch run summary]


Regards, John Murphy
 
Hi John,
I really like this script. I was introduced to it by Adam Block Studios, in the Fundamentals full-processing video on Tau Canis Majoris. I hope you are a subscriber to his valuable library. Thank you Adam.

I often have varying gradients due to light pollution, local light sources, high humidity, air pollution, and some passing clouds; the gradients are not so much caused by low altitude. But what can I do? There is no good site where I can go! I like the new NWEIGHT keyword that will give weights that are not sensitive to my varying background. I am glad it can work with my cooled color CMOS camera. Thanks for your hard work!

I have downloaded and run both Ver 7 and ver8 of NSG. Both have the same (relatively small) issue where the instance of the script will not run.

Background....
I am processing the M101 galaxy, and have 100 subframes calibrated, CC, debayered and registered. There is no nebulosity, just M101 and a couple of other smaller galaxies.
Due to the long processing time (30 minutes), I first picked about 8 images to find the settings (mostly gradient smoothness) best suited to my bad and varying gradients. It also takes time to carefully examine and manually reject some stars (that were not automatically rejected), and the galaxy core and arms (which are not really visible in one subframe).

After running the trial of 8 subframes I wanted to re-use the script. I have to exit the script (as is normal with scripts) to see the results in Blink, so I dragged an instance of the script to the PI desktop and renamed it, ready for adding the 100 subframes later.

I can open the instance of the script, but Apply Global returns the error in the Process Console:
... ... NormalizeScaleGradient/lib/NsgData.js/, line 365: Error: Parsing integer expression error: conversion error:
659.20001
Both Script Ver.7 and Ver.8 had this result.

In the opened instance of the script I can see the list of manually rejected stars, so the script is saving all the custom settings I made. Great.

I am not conversant in scripts and have only used a few. Perhaps I did not do something correctly?? Anyway, I hope this can help. I will start my NSG again, and further tune for processing my subs.

Thanks,
Roger
 
Thanks for reporting this.

I am guessing that your focal length is not an integer value. I was not expecting that! I will modify the script to cope with this.

A quick fix you can try until then:
(1) Double click on the process icon.
(2) In the 'Script' dialog, double click on the 'focalLength' entry
(3) Change the 'Value' to an integer and select OK
The process icon should then run.

[Attached screenshot: 'Script' dialog showing the focalLength entry]


Let me know if this works
Thanks, John Murphy
 
Hi John,
Yes, removing the focal length decimals fixed it. Somehow they were in the FITS header.
The script will now global execute and show everything the way I had set up. So this is perfect!
Thanks to you,
Roger
 
Hello John, super script. Your comments on image scale during noise evaluation for integration weights were very interesting. I took 5 quite dim and noisy images, each taken over different nights, measured the noise (with std Sn scaling), and then applied ABE with and without the normalize option selected. The ABE was applied purely for noise measurement, not intended as a source image for integration. The noise values were indeed quite different and would result in different weights in ImageIntegration. Do you think this would also be a valid way to improve the standard integration weightings?
[Attached screenshot: noise measurements with and without ABE normalization]
 
It's an interesting idea, but ...

NormalizeScaleGradient does the following:
  1. Scales the target image to match the reference image scale.
  2. Subtracts the relative gradient from the target image to match the reference image.
  3. Calculates the noise and hence the weight.
To calculate the noise accurately, the critical step is (1). For example, if the applied scale was 2 times bigger than it should be, the measured noise will be double what it should be.

The order is also important. If the relative gradient was subtracted before the scale was applied, the scale correction would introduce a new gradient that tracks the image brightness. The root cause is that the subtracted relative gradient would be wrong because the scale factor had not yet been applied to the target image.
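
To make the ordering concrete, here is a single-pixel sketch in plain JavaScript with made-up numbers. This illustrates the arithmetic only; it is not the script's internal code.

// Single-pixel sketch of why the order matters (made-up numbers).
const target = 0.30;           // target pixel value
const scale = 1.20;            // photometric scale from target to reference
const relativeGradient = 0.05; // gradient difference, measured after scaling

// Order used by NormalizeScaleGradient: scale first, then subtract the gradient.
const correct = target * scale - relativeGradient;   // 0.31

// Wrong order: subtract first, then scale. The gradient term gets multiplied
// by the scale factor, leaving a residual that varies across the image.
const wrong = (target - relativeGradient) * scale;   // 0.30

console.log(correct - wrong); // residual ≈ relativeGradient * (scale - 1) = 0.01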

Using ABE / DBE
If a noise-measuring process adjusts the background level by applying a scale factor, and the true scale factors of the images are all similar but the images have different background levels, then the noise will be evaluated incorrectly. In this scenario, applying ABE or DBE first may reduce the amount of (invalid) scale applied, which would reduce the inaccuracy. But if the scales of the original images were significantly different, the result could be worse.

Conclusion
For noise evaluation to be accurate, all the images you wish to compare must first be accurately normalized to a reference image (for example, use NormalizeScaleGradient ;)).

Regards, John
 
Version 0.9
This version has the following improvements:
  • The focal length read from the FITS header is rounded to the nearest integer. This fixes a potential problem when executing a ProcessIcon.
  • The 'Auto' calculated sample size has been increased. This helps to reject satellite trails from the surface spline model. It also reduces scatter when plotting sample points in the gradient graph.
  • When displaying a dialog, it now defaults to displaying the selected target image instead of the reference (i.e. the 'Reference' toggle button now defaults to off).
Regards, John Murphy
 
Hi John,
Kindly advise the meaning/ significance of Truncated showing in process console:
M101_-10C_600s_G10Off70-0080IR_c_cc_d_r: NWEIGHT 0.88308 : Image range -0.10485021024942398 to 1, Truncated.
OR
M101_-10C_600s_G10Off70-0015IR_c_cc_d_r: NWEIGHT 1.1297 : Image range 0 to 1.0627018213272095, Truncated

I picked my reference image to be the one with the most consistent gradient (the one easiest to apply DBE to after integration), not one with a high background.

Approximately 50% of my results indicate Truncated.
Thanks,
Roger
 
The normalized target image exceeds PixInsight's allowed range of 0.0 to 1.0.
The script cannot correct this by adding a constant or scaling the image, because that would undo the normalization. So the only option the script has is to truncate the image to the 0.0 - 1.0 range.

To fix it:
  • If the target image range is negative (for example, -0.105), you can fix it by using PixelMath to add a pedestal to the reference image, for example 0.106. You should probably leave it though; see post #41.
  • If the target image range exceeds 1.0, you can either let it truncate (it will probably only affect bright stars), or you can scale the reference image, for example by multiplying it by 0.9.
Then run NormalizeScaleGradient with the modified reference image and the original target images.

But, there is a snag ...
If you add a constant to a registered image, the black area around the image will no longer be black. This may cause the gradient model to fail. So adding the pedestal to the reference image requires a PixelMath inline if expression:

iif($T == 0, 0, $T + 0.106)

If the pixel is equal to zero, leave it at zero; if it is not zero, add 0.106.

Hope this helps, John Murphy
 
Hi John,
I understand.
On the less than zero side, I believe this is coming from the black border showing after image alignment. The minimum value in the _nsg image area is well above 0.1.
 
NormalizeScaleGradient does not modify black pixels, so unless there is a bug, it is not coming from the black border.

I believe that the negative values turn up due to cold pixels in your images. For example, when I run your process icon with the data you sent me:
Image M101_-10C_600s_G10Off70-0080IR_c_cc_d_r
A cold red pixel at (483, 406) ends up with a negative value that was then truncated. This also happens to other cold pixels in the image.

I used the following PixelMath expression to find it:

iif(M101_10C_600s_G10Off70_0080IR_c_cc_d_r == 0, 0, iif($T == 0, 1, $T))

(the '$T' target image is the normalized M101_-10C_600s_G10Off70-0080IR_c_cc_d_r_nsg file). Undo / Redo then shows the affected pixels flashing from dark to white.

Since the low truncation is happening to cold pixels, it is not a concern.

Regards, John Murphy
 