NormalizeScaleGradient 1.4.3

jmurphy

PTeam Member
Jun 13, 2010
306
141
Basingstoke, England
No errors that I can see, just warnings about overwriting processed files, and the fact that there are no real corrections going on for normalization since the background is pretty clean. I can still use the weights, though, and those seem good. I knew ImageIntegration only opened up after exiting, but no luck on that so far.
Something strange is happening. The console output includes the text NormalizeScaleGradient V1.0
I wonder if you have a mixture of v1.0 and v1.1 JavaScript source files in the NormalizeScaleGradient folder (and its sub folders)?
 

johnpane

Well-known member
Jan 13, 2015
168
8
57
Wexford, PA, USA
John, will NSG work in a CFA Drizzle workflow for one-shot color images? In that workflow the CFA images are deBayered and go through the normalization, pixel rejection, and weighted integration as an interim step, during which information is saved for re-use in DrizzleIntegration. DrizzleIntegration then uses the original CFA files as inputs. It is not clear to me whether or how the normalization of the subframes would carry through to the DrizzleIntegration part of the process. Can you clarify if it does? If not, do you see that as a feasible extension?
 

jmurphy

John, will NSG work in a CFA Drizzle workflow for one-shot color images? In that workflow the CFA images are deBayered and go through the normalization, pixel rejection, and weighted integration as an interim step, during which information is saved for re-use in DrizzleIntegration. DrizzleIntegration then uses the original CFA files as inputs. It is not clear to me whether or how the normalization of the subframes would carry through to the DrizzleIntegration part of the process. Can you clarify if it does? If not, do you see that as a feasible extension?
NSG is not currently compatible with drizzle. To make it compatible, NSG needs to output .xnml files. This is not easily done from JavaScript. Hence, this functionality will have to wait until I have ported NSG to C++. That needs doing anyway for performance reasons.

Is there a way of getting the current script to work with drizzle? The short answer is no, there is no good way of doing this. I have not tried it, but there might be a very hacky way of doing it. This hack (if indeed it works) would require mono files, so you would need to extract the R, G, B from your color camera.
  • If color, extract R, G, B from CFA
  • If there is a meridian flip, use 'FastRotation' to rotate the images by 180 degrees either before or after the flip.
  • Run NSG on the unregistered files. To make this work, increase the 'Photometry Star Search' -> 'Star search radius' to its maximum (20). Provided the shift between images is less than this search window, it should still manage to match the ref / tgt stars. Reduce the 'Star flux tolerance' to reduce the risk of invalid star matches; try 1.2.
  • Use the 'Photometry stars' dialog to check that the ref / tgt stars have been correctly matched. Test on the image with the greatest shift from the chosen reference image.
  • Increase the sample size to 1.5 times the 'Auto' default size.
  • Only use smooth gradient corrections. The default of 2 should work well. Don't use less than 1.0.
  • Deselect 'ImageIntegration'.
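The ref / tgt star matching in the steps above (position within the search radius, flux ratio within tolerance) can be sketched as a simple nearest-neighbour search. This is only an illustration of the idea; the function name and data layout are mine, not NSG's implementation:

```python
def match_stars(ref_stars, tgt_stars, search_radius=20, flux_tolerance=1.2):
    """Pair reference and target stars by position and flux ratio.

    Each star is a (x, y, flux) tuple. A target star matches a reference
    star if it lies within search_radius pixels and the flux ratio
    max(f1, f2) / min(f1, f2) is within flux_tolerance.
    """
    matches = []
    for rx, ry, rflux in ref_stars:
        best = None
        best_d2 = search_radius ** 2
        for tx, ty, tflux in tgt_stars:
            d2 = (tx - rx) ** 2 + (ty - ry) ** 2
            ratio = max(rflux, tflux) / min(rflux, tflux)
            if d2 <= best_d2 and ratio <= flux_tolerance:
                best, best_d2 = (tx, ty, tflux), d2
        if best is not None:
            matches.append(((rx, ry, rflux), best))
    return matches
```

With a shift of a few pixels between unregistered frames, stars pair up as long as the shift stays inside the search radius; a tighter flux tolerance then rejects chance alignments with stars of very different brightness.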
Then, do your drizzle processing as normal, starting by registering the _nsg files with 'Generate drizzle data' selected. Make sure you set up the ImageIntegration process with 'No normalization' and NWEIGHT. Note that DrizzleIntegration will then go back to the unregistered, but scale and gradient corrected, _nsg files.

As you can see, it is a very hacky solution... It is probably better to wait for the C++ version that will do the job properly.

If anyone tries this, do let me know if it works!
 

jmurphy

This new version (Beta5) displays the reference image's filter name.
If a target image filter name does not match, that row is shown in red.
The console summary will now also display warnings if one or more filters did not match.
There are also a few very minor bug fixes.

[Beta6 attached - I have improved error handling if target files cannot be read. Unless errors are found, this is the final version 1.1]

[The script has now been released and is available as an update, with updated documentation.]
 

KevinB

New member
Apr 11, 2020
3
0
Something strange is happening. The console output includes the text NormalizeScaleGradient V1.0
I wonder if you have a mixture of v1.0 and v1.1 JavaScript source files in the NormalizeScaleGradient folder (and its sub folders)?
Thanks John,

Your reply pointed me in the right direction... the original install must have messed up somehow. I removed the script and reinstalled it, and all is well now.

Kevin
 

jmurphy

Does the script work for uneven gradients from clouds?
Yes, it will help to reduce their impact.
Clouds affect the image in two ways:
  1. Reflect light pollution
  2. Reduce transmission
NSG currently assumes that the transmission is reduced evenly for the whole image. This will be true if the clouds move across the frame. Even when they don't, the variation in transmission usually does not cause too much trouble. Later versions will provide the option to vary the brightness scale across the image.

NSG uses some fantastic PixInsight methods to create a surface spline that models the relative gradient. This is very effective at reducing the effect of the reflected light pollution.
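The gradient modelling can be sketched as fitting a smooth surface to per-sample (target - reference) background differences. The sketch below fits a simple tilted plane in place of PixInsight's surface splines (which can follow far more complex shapes); the function and names are my own illustration, not NSG's code:

```python
def fit_plane(samples):
    """Least-squares fit of d = a + b*x + c*y to samples of (x, y, d),
    where d is the per-sample (target - reference) background difference.
    Solves the 3x3 normal equations directly."""
    n = len(samples)
    sx = sum(x for x, _, _ in samples)
    sy = sum(y for _, y, _ in samples)
    sd = sum(d for _, _, d in samples)
    sxx = sum(x * x for x, _, _ in samples)
    syy = sum(y * y for _, y, _ in samples)
    sxy = sum(x * y for x, y, _ in samples)
    sxd = sum(x * d for x, _, d in samples)
    syd = sum(y * d for _, y, d in samples)
    # Normal equations: M * (a, b, c)^T = v
    M = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    v = [sd, sxd, syd]
    # Forward elimination, then back substitution (3x3 system)
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            v[j] -= f * v[i]
    c = v[2] / M[2][2]
    b = (v[1] - M[1][2] * c) / M[1][1]
    a = (v[0] - M[0][1] * b - M[0][2] * c) / M[0][0]
    return a, b, c
```

The fitted surface, evaluated at each pixel, is subtracted from the target to remove the relative gradient; a spline simply replaces the plane with a flexible surface.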

NSG also does a great job of calculating the image weight based on the detected noise level and scale factor. This works extremely well, even if the images suffered from significant cloud.

Regards
John
 

niccoc1603

Well-known member
Mar 23, 2019
55
3
Thank you. I have another question: doesn't the default "minimum weight" value of 50% for ImageIntegration seem a bit too restrictive?
 

jmurphy

Thank you. I have another question: doesn't the default "minimum weight" value of 50% for ImageIntegration seem a bit too restrictive?
It is a very arbitrary value! Feel free to change it to anything from 0% (which enables all images) upward.

The 50% minimum weight means that any image with a weight less than half the weight of the best image will be disabled in ImageIntegration. On datasets that didn't suffer from clouds, this usually results in all images being enabled in ImageIntegration.

Provided ImageIntegration is set up to use NWEIGHT (the script sets this up for you), the poor images will be assigned the low NWEIGHT within ImageIntegration, so including all images will probably do little harm and might help.
 

niccoc1603

But... shouldn't NSG normalize and even out the cloud gradients? So the rejection should really be for those subs that are unusable due to severe cloud coverage (which I usually discard manually with Blink), if I am interpreting this correctly.

I am trying the script on a set of L subs affected by passing clouds and the results are spectacular. Can't wait for NSG to support dither and multithreaded processing!
 

jmurphy

So the rejection should really be for those subs that are unusable due to a severe cloud coverage (which I usually manually discard with blink) if I am interpreting this correctly.

I am trying the script on a set of L subs affected by passing clouds and the results are spectacular. Can't wait for NSG to support dither and multithreaded processing!
The addition of the minimum weight field to the script was one of the many great suggestions I received from Adam Block. It is relative to the exposure time. For example, if the best frame is a 600 s exposure and has a weight of 1.0, and 'Minimum weight %' is set to 50%, then a 600 s exposure will be rejected if its weight is less than 0.5, but a 300 s exposure will only be rejected if its weight is less than 0.25.

Yes, I personally use the minimum weight to reject images that were too heavily affected by cloud.

For optimum results, the user should adjust the 'Gradient smoothness' and 'Minimum weight %' to suit their data set.
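The rejection rule described above can be sketched as follows (a hypothetical helper written for illustration, not the script's actual code):

```python
def passes_minimum_weight(weight, exposure, best_weight, best_exposure,
                          min_weight_pct=50.0):
    """Return True if an image survives the 'Minimum weight %' check.

    The threshold is min_weight_pct of the best image's weight, scaled
    by the ratio of exposure times, as in the 600 s / 300 s example above.
    """
    threshold = (min_weight_pct / 100.0) * best_weight * (exposure / best_exposure)
    return weight >= threshold
```

With a 600 s best frame of weight 1.0 and the default 50%, a 600 s frame needs a weight of at least 0.5 to survive, while a 300 s frame only needs 0.25.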

John Murphy
 

desmc

New member
Jun 21, 2021
2
0
Thanks for all of your hard work on this.
One quick question (apologies if it has already been addressed): can subs with different exposure lengths be processed and then integrated in one go?
Des
 

JST200

Well-known member
Sep 29, 2018
48
2
Hi John,

Apologies if this has been covered in the posts above. I did skim through them but couldn't see it.

Ivo Stoynov (the APT developer) has agreed to add the relevant Altitude header in the FITS files, created by APT, for use in this script.

Can I just check the exact name of the header, please?

Is it just "Altitude", "OBJCTALT" (and, although not relevant to NormalizeScaleGradient, "OBJCTAZ"?) or something else?

Thanks, Jim 🙂
 

jmurphy

Ivo Stoynov (the APT developer) has agreed to add the relevant Altitude header in the FITS files, created by APT, for use in this script.
Can I just check the exact name of the header, please?
Is it just "Altitude", "OBJCTALT" (and, although not relevant to NormalizeScaleGradient, "OBJCTAZ"?) or something else?
The script looks for "OBJCTALT".
If that does not exist, it looks for "CENTALT".
I should probably add "ALTITUDE" as an extra alternative in a future release.

The other keywords I use are:
Exposure: "EXPOSURE", "EXPTIME", "ELAPTIME" or "TELAPSE"
Focal length: "FOCALLEN"
Pixel size: "XPIXSZ"
Filter: "FILTER"

I don't use azimuth, but I think "OBJCTAZ" and "AZIMUTH" are common.
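The keyword lookup described above amounts to a priority search over a list of candidates. A minimal sketch, using a plain dict to stand in for a parsed FITS header (the helper is illustrative; the keyword lists are taken from the post above):

```python
ALTITUDE_KEYWORDS = ["OBJCTALT", "CENTALT", "ALTITUDE"]
EXPOSURE_KEYWORDS = ["EXPOSURE", "EXPTIME", "ELAPTIME", "TELAPSE"]

def find_keyword(header, candidates):
    """Return the value of the first candidate keyword present in the
    header, or None if no candidate exists."""
    for key in candidates:
        if key in header:
            return header[key]
    return None
```

Because the search is ordered, a header containing both "CENTALT" and "ALTITUDE" yields the "CENTALT" value.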
Regards, John Murphy
 

jmurphy

One quick question - apologies if it has already been addressed - can subs with different exposure lengths be processed and then integrated in one go?
Yes. For example, if you took a 600 s and a 300 s exposure in identical conditions, the 300 s exposure would be assigned half the weight. ImageIntegration would then combine these images optimally, based on the weight.
If you want to preserve the full dynamic range, you may need to divide your reference frame by a constant to provide the extra headroom.

You can test this: take two exposures of the same length captured in very similar conditions. Use PixelMath to add (or even better, average) them together. This simulates an exposure of twice the length. You should find that the combined image has approximately double the weight of the originals. Try dividing an image by 2; this should not change the weight.
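The arithmetic behind this test can be checked numerically, assuming a toy weight of (signal / noise)² with the noise estimated from background pixels. This is my simplification for illustration, not NSG's actual weight formula:

```python
import random

def estimate_weight(background_pixels, signal):
    """Toy weight: (signal / noise)^2, with noise taken as the standard
    deviation of the background pixels."""
    n = len(background_pixels)
    mean = sum(background_pixels) / n
    variance = sum((p - mean) ** 2 for p in background_pixels) / n
    return signal ** 2 / variance

random.seed(1)
# Two frames of pure background noise (std dev 10) plus a nominal signal level.
frame1 = [random.gauss(100.0, 10.0) for _ in range(10000)]
frame2 = [random.gauss(100.0, 10.0) for _ in range(10000)]
signal = 50.0

w_single = estimate_weight(frame1, signal)

# Averaging two frames halves the noise variance, so the weight roughly doubles.
averaged = [(a + b) / 2 for a, b in zip(frame1, frame2)]
w_averaged = estimate_weight(averaged, signal)

# Dividing by 2 scales signal and noise together, so the weight is unchanged.
halved = [p / 2 for p in frame1]
w_halved = estimate_weight(halved, signal / 2)
```

Averaging beats adding only in that it keeps the pixel values in range; the signal-to-noise ratio, and hence this toy weight, is the same either way.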

Regards, John Murphy
 

jmurphy

Hi John,
Thanks for your hard work, and very useable script!
This post is a continuation of the work started in my Post #31 on P2 of this thread.
I did a little study by comparing Image Integration via NSG script, vs the standard way of Image Integration with Noise Evaluation for weighting.
Attached is a file with several charts.
This data is based on high and varying background levels (659 mm focal length) of M101. This is where the NSG script should shine.
I wanted to see the correlation between the NWEIGHT and other factors when integrating without using NSG script.
Weight in normal integration (no NSG) was from the process console. At this time I have extracted only 40 frames of data from the process console.
I used Subframe Selector to get MAD, stars detected, and median from the frames used in normal integration.
I used same reference image for NSG script, integration after NSG script, and for normal integration.

In the end I asked myself:
1. Do registered individual images that look bad correlate better to NWEIGHT or to Weight from normal integration?
My answer; it varies:
-- All images that look bad have low NWEIGHT, but also some of these bad images also have low Weight from normal image integration.
-- No images that look bad have high NWEIGHT. So NWEIGHT does not let me down.

2. Does the image after integration look better with NWEIGHT vs normal integration weights (no NSG)?
My answers:
-- The background gradient is better (simpler gradient to fix in DBE) using NSG script.
-- The normally integrated galaxy color is different (more blue after unlinked stretch). The NSG integrated galaxy looks brighter, but the readout data does not show any difference (out to 3 decimal places).
It is hard for me to say if the image quality is significantly different.
-- The edge goes to NWEIGHT integration.
-- Perhaps the final image quality is not significantly different if the weights are not exactly correct.

Any comments/questions/next steps I might do?

Thanks,
Roger
Hi Roger. You have produced some interesting data.
Were you using NormalizeScaleGradient 1.1? The new version calculates the weights more accurately. Thanks to Adam Block for spotting the error in version 1.0.

Image weights should depend on the exposure time, the amount of light pollution, and how much light was absorbed by the atmosphere / cloud.

If the dominant problem is light pollution, I would expect a strong correlation with the image median level. On the other hand, if the dominant problem is absorption, the correlation with the median would be low. There should be a strong linear correlation between exposure time and NWEIGHT (provided the conditions stay the same).
Regards, John
 

desmc

Yes. For example, if you took a 600 s and a 300 s exposure in identical conditions, the 300 s exposure would be assigned half the weight. ImageIntegration would then combine these images optimally, based on the weight.
If you want to preserve the full dynamic range, you may need to divide your reference frame by a constant to provide the extra headroom.

You can test this: take two exposures of the same length captured in very similar conditions. Use PixelMath to add (or even better, average) them together. This simulates an exposure of twice the length. You should find that the combined image has approximately double the weight of the originals. Try dividing an image by 2; this should not change the weight.

Regards, John Murphy
Thanks. That's very clear. Best, Des
 

jmurphy

NormalizeScaleGradient 1.1.a

You only need this version if your FITS headers only store the altitude in an "ALTITUDE" keyword.
It checks for "OBJCTALT", then "CENTALT" and then finally "ALTITUDE", returning the first entry it finds.

[edit NormalizeScaleGradient 1.2 has now been released]
 