NormalizeScaleGradient

Status
Not open for further replies.
Hi John,

Thanks for your prompt reply. I did try it as a single process and discovered it didn't work!

Another question if I may?

When you say filter, I guess you mean RGB?

I've tried NSG using two nights' OSC sessions, one with no filter and one with a lightpro filter. My first attempt used the OSC subs directly; the second used the separate RGB channel subs produced by debayering. In both cases it seems to work beautifully. Is that what you would expect?

Thanks, Jim
 
When you say filter, I guess you mean RGB?
The only thing that matters is the color (or frequency band) of light that reaches the camera pixels. Obviously it would make no sense to normalize a red light image to a blue one, or Ha to OIII. When splitting an OSC sub into separate mono channels, each of those mono channels was created by the red, green, or blue filters within the camera's Bayer array. Hence the same rule applies.
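One way to respect this rule in practice is to group subs by filter before running NSG once per group. A minimal sketch (the filenames, filter labels, and helper function are hypothetical, not part of NSG):

```python
# Group subs by the filter they were taken through, so NSG is only ever
# asked to normalize frames from the same band. Illustrative only.
from collections import defaultdict

def group_by_filter(subs):
    """Map each filter name to the list of subs taken through it."""
    groups = defaultdict(list)
    for filename, filter_name in subs:
        groups[filter_name].append(filename)
    return dict(groups)

# Hypothetical (filename, FILTER keyword) pairs from two nights:
subs = [
    ("n1_001.fits", "Ha"),
    ("n1_002.fits", "Ha"),
    ("n2_001.fits", "OIII"),
]
print(group_by_filter(subs))  # run NSG separately on the 'Ha' and 'OIII' groups
```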
 
John, I'm scratching my head over this one and hope you can quickly see what I'm doing wrong.

Running NSG on some H-a images and I get a lot of Truncated messages, ranging from 1.037 to 2.252. So I used PMath to divide the reference image by 2.252, and verified it had worked using Statistics. I then saved the modified reference image.

However when I re-run NSG, I get exactly the same Truncated messages!

Reloading the reference image I re-confirmed that its values have been divided by 2.252.

One other thing: if I save the image under a different name I get prompted by NSG for the XPIXSZ and FOCALLEN keywords. Don't know if this is relevant.

I'm probably doing something obvious but just can't see it. Am I mis-reading the Pre-requisites info?

Thanks
Peter.
 
One other thing: if I save the image under a different name I get prompted by NSG for the XPIXSZ and FOCALLEN keywords. Don't know if this is relevant.
This would appear to indicate that these FITS headers have not been saved by PixelMath. This is very strange!

Running NSG on some H-a images and I get a lot of Truncated messages, ranging from 1.037 to 2.252. So I used PMath to divide the reference image by 2.252, and verified it had worked using Statistics. I then saved the modified reference image.

However when I re-run NSG, I get exactly the same Truncated messages!
Send a link to the original reference and a single target image that demonstrates the problem.
 
In preparing the images to send to you, I've found what was wrong. It seems that after applying Pmath to the reference image, it didn't save properly. By which I mean that using Save didn't work - I had to use Save As and replace the image file on disk.
So when I was running NSG it retrieved the unaltered file from disk.
With that corrected, NSG ran with no truncated messages.
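The arithmetic behind those Truncated messages can be sketched as follows. This is a toy illustration of the clipping effect, not NSG's actual code: normalization scales target pixels toward the reference, and any scale factor above 1.0 can push pixels past the representable maximum, where they get clipped on write-out. Dividing the reference first lowers every scale factor accordingly.

```python
# Toy model of truncation during normalization (not NSG's implementation).
def normalize(target, scale):
    """Scale target pixels and clip to [0, 1], counting clipped pixels."""
    scaled = [p * scale for p in target]
    clipped = sum(1 for p in scaled if p > 1.0)
    return [min(p, 1.0) for p in scaled], clipped

target = [0.2, 0.5, 0.9, 0.98]  # hypothetical linear pixel values

# Reference brighter than target -> scale factor > 1 -> truncation:
_, clipped = normalize(target, 2.252)
print(clipped)  # → 3 pixels exceed 1.0 and are clipped

# Dividing the reference by 2.252 divides the scale factor too -> no truncation:
_, clipped = normalize(target, 2.252 / 2.252)
print(clipped)  # → 0
```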

Also I suspect that the problem with the missing FITS keywords was that I used the Create New option in Pmath, but I haven't confirmed that yet. Instead I took a copy of the original reference file and updated the original in place, and that problem went away.

Peter.
 
Having taken care of the truncated frames by dividing the reference frame by some factor, I'm finding that when it comes to Image Integration, the reference frame itself is sometimes being rejected.
Is this expected behaviour, or am I just lousy at picking the 'best' frame to use as the reference?

Thanks
Peter.
 
Having taken care of the truncated frames by dividing the reference frame by some factor, I'm finding that when it comes to Image Integration, the reference frame itself is sometimes being rejected.
Is this expected behaviour?
After dividing the reference frame by a factor, the stars that were saturated will have a lower maximum ADU, but will still have a flat top. These stars might not be saturated at all in some of the target frames. Integration rejection will spot this difference and reject pixels. If fewer of the frames are bright, the saturated flat tops will be rejected and replaced by the less saturated star data from the other frames. An automatic HDR! :) Hence, in this situation it is advantageous for the cores of bright stars in the reference frame to be rejected.

If there are more bright frames, then the rejection will work in the other direction. The unsaturated peaks would then be replaced by the saturated star data :( In this case, dividing the reference frame by a factor has made no difference. If the dynamic range of the stars is important, you could create two stacks, with and without rejection. Then replace the stars in the 'stack with rejection' with the stars from the other stack. However, you usually don't need to worry too much about saturated stars.
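The two rejection directions described above can be sketched numerically. This is a crude stand-in for the real sigma-clipping rejection in ImageIntegration (the tolerance and pixel values are made up for illustration): whichever population is the minority at a given pixel gets rejected.

```python
# Toy rejection for one star-core pixel across a stack of frames.
from statistics import median

def reject_outliers(values, tol=0.2):
    """Keep values within tol of the median; a crude stand-in for sigma clipping."""
    m = median(values)
    return [v for v in values if abs(v - m) <= tol]

# 0.44 = saturated flat top after the reference was divided by a factor,
# 0.9x = real unsaturated star profile (hypothetical values).
minority_saturated = [0.91, 0.93, 0.44, 0.92, 0.44]
print(reject_outliers(minority_saturated))  # → [0.91, 0.93, 0.92]: automatic HDR

majority_saturated = [0.44, 0.44, 0.91, 0.44, 0.44]
print(reject_outliers(majority_saturated))  # → [0.44, 0.44, 0.44, 0.44]: real data lost
```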

Hope this is clear, John Murphy
 
I've been looking for any documentation on the current version of NSG with no luck. Or any documentation on it really - I don't find it mentioned or listed in any of the PixInsight reference documents. Can someone provide a link?
 
I've been looking for any documentation on the current version of NSG with no luck. Or any documentation on it really - I don't find it mentioned or listed in any of the PixInsight reference documents. Can someone provide a link?
I would recommend watching Adam Block's video on NormalizeScaleGradient (it's in 3 parts):

The reference documentation for NormalizeScaleGradient 1.4.2 should already be installed (PixInsight 1.8.8-9). Access it by using the 'page with a bent corner' tool button at the bottom left of the script dialog.


If this does not work, you can force a complete reinstall of all installed scripts by running PixInsight with the --default-scripts command line argument:

Linux:
PixInsight --default-scripts

macOS:
/Applications/PixInsight/PixInsight.app/Contents/MacOS/PixInsight --default-scripts

Windows:
"C:\Program Files\PixInsight\bin\PixInsight.exe" --default-scripts

John Murphy
 
I would recommend watching Adam Block's video on NormalizeScaleGradient (it's in 3 parts) ...
I have watched them, but the issue (minor, I know) is that his video covers v1.0. There are some selection differences in v1.4.2 in the Integration section, and I wanted to understand what impact, if any, the choices have. I don't see how to call up the script-specific internal help file; I'll have to look again once NSG finishes its current run.

I've been running it again and again to test the impact of the Integration weighting slider. The frames it excluded clued me in to an issue I really hadn't appreciated: the significant difference in SNR that exists across multi-night sessions, changes in weather, and pre/post meridian. I don't believe that the WBPP integration excluded those frames, but there is a subset that NSG is trying to drop. I can see in Blink why they might drop out (much darker), but at the same time I'm trying to understand why they seem to occur uniformly after the meridian flip. I'm about to just let NSG exclude what it wants and see how the final image turns out compared to the 'normal' WBPP result.
 
I would recommend watching Adam Block's video on NormalizeScaleGradient (it's in 3 parts) ...
John,
One additional question. Should I let WBPP generate the drizzle data, or do it in the NSG integration? (Integration failed due to a path error when I checked the Generate Drizzle box after NSG; I'm not sure what path it is missing or where that path is.)
 
Also, I'm not seeing anything in the gradient slope - at least nothing that resembles the videos. I see a flat horizontal line, but I don't know enough to tell whether that is an issue. It does not change regardless of zoom level or curve selection.
 
I found the icon you indicated. In reading through the help file, the note concerning usage with color cameras and narrow-band filters raises some questions (assuming you mean OSC cameras and filters like the Optolong L'Extreme). It states that registered OSC files should all be split into their respective R, G, B images and processed separately in NSG. While the new WBPP offers that option, the help also states that RGB images can be processed by NSG, which suggests it should be able to work with the un-split images.

Does this mean that if no filter (or only an LPR filter) is used with an OSC, no color channel separation is required, whereas an OSC + narrow-band filter combination should split channels and process like a monochrome imager? For one target I have, that would be almost 400 frames to process at ~50 MB per frame.
 
Also, I'm not seeing anything in the gradient slope, at least nothing that resembles the videos. I see a flat horizontal line - but don't know enough to know if that is an issue. It does not change regardless of zoom level or curve selection.
I notice the only difference in the reference and target filename is a postfix of '1'.
Are these two files identical? If so, this would explain the zero relative gradient between them.
 
Does this mean if no filter (or only an LPR filter) is used with an OSC, no color channel separation is required, whereas an OSC + narrow-band filter should split channels and process like a monochrome imager?
If using a narrow-band filter with an OSC camera, some of the color channels may end up having little or no data. For example, an Ha filter would result in no data in the green and blue channels. This could cause NSG trouble, so in these cases it is necessary to split the RGB image into separate channels. If the filter used does not create dead channels, NSG can process the RGB image directly.
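The "dead channel" test John describes could be sketched as a per-channel median check. This is a hypothetical helper with a made-up noise-floor threshold, not something NSG exposes:

```python
# Flag channels with essentially no signal in a debayered OSC sub.
def dead_channels(channel_medians, floor=0.001):
    """Return the names of channels whose median is at or below the noise floor."""
    return [name for name, med in channel_medians.items() if med <= floor]

# Hypothetical medians from an Ha-filtered OSC sub after debayering:
ha_sub = {"R": 0.012, "G": 0.0004, "B": 0.0002}
print(dead_channels(ha_sub))   # → ['G', 'B']: split channels, run NSG per channel

# An LPR-filtered sub keeps signal in every channel:
lpr_sub = {"R": 0.010, "G": 0.011, "B": 0.008}
print(dead_channels(lpr_sub))  # → []: safe to run NSG on the RGB image
```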
 
John,
One additional question. Should I let WBPP generate the drizzle data or do it in the NSG integration? (Integration failed due to a path error when I checked the Generate Drizzle box after NSG, not sure what path or where the path is that it is missing).
See this post for more information:
 
... I've been running it again and again to test/understand the impact of the Integration weighting slider. The frames it excluded clued me in to an issue I really hadn't appreciated - the significant difference in SNR that exists due to multi-night sessions/changes in weather and pre/post meridian. I don't believe that the WBPP integration excluded those frames, but there is a subset that NSG is trying to drop out. I can visually see in blink why they might drop out (much darker), but at the same time I'm trying to understand why they seem to uniformly occur after the meridian flip. I'm about to just let it go and let NSG exclude what it wants and see how the final image turns out compared to the 'normal' WBPP result.
I would not worry too much about the 'Minimum weight %' slider. All it does is decide which images are enabled in ImageIntegration. Run NSG once and save the ImageIntegration process as a process icon. In ImageIntegration, you can then try enabling / disabling the images with the worst weights and see if it makes any difference.

It probably won't make much difference, because the worst images are given a very small weight.
 
I notice the only difference in the reference and target filename is a postfix of '1'.
Are these two files identical? If so, this would explain the zero relative gradient between them.
There were/are 47 frames in the 'stack' to be processed by NSG in that screen capture; I picked one with an altitude of approximately 80° for the reference. NSG indicated there were over 3000 points in the gradient 'curve' when I played with the gradient smoothness selection. I had WBPP create the RGB channels for both panes of the mosaic and, in a later test run, pulled just the R frames from Pane_1 into NSG. In that set of R images there was a red line visible in the gradient curve, but it was still horizontal/flat. I notice that in both the OSC images and the extracted R images, the photometry graph is the diagonal you expect, but there are no outliers/spread from the diagonal. I'm not sure what this is telling me about the star selection within NSG and my frames. I guess it means the frames are good and the star selection choices available are of good quality?

I did complete the process through image integration for the OSC image and compared it to the integrated image for the same mosaic pane created by WBPP. I was surprised that there was more visible gradient across the frame in the NSG image than in the WBPP integration, as I was not expecting that result. I considered running a test with the R, G, B frames through NSG and then recombining, but decided that, at least for this target set, the potential improvement was too small for the effort it appeared to require.

The situation would/will be different if I get an image set with a bad set of gradients in it, but so far I have not had that situation in the few observing nights that have been available here since April. Then again, I get pretty aggressive about cutting out frames in the Blink process, which may be preventing, or at least reducing, some of the issues NSG is targeted at. I just don't know. I do believe NSG is a powerful tool where it is needed, and I'm very glad it is available.
 
I would not worry too much about the 'Minimum weight %' slider. All it does is decide which images are enabled in ImageIntegration. Run NSG once and save the ImageIntegration process as a process icon. In ImageIntegration, you can then try enabling / disabling the images with the worst weights and see if it makes any difference.

It probably won't make much difference because the worst images are given a very small weight.
In the PANE_1 set of frames (a two-pane mosaic of the Veil Nebula), at 20% NSG dropped the last four frames (all after the meridian flip), all of which had a significantly lower weight than the others in the set. It didn't drop any from the PANE_2 image set, with all other settings being the same and the reference frame selected at approximately the same elevation.

I did drop several frames from both mosaic panes in the Blink process due to clouds intruding into the image. It was also close to the full moon at the time of the imaging; I don't know whether that is why the general sky illumination level dropped after the meridian flip (combined with the L'Extreme filter). Not an issue with NSG, I believe; its frame weighting is just making apparent something that occurs with this filter and its orientation relative to the moon, which is also visible in Blink.
 
See this post for more information:
Thanks, the discussion answered my questions and was of interest in itself. I don't always use drizzle; it depends on how critical stars are to the image and the need to reduce pixelization effects. It does make a small but noticeable improvement when used.
 