NormalizeScaleGradient

Status
Not open for further replies.
I am curious about this. First, what are the xx in NOISExx?
The noise estimates for the red, green and blue channel are stored in
NOISE00, NOISE01 and NOISE02

This is a color image, so I would have expected there to be multiple noise estimates.
Does DeBayer change the noise characteristics, and if so would it be better to use noise estimates generated by that process when applicable?
From the ImageCalibration documentation:
Enable CFA
The resulting CFA mode is also used to calculate the noise
Evaluate noise
"ImageCalibration will compute per-channel noise estimates for each target image ... Noise estimates will be stored as NOISExxx FITS header keywords in the output files."

So I was expecting it to produce NOISE00, NOISE01 and NOISE02. Did you have 'Enable CFA' selected? If it wasn't, it would think it was a mono image, and only create NOISE00. I think the Bayer pattern would then have a strong influence on the estimated noise. Please let me know if this does not fix the problem so that, if necessary, I can adjust the script.

Ideally we would prefer to have noise estimates from before the debayer process, but I see that there is an 'Evaluate noise' in the Debayer tool, which will produce the NOISE00, NOISE01 and NOISE02 FITS header entries.

Finally, in your chart it seems curious that the two series do not even move in the same direction from frame to frame. For example, the red line decreases from [5] to [6] but the blue line increases. This seems surprising if the cause of the difference is simply registration.
During registration, if the image shift is half a pixel, the registered pixel interpolation will use an equal contribution from the neighboring pixels. This has the maximum noise smoothing effect. If the shift is zero (the registration reference frame), there is no noise smoothing. I assume that a pixel shift of 1/4 will have about half the smoothing effect of the 1/2 pixel shift. If you dither your images, the smoothing effect will depend on the dither shift.
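The smoothing effect of sub-pixel interpolation is easy to demonstrate numerically. This is a minimal sketch using simple linear interpolation on pure Gaussian noise; the interpolation actually used during registration is more sophisticated, so the exact ratios will differ:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1_000_000)  # pure Gaussian noise as a 1-D "image"

def shift_linear(x, dx):
    """Resample x at a sub-pixel shift dx (0 <= dx <= 1) by linear
    interpolation between each pixel and its neighbor."""
    return (1.0 - dx) * x[:-1] + dx * x[1:]

for dx in (0.0, 0.25, 0.5):
    smoothed = shift_linear(noise, dx)
    print(f"shift {dx:4.2f}: noise std ratio = {smoothed.std() / noise.std():.3f}")
```

For a half-pixel shift the two neighbors contribute equally, so the noise standard deviation drops by a factor of 1/√2 ≈ 0.707; a zero shift leaves it unchanged, and intermediate shifts fall in between.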

John Murphy
 
By getting noise estimates from the images before registration, this false variation is removed. We want the script to be very sensitive to sky conditions, but not sensitive to the registration process.
John,
This is a great explanation. Thanks to you and Juan for your constant improvements!
Roger
 
From the ImageCalibration documentation:
Enable CFA
The resulting CFA mode is also used to calculate the noise
Evaluate noise
"ImageCalibration will compute per-channel noise estimates for each target image ... Noise estimates will be stored as NOISExxx FITS header keywords in the output files."

So I was expecting it to produce NOISE00, NOISE01 and NOISE02. Did you have 'Enable CFA' selected? If it wasn't, it would think it was a mono image, and only create NOISE00. I think the Bayer pattern would then have a strong influence on the estimated noise. Please let me know if this does not fix the problem so that, if necessary, I can adjust the script.

Thanks for your explanations, John. It all makes sense.

Regarding the ImageCalibration of CFA images, I did indeed have "Enable CFA" selected. I just did it again to confirm: only a single noise estimate is calculated. As this is contrary to the documentation maybe it is a bug in ImageCalibration? @Juan Conejero
 
Regarding the ImageCalibration of CFA images, I did indeed have "Enable CFA" selected. I just did it again to confirm: only a single noise estimate is calculated. As this is contrary to the documentation maybe it is a bug in ImageCalibration?
I have had a look at the ImageCalibration code. For color images it bins the bayer array before calculating the noise. This produces a valid noise estimate.
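The binning step can be sketched numerically. This is a simplified model of 2x2 binning of a Bayer mosaic, not the actual ImageCalibration code:

```python
import numpy as np

def bin2x2(cfa):
    """Average each 2x2 CFA group (one R, two G, one B for an RGGB
    pattern) into a single super-pixel, halving both dimensions."""
    h, w = cfa.shape
    return cfa.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cfa = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(cfa))  # 2x2 array of 2x2-block averages
```

Because each super-pixel mixes all four CFA samples, the result supports a single valid noise estimate for the frame as a whole rather than one per channel.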

I have updated NormalizeScaleGradient to use NOISE00 for all color channels if NOISE01 and NOISE02 don't exist.
I add a warning message to the 'Summary' if NOISE00 does not exist.
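The fallback described above amounts to something like the following. The keyword names come from the thread; the plain dictionary lookup is a stand-in for however the script actually reads FITS header keywords:

```python
def channel_noise(keywords):
    """Return per-channel noise estimates [R, G, B], falling back to
    NOISE00 for all channels when NOISE01/NOISE02 are absent."""
    n0 = keywords.get("NOISE00")
    if n0 is None:
        # Mirrors the script's warning: no estimate is available at all.
        raise ValueError("NOISE00 missing: no noise estimate available")
    return [n0,
            keywords.get("NOISE01", n0),
            keywords.get("NOISE02", n0)]

print(channel_noise({"NOISE00": 1.2e-4}))  # CFA-calibrated: single estimate
print(channel_noise({"NOISE00": 1.2e-4, "NOISE01": 9.0e-5, "NOISE02": 1.1e-4}))
```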


John Murphy
 

Attachments

  • NormalizeScaleGradient.zip
    104.1 KB · Views: 89
Please note that noise estimates should always be computed by the Debayer process for CFA raw data, not by ImageCalibration. Debayer is able to compute separate per-channel noise estimates by extracting individual CFA components from the raw frame (hence NOISE00, NOISE01 and NOISE02 will be present in the demosaiced image). However, ImageCalibration will compute a single noise estimate from all raw CFA pixels (thus only NOISE00 will be present). The WBPP script disables noise estimation in ImageCalibration for CFA data, and enables it in Debayer. If you reduce your data manually, you should do the same unless you have a very good reason (what reason?) to work with single noise estimates.
 
By the way, the tooltip information for the evaluate noise parameter in the Debayer tool is wrong. It wrongly states that noise estimates are computed from demosaiced (hence interpolated) data, which is not true. They are computed from raw CFA pixel sample values. This will be fixed in the next version.
 
Please note that noise estimates should always be computed by the Debayer process for CFA raw data, not by ImageCalibration. Debayer is able to compute separate per-channel noise estimates by extracting individual CFA components from the raw frame (hence NOISE00, NOISE01 and NOISE02 will be present in the demosaiced image). However, ImageCalibration will compute a single noise estimate from all raw CFA pixels (thus only NOISE00 will be present). The WBPP script disables noise estimation in ImageCalibration for CFA data, and enables it in Debayer. If you reduce your data manually, you should do the same unless you have a very good reason (what reason?) to work with single noise estimates.
Thanks for this extremely useful advice. I will modify the warning message to specify the Debayer process instead of ImageCalibration for color images.
 
Another small bit of information to complete your understanding of how noise is evaluated for raw CFA frames. For CFA components represented by more than one pixel (green in Bayer patterns and all channels in the case of X-Trans patterns), noise estimates are computed from the average of all components corresponding to the same color for each CFA group of raw pixels. This means that the estimate for the green channel is computed from the average of two green pixels for each instance of a Bayer pattern.
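For an RGGB Bayer pattern, the per-channel extraction described above can be sketched like this. It is a simplified model of the process (assuming an RGGB layout), not PixInsight's actual code:

```python
import numpy as np

def cfa_channel_samples(cfa):
    """Extract R, averaged G, and B samples from an RGGB mosaic.
    The two green pixels in each 2x2 group are averaged, as described."""
    r = cfa[0::2, 0::2]                            # top-left of each group
    g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])  # mean of the two greens
    b = cfa[1::2, 1::2]                            # bottom-right of each group
    return r, g, b

cfa = np.array([[10., 20., 12., 22.],
                [24., 30., 26., 32.]])
r, g, b = cfa_channel_samples(cfa)
print(r, g, b)
```

Noise estimates are then computed from these raw (not interpolated) sample arrays, one per channel.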
 
Hi John,
1. When I did the comparison of different weighting parameters in my post 163 above (page 9) I was using NSG script Ver1.1. This was before you corrected the noise factor needing to be squared. Do you think there would be a significant difference, such that I should re-do this work? i.e. would the relative NWEIGHTs be significantly different? Or the individual .NSG images significantly different?

2. I now notice my focal length was wrong: 659mm vs 578mm (correct). My 0.81 reducer was missing. Any significant effect from this?

3. Today I tried to open the copy of the script I saved in my saved project. It was Ver1.1. Today it would not open (latest script is Ver 1.3). I get this error in Process Console: "Source code MD5 checksum mismatch..." Is this normal when a script is updated?

Suggestion: When I drag the instance of the script to the PI desktop and later open it (before global execution) it would be helpful for the Version of the script to show up in the listing.
Roger
 
3. Today I tried to open the copy of the script I saved in my saved project. It was Ver1.1. Today it would not open (latest script is Ver 1.3). I get this error in Process Console: "Source code MD5 checksum mismatch..." Is this normal when a script is updated?

Suggestion: When I drag the instance of the script to the PI desktop and later open it (before global execution) it would be helpful for the Version of the script to show up in the listing.
Roger
Delete the MD5 checksum. The process icon will then work.
 
The focal length is only used to determine the defaults. A small change will make little difference.
 
1.) When I did the comparison of different weighting parameters in my post 163 above (page 9) I was using NSG script Ver1.1. This was before you corrected the noise factor needing to be squared. Do you think there would be a significant difference, such that I should re-do this work? i.e. would the relative NWEIGHTs be significantly different? Or the individual .NSG images significantly different?

John,
Thanks for your other answers.
I am going to re-run the integration with the latest script. No need to try to answer my 1.) question. It is history.
Roger
 
John,
Thanks for your other answers.
I am going to re-run the integration with the latest script. No need to try to answer my 1.) question. It is history.
Roger
Hi John,
I really like all the visual plots, excellent tool tips and documentation for your script. Sometimes I forget to 'get on with it' and remember that I am going to create a nice image after my post processing. I get lost playing with the great script tools!

I did rerun the files with Ver1.3 Script, and then integrated using NWEIGHT. I compared it to other ways to weight integrating images.
My conclusions:
  1. The script does its job and makes the 100 .nsg images' gradients much more consistent from image to image.
  2. The version 1.3 script produces a wider range of NWEIGHTs (0.60 to 3.55 vs 0.26 to 1.60).
  3. The integrated image with NWEIGHT ver1.3 did not look any better or worse than with other weighting factors.
    • I created 4 integrated images with different methods of weighting. (weightings shown in attached .zip Excel file).
    • I was surprised there was not a winner, because weighting a good image more heavily (higher NWEIGHT) than weaker ones should result in a better integrated image. Adam convinced me of this in his videos!
    • There is really no 'tool' to quantify which image is best.
    • I don't know in which image I will invest the time for post processing.
  4. Integrating all images looked better (less noise visually) than just integrating the 'above 50% NWEIGHT' range. I had 100 images, which is a lot, and throwing out about half of them gave a noisier image. I think users should use every image that is not truly defective.
Appreciate any questions/comments.
Roger
 

Attachments

  • Summary of M101 NSG and other weighting factors.zip
    19.7 KB · Views: 70
  4. Integrating all images looked better (less noise visually) than just integrating the 'above 50% NWEIGHT' range. I had 100 images, which is a lot, and throwing out about half of them gave a noisier image. I think users should use every image that is not truly defective.
The 50% NWEIGHT is not one of the script's 'Auto' values. It is just an arbitrary default value.
I see that you changed this to suit your data set. This is exactly the right thing to do.

General advice:
If there are only a few images with significantly lower weights, these images were probably heavily affected by clouds compared to the others. It may then be worth rejecting them. However, if most of your images are affected by cloud, using all of them is probably the right thing to do.
 
I really like all the visual plots, excellent tool tips and documentation for your script. Sometimes I forget to 'get on with it' and remember that I am going to create a nice image after my post processing. I get lost playing with the great script tools!
:)
 
Hi,

I had the opportunity to test NSG on a set of data with one night shot under a Bortle 4 sky and two nights under a Bortle 8. When running NSG, the reference frame selected was one from the B4 sky, but that led to the rejection of all B8 sky photos in integration. I manually overrode it and proceeded, but then I suppose the weight assigned to those photos is much lower than to the B4 sky ones. Would it be better to choose a B8 frame as reference instead?
What would be the best strategy to follow in situations like this (besides just shooting in B4 sky?... :D)

The median of the frames is shown in the attachment - the first 30 photos are the B4 sky ones (clearly... :)).

Any thoughts on this are welcome.

Thanks,
André
 

Attachments

  • Capture.jpg
    47.1 KB · Views: 59
I had the opportunity to test NSG on a set of data with one night shot under a Bortle 4 sky and two nights under a Bortle 8. When running NSG, the reference frame selected was one from the B4 sky, but that led to the rejection of all B8 sky photos in integration. I manually overrode it and proceeded, but then I suppose the weight assigned to those photos is much lower than to the B4 sky ones. Would it be better to choose a B8 frame as reference instead?
What would be the best strategy to follow in situations like this (besides just shooting in B4 sky?... :D)
  1. Choose a reference frame with the least complex gradient. This will make it easier to do the final DBE correction on the stacked image.
  2. The image rejection percentage is based on the highest weight. Note that this is not necessarily the reference frame, so the choice of reference frame is not relevant to image rejection. To include more images, you need to reduce the 'Minimum weight %' parameter.
I have reduced the 'Minimum weight %' default from 50% to 25%, since a few people have found 50% to be too restrictive. This will be delivered as a PixInsight update soon.
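In other words, rejection compares each frame's weight to the highest weight in the set, not to the reference frame's weight. A sketch of that logic (the function name and the `min_weight_pct` parameter are just stand-ins for the script's 'Minimum weight %' setting):

```python
def rejected(weights, min_weight_pct=25.0):
    """Return indices of frames whose weight falls below
    min_weight_pct percent of the highest weight in the set."""
    cutoff = max(weights) * min_weight_pct / 100.0
    return [i for i, w in enumerate(weights) if w < cutoff]

weights = [3.55, 2.1, 1.4, 0.80, 0.60]
print(rejected(weights, 50.0))  # [2, 3, 4]
print(rejected(weights, 25.0))  # [3, 4]
```

Note how one unusually high weight raises the cutoff for every other frame, which is why a mixed B4/B8 data set can lose all its B8 frames at the 50% default.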

Regards, John
 
  1. Choose a reference frame with the least complex gradient. This will make it easier to do the final DBE correction on the stacked image.
  2. The image rejection percentage is based on the highest weight. Note that this is not necessarily the reference frame, so the choice of reference frame is not relevant to image rejection. To include more images, you need to reduce the 'Minimum weight %' parameter.
I have reduced the 'Minimum weight %' default from 50% to 25%, since a few people have found 50% to be too restrictive. This will be delivered as a PixInsight update soon.

Regards, John
Hi John,

Thanks for the feedback.
By the way, I'll take the opportunity to ask one thing I am wondering about. Why do you recommend processing OSC NB images per channel, while for simple OSC you recommend doing it all at once?

Thanks,
André
 
Why do you recommend processing OSC NB images per channel, while for simple OSC you recommend doing it all at once?
If you split an OSC before processing, this gives a bit more control. For example, ImageIntegration will then use different weights for each channel instead of an average. However, the benefits are small, so the convenience of processing the OSC without splitting it is probably worthwhile.

OSC NB images have an extra problem though. If the narrowband filter is very good at blocking out-of-passband light, one or more channels may only contain noise. I have not tested this scenario, so I am not sure how these 'dead' channels would be scaled. The average image weight might be significantly affected by these dead channels.

However, NSG now writes three extra weights to the FITS header:
red: NWEIGHT0, green: NWEIGHT1, blue: NWEIGHT2
If you modify the ImageIntegration settings to use the appropriate weight for the 'live' channel, you could probably process it in one go. Personally, I would still split it into separate channels though.
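One way to see the 'dead channel' concern: if a single averaged weight is used, a noise-only channel drags down the whole frame's weight, whereas the per-channel NWEIGHT0..NWEIGHT2 keywords let each channel be judged on its own. A hypothetical illustration with made-up values:

```python
def average_weight(per_channel):
    """Average the per-channel weights into a single frame weight,
    as an averaged-weight scheme would do (values below are made up)."""
    return sum(per_channel.values()) / len(per_channel)

# Hypothetical Ha-filtered OSC frame: red carries signal,
# green and blue are nearly dead.
per_channel = {"NWEIGHT0": 1.8, "NWEIGHT1": 0.05, "NWEIGHT2": 0.04}
avg = average_weight(per_channel)
print(f"average weight {avg:.2f} vs red channel {per_channel['NWEIGHT0']:.2f}")
```

The averaged weight grossly under-values the one channel that matters, which is why pointing ImageIntegration at the appropriate per-channel keyword (or splitting the channels) gives a fairer result.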

Regards, John Murphy
 