WBPP diagnostic message

I don't think I misunderstood anything; I'm too old, and wise, for that. I'll just leave it at that.

So, what do you think is the reason for your questioning of the 10.9 hours in a Bortle 3? What are you seeing, or not seeing, that makes you think there's a lack of data, or the data is weak?

no, you really have misunderstood my motivation. i'm only saying that something seems wrong to me. i'm not attacking your skills, or your person, or anything else. you say you are relatively new at this, so some kind of software config problem or equipment problem is much more likely than you fibbing about how you acquired the data.

again, i wasn't really questioning that it's 10.9 hours from bortle 3. i'm saying, to my eye, it doesn't look like it. and the signal in the subexposure does seem low to me, so this makes me wonder if something else is wrong here.

somewhere on my other computer i have a comparison of a single 1800s L frame from a dark site and 1800s of integration from bortle 8 on the same target. the difference is absolutely staggering. you almost don't need to stack anything from the dark site.

With all your experience, can you give me some suggestions on how I can improve things? Is more integration time needed? Would exposures longer than 120 seconds help? Would a different filter help, or removal of the uv/ir cut filter I'm currently using? Is it perhaps the way the integration was done in Pixinsight? Could it be the fact that there was a bit of moon on a couple nights?

that's what i'm saying. i don't think more integration time should be needed on this target from that location. mainly the important thing in any subexposure (actually the only thing) is to swamp the read noise of the camera. on a CCD camera, there aren't too many parameters other than the subexposure length that the user can vary to make sure of this. on a CMOS camera, the gain is relevant to this problem. i don't use a camera with that sensor, but i know that there are probably "canned" gain and offset values that people tend to use for broadband and narrowband. if you've already checked that off the list then the only thing to do is calculate if your subexposure is long enough to swamp the read noise of the camera at the gain you are using.
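as a rough illustration, here's the kind of back-of-the-envelope check i mean. the read noise, gain, and background numbers below are placeholders - measure the background on one of your own calibrated subs (the Statistics process will do it), and the "10x the read noise squared" threshold is just a common rule of thumb, not gospel:

```python
# rule-of-thumb check: is the sky background in one sub bright enough that
# its shot noise swamps the camera's read noise?
# all numbers are placeholders - substitute your own camera/site values.

read_noise_e = 1.5        # read noise in electrons at the chosen gain (assumed)
gain_e_per_adu = 0.25     # camera gain in e-/ADU at that gain setting (assumed)
sky_bkg_adu = 300.0       # median background of one calibrated sub, in ADU (assumed)

sky_bkg_e = sky_bkg_adu * gain_e_per_adu   # background signal in electrons
target_e = 10 * read_noise_e ** 2          # common "swamp the read noise" criterion

print(f"background: {sky_bkg_e:.0f} e-   target: {target_e:.0f} e-")
if sky_bkg_e >= target_e:
    print("sub length looks long enough at this gain")
else:
    print(f"consider subs ~{target_e / sky_bkg_e:.1f}x longer (or a lower read-noise mode)")
```

since the sky background grows roughly linearly with exposure time, the factor printed at the end translates directly into "make the subs that many times longer" at the same gain.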

i don't think you'd want to remove that filter as fred has shown that it does not cut the Ha signal. that is why i asked if you were using the DSLR - the image looks to me like an image taken with an unmodified camera where most of the Ha signal is attenuated before it hits the sensor. and also the UV cut filter is probably helping you with blue bloat since the OTA is apparently a doublet. i guess you *could* try without the filter to find out if the filter's passband is as advertised. filters being mis-labeled or having quality control problems is definitely something i've seen over the years. some filters also have a directionality, if only because they have an antireflective coating on one side but not the other.

yes the moon can mess up your target SNR as you end up capturing mostly skyglow. how far was the moon from the target and what phase?

as far as PI's integration is concerned, really the only things going on are how the frames are normalized, how they are weighted and whether or not pixel rejection was used. WBPP's weighting algorithm is pretty fancy, so that is probably OK. if you don't have any gradients in your image (and from that location, absent the moon, you shouldn't), then the "regular" normalization built into ImageIntegration is likely fine.

the main thing to check with ImageIntegration is that your pixel rejection was not too aggressive. you can integrate with and without rejection and compare noise (with the NoiseEvaluation script); a marked difference can indicate that too many "real" pixels were rejected. or, you can just look at the rejection maps coming out of II - if you see 'structure' in the maps (like the shape of the nebula) then the rejection is possibly too aggressive. having said that, if some frames are real outliers in brightness or were calibrated improperly, you might see structure in the rejection maps (especially the low map) that is really just the result of those frames not normalizing properly or having bad calibration.
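the proper comparison should be done with PI's NoiseEvaluation script, but just to illustrate the idea, something like this quick-and-dirty check works on two integrations saved as FITS. the file names and the background crop are placeholders, and a MAD estimate on a star-free patch is much cruder than what NoiseEvaluation does:

```python
# rough comparison of background noise between two integrations,
# e.g. one stacked with pixel rejection and one without.
# file names and the background crop region are placeholders.
import numpy as np
from astropy.io import fits

def robust_noise(path, region=(slice(0, 200), slice(0, 200))):
    """MAD-based noise estimate over a star-free background crop."""
    data = fits.getdata(path).astype(np.float64)
    mono = data if data.ndim == 2 else data[0]     # first channel if colour
    patch = mono[region]
    mad = np.median(np.abs(patch - np.median(patch)))
    return 1.4826 * mad                            # MAD -> sigma for gaussian noise

for name in ("integration_no_rejection.fits", "integration_with_rejection.fits"):
    print(name, robust_noise(name))
```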

short of having all the raw subs and running them all myself, i don't know what else i can suggest at this point.

rob
 
What are you seeing, or not seeing, that makes you think there's a lack of data, or the data is weak?
I confess I don't see any problems with this image (but I don't have Rob's experience).
Your final integrated SNR is almost certainly reaching the point of diminishing returns (I doubt there would have been much difference if you had only integrated half the frames). The key content in this image is not faint, so pulling faint data out of the background noise is really not the issue (there is quite a lot of truly black nebulosity in this image with virtually no detail in it at this angular resolution). In the "deeper" areas you are detecting faint stars down to the angular resolution limit of the scope, with hardly any background noise.
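To put a rough number on the diminishing returns: for similar, sky-limited subs the stack SNR scales roughly with the square root of the frame count, so halving the number of frames only costs about 30% in SNR. A quick sanity check, using the frame count from this thread (this ignores frame-to-frame differences in seeing and transparency):

```python
# SNR of a stack of N similar, sky-limited subs scales roughly with sqrt(N).
import math

full, half = 329, 329 // 2
ratio = math.sqrt(half / full)
print(f"{half}/{full} frames -> {ratio:.2f}x the SNR of the full stack "
      f"(about {(1 - ratio) * 100:.0f}% loss)")
```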

With Bortle 3 skies (lucky you :)) your exposure time can be more or less as long as your guiding allows, but more short exposures provide insurance against occasional compromised frames (and support HDR processing if you want to keep bright stars unsaturated). With my (very similar) 2600MC camera I leave the gain permanently at 100 (my exposures are never long enough to saturate at this gain).
With this target and your camera and telescope, I can't see any obvious improvements to look for. Merging with a narrowband H-alpha image would be an interesting development.
 
Yes, when I moved here 17 years ago I had no interest at all in astrophotography, so the fact that it's Bortle 3 does make me very lucky. I can see stars right to the horizon through the trees. I can't actually see to the horizon, but you know what I mean.

Here's another integration I did last night. It has about a third fewer subs (227 instead of 329, at 120 seconds each), as I was a little more selective with the Subframe Selector. I also used a reference frame chosen for the best SNR, which also had FWHM and Eccentricity near the top of the range.

I noticed in the first integration I posted that the subject wasn't centered. These integrations span a few nights, and on the first night I didn't frame as well as I would have liked; on subsequent nights the framing was better. So now that I've picked a reference frame with good numbers and better framing, maybe this one will show better results? The first integration I posted didn't have a reference frame, so I guess the script picked the first sub, which wasn't centered, and used it for alignment of all the rest?

I'm also including the log, so maybe you more experienced folks can have a quick look and see if anything was missed, or could have been better selected, as far as settings go. Again, this was all done using the WBPP script in its "initial state", I hope. I know Winsorized Sigma Clipping was used, because I was prompted by the script to use it as a better solution. Other than that, I think all settings are at their initial state.

So, this integration is 7.5 hours, from a Bortle 3 site, using the initial settings in the WBPP script. I should add that both integrations I've posted already had subs with poor exposures removed using Blink.

Also, these integrations are my "guide", for lack of a better word, as to whether I spend close to $400 Canadian on a piece of software. If I can't get good results from these integrations, then there's no sense in me spending the money. I may as well just continue to use the tools I already have.

https://drive.google.com/drive/folders/1dUge050ug8yLOMQsSqOkuUvd4UHv4BGK?usp=sharing
 
Here are the stats from a run of the Image Solver script. Not sure what the numbers, or the score, mean. Still trying to find answers online.

image solver.jpg
 
The details don't really matter. The important thing is that the image is solved. Now you can run PCC on it and get the colours calibrated. You can also run AnnotateImage and get a pretty annotated version.
 
Thank you for having a look, Fred. I've added a 120-second Dark from my library to the Google Drive folder. I've also included a DarkFlat and a Flat from my library.

I'd still like to know from someone what that score shown in the Image Solver stats represents. It must mean something, or it wouldn't be there. Why "score" something, if it has no meaning? Can it be used as a comparison of "something" against other integrations?
 
what that score shown in the Image Solver stats represents.
Well, you will see that the number of valid stars used for the solution is 740 (= stars);
You will see that the rms error of the solution (in pixels) is 0.8148 (= rms);
If you calculate (2*stars)/(1+rms) you get 815.5 (to the four-figure precision of the output rms) = score.
So that's what the score is. But why? I don't know, but:
  • if the rms error is 0, score=2*stars (as big as it can get)
  • if the rms error is 1 pixel, score = stars (half the perfect score)
So: the more stars the better (proportionally); the bigger the rms error the worse (inversely).
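If you want to check that relationship against another solve, a couple of lines will do it. This just reproduces the formula inferred above from the posted numbers; it is not an official definition from the ImageSolver source:

```python
# Reproduces the score the ImageSolver output appears to report,
# using the relationship inferred above: score = 2*stars / (1 + rms).
def solver_score(stars: int, rms_px: float) -> float:
    return 2 * stars / (1 + rms_px)

print(solver_score(740, 0.8148))   # ~815.5, matching the posted stats
```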
 