Hi Max
Try increasing the shadows relaxation parameter a bit more. It should include more dark pixels without including bright features, so you avoid contributions from stars.
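I don't know the exact rule DBE applies internally, but as a mental model you can think of the relaxation as a tolerance that widens the band of faint pixels accepted as background. A little numpy sketch of that idea (the threshold rule here is invented for illustration, it is not DBE's code):

```python
import numpy as np

def background_mask(sample, relaxation=1.0):
    # Accept pixels as "background" if they are not too far above the
    # sample median; a larger relaxation widens the acceptance band, so
    # more dark pixels get in while bright star pixels stay rejected.
    # NOTE: invented rule for illustration, NOT DBE's actual formula.
    med = np.median(sample)
    sigma = 1.4826 * np.median(np.abs(sample - med))  # robust scale (MAD)
    return sample <= med + relaxation * sigma

rng = np.random.default_rng(0)
sample = rng.normal(0.10, 0.01, (15, 15))  # flat sky patch
sample[0:3, 0:3] += 0.5                    # a bright "star" in one corner
for r in (0.5, 1.0, 3.0):
    m = background_mask(sample, relaxation=r)
    print(f"relaxation={r}: {m.sum()}/{m.size} pixels accepted")
```

Raising the relaxation pulls in more of the genuinely faint sky pixels, while the star pixels stay far above the threshold and keep getting rejected.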
Small stars are handled fine by the algorithm, but it is better to increase the size of the samples, so there are more background pixels to average and the statistic is more representative. Object data, of course, destroys the model. In case of doubt, don't place a sample there!
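If you want to convince yourself why bigger samples help (and why object data ruins things), here is a quick numpy experiment with made-up sky values:

```python
import numpy as np

rng = np.random.default_rng(1)
true_bg, noise = 0.10, 0.02   # invented sky level and pixel noise

def box_median(side, object_level=0.0):
    """Median of a side x side sample box; object_level simulates a
    sample accidentally placed on nebulosity or a galaxy."""
    box = rng.normal(true_bg + object_level, noise, (side, side))
    return np.median(box)

# Bigger boxes -> the sample median scatters less around the true sky.
for side in (5, 15, 45):
    meds = [box_median(side) for _ in range(500)]
    print(f"{side:>2}x{side:<2}: median scatter = {np.std(meds):.4f}")

# A sample sitting on object data is simply wrong, no matter its size.
print("sample on nebulosity:", round(box_median(45, object_level=0.3), 3),
      "vs true sky", true_bg)
```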
Yes, darker pixels in the sample "preview" mean that they have less weight in the statistical calculation. Black means no inclusion at all. Other colors are just a mix, showing the weight of each channel. It is quite difficult to evaluate anyway... so just look at their luminance to get an idea.
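If it helps to picture it, you can think of the preview color as simply the three channel weights used as an RGB value, so its luminance tracks the overall weight. A rough sketch (the luminance coefficients here are the standard Rec. 709 ones, not necessarily what the tool uses for its display):

```python
import numpy as np

def preview_color(wr, wg, wb):
    """Map per-channel sample weights (0..1) to a display color and an
    approximate luminance.  Black (0,0,0) = sample ignored; pure white =
    full weight in all channels.  Sketch only, not the actual rendering."""
    rgb = np.clip([wr, wg, wb], 0.0, 1.0)
    luminance = float(np.dot([0.2126, 0.7152, 0.0722], rgb))  # Rec. 709
    return rgb, luminance

for w in [(1, 1, 1), (0.8, 0.3, 0.8), (0, 0, 0)]:
    rgb, lum = preview_color(*w)
    print(w, "-> luminance ~", round(lum, 2))
```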
The samples are likely not to be "clean", pure white. This happens because there are two rejection algorithms at work. One is local, which rejects stars, noise, etc., and the other is global. The global one assumes a very simple model (I'm not sure if it is a constant value, like the median of the image, or a quick interpolation from a few samples) and rejects sample boxes that are too far away from it, creating a weight factor. The closer a sample is to the model, the more weight it gets. After all, real background models are very smooth, so wild values usually mean a bad sample.
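Again, I don't know the exact function, but the global rejection idea could look something like this: compare each sample to a very crude reference (here simply the median of all the sample values) and down-weight the ones that fall far from it. Pure illustration, not DBE's source:

```python
import numpy as np

def global_weights(sample_values, softness=3.0):
    """Down-weight sample boxes whose value is far from a crude global
    reference (here the median of all samples).  The Gaussian falloff is
    an arbitrary choice for this sketch; the closer a sample is to the
    reference, the closer its weight is to one."""
    v = np.asarray(sample_values, dtype=float)
    ref = np.median(v)
    mad = 1.4826 * np.median(np.abs(v - ref)) + 1e-12  # robust scale
    return np.exp(-0.5 * ((v - ref) / (softness * mad)) ** 2)

# Eight well-placed samples and one that landed on a bright gradient/object.
samples = [0.10, 0.11, 0.09, 0.10, 0.12, 0.10, 0.11, 0.10, 0.45]
print(np.round(global_weights(samples), 2))
```

The last sample ends up with a weight near zero, which is exactly the "not pure white" box you see in the preview.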
Look, from the unofficial guide:
Wr, Wg, Wb: Statistical sample weight for the red, green and blue channel respectively. A weight value of one means that the current sample is fully representative of the image background at the sample's location. A value of zero indicates that the sample will be ignored to model the background, since it has no pixels pertaining to the background. Intermediate weights correspond to the probability of a sample to represent the background of the image at its current location.
And the unweighted checkbox in the model parameters:
Unweighted: By selecting this option, all statistical sample weights will be ignored (actually, all of them will be considered as having a value of one, regardless of their actual values). This can be useful in difficult or unusual cases, where DBE's automatic pixel rejection algorithms may fail due to too wild gradients. In such cases, you can manually define a (usually quite reduced) set of samples on strategic locations and tell the background modeling routines that you know what you're doing – if you select this option, they will trust you.
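To make the effect of that checkbox concrete, here is a toy weighted fit of a smooth background (just a plane, for simplicity) to a handful of samples; "Unweighted" simply means forcing every weight to one. A sketch only, DBE's real surface model is of course more sophisticated:

```python
import numpy as np

def fit_plane(x, y, z, w=None):
    """Weighted least-squares fit of z = a*x + b*y + c.  Passing w=None
    (or all ones) is the 'Unweighted' case: every sample counts the same."""
    A = np.column_stack([x, y, np.ones_like(x)])
    if w is None:
        w = np.ones_like(z)
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    return coeffs

x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
z = np.array([0.10, 0.12, 0.11, 0.13, 0.40])   # last sample is bad...
w = np.array([1.0, 1.0, 1.0, 1.0, 0.05])       # ...and almost ignored

print("weighted  :", np.round(fit_plane(x, y, z, w), 3))
print("unweighted:", np.round(fit_plane(x, y, z), 3))
```

With weights, the bad sample barely moves the model; unweighted, it drags the whole surface up, which is why you only want that option when you have placed your samples very deliberately.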
Hope this helps.
If you need assistance with those rebel images, let us know
(resistance is futile) VBG