well, i guess to start off, the statistical properties of an image don't directly correlate with the subject matter. when PI does noise evaluation, it's not trying to detect which features of the image are signal and which are noise.
imagine a grey image where every pixel has the value 0.5. this image is noise-free; there is no variation in pixel values. if you went in and added random offsets to those pixels, then you could start to talk about what the mean value is, what the median value is, what the standard deviation of the noise is, and all kinds of other statistical properties. obviously the image with all pixels at 0.5 has these statistical properties too, but they are not very interesting, as they basically just tell you the image is noise-free.
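just to make that concrete, here's a quick numpy sketch (the 512x512 size and the 0.01 noise level are arbitrary choices for illustration, not anything PI actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)

# perfectly flat "image": every pixel is 0.5, so the standard deviation is exactly 0
flat = np.full((512, 512), 0.5)
print(flat.mean(), np.median(flat), flat.std())      # 0.5 0.5 0.0

# add gaussian noise with sigma=0.01 -- now the statistics actually say something
noisy = flat + rng.normal(0.0, 0.01, flat.shape)
print(noisy.mean(), np.median(noisy), noisy.std())   # ~0.5 ~0.5 ~0.01
```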
as far as your calibration frames go, you should strive to minimize the noise in the master frame. for bias and dark frames this really just means integrating a lot of subs. darks usually need some kind of pixel rejection, since the odds that a cosmic ray hits your sensor during a long exposure are reasonably high.

for your flats, you should strive to make the brightest part of the flat as bright as possible while still remaining in the "linear" range of the sensor. this range differs from design to design, so you may need to measure it yourself. a common rule of thumb is to put the brightest part of the image at about half of full well, meaning if the ADC gives you values of 0-65535, you are looking for roughly 32767. most sensors actually remain linear past that point, but it's a good idea to check.
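here's a rough numpy sketch of both ideas, purely for illustration (the sigma threshold, sub counts, ADU levels, and the 16-bit range are all assumptions on my part; in practice you'd let ImageIntegration handle the stacking and rejection):

```python
import numpy as np

def integrate_with_rejection(subs, sigma=3.0):
    # simple sigma-clip rejection before averaging, so a cosmic-ray hit in one
    # dark sub doesn't make it into the master
    stack = np.stack(subs)                    # shape: (n_subs, H, W)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0)
    clipped = np.where(np.abs(stack - med) > sigma * std, np.nan, stack)
    return np.nanmean(clipped, axis=0)

# fake dark subs: read noise of ~5 ADU around a 100 ADU offset, plus one hot
# pixel standing in for a cosmic-ray hit
rng = np.random.default_rng(1)
subs = [rng.normal(100, 5, (256, 256)) for _ in range(20)]
subs[3][10, 10] = 60000                       # simulated cosmic-ray hit
master_dark = integrate_with_rejection(subs)
print("per-sub noise ~5 ADU, master noise ~", master_dark.std())  # ~5/sqrt(20)

# flat exposure check: aim the brightest region near half of a 16-bit range
flat = rng.normal(32000, 200, (256, 256))
print("flat peak:", flat.max(), "target ~", 65535 // 2)
```

the point of the first part is just that averaging n subs knocks the random noise down by roughly sqrt(n), while the rejection step throws out the cosmic-ray outlier instead of averaging it in.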
rob