From the information you are providing, it seems there is no bug here, just normal behavior. ImageIntegration uses existing noise estimates in input files by default. Estimates of the standard deviation of the noise are normally stored as NOISEXX FITS keywords (where XX is a zero-padded integer representing a zero-based image component index). These estimates are generated by different tools and scripts, such as ImageCalibration, Debayer and SubframeSelector, among others. The idea is that noise estimates should be computed from calibrated raw, uninterpolated data (i.e., before image registration). This behavior can be disabled by checking the ignore noise keywords option of ImageIntegration, which forces recalculation of noise estimates (however, you should clear ImageIntegration's cache first to make sure that existing estimates are not retrieved from the cache).
If the NOISEXX estimates were computed from uninterpolated data, but the image that you are using has later been interpolated (for example, because it has been registered), then the NOISEXX values no longer correspond to the current pixel data of the image. ImageIntegration, ImageCalibration, Debayer and SubframeSelector compute noise estimates using an algorithm based on the multiresolution support (MRS), so another possibility is that the existing NOISEXX values have been calculated with a different algorithm, which may give slightly different values. In both cases, a final effective noise reduction (ENR) value that is not exactly 1.0 is normal.
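The effect of interpolation on a noise estimate can be demonstrated with a minimal sketch (assuming simple Gaussian noise and a half-pixel linear shift; this is an illustration, not PixInsight's actual registration code). Interpolation averages neighboring pixels, which partially cancels their independent noise, so a pre-registration NOISEXX value overestimates the noise present in the registered image:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=100_000)  # pure Gaussian noise, sigma = 1

# Shift by half a pixel with linear interpolation: each output sample is
# the mean of two adjacent input samples, so independent noise partially
# averages out (expected sigma drops to sqrt(0.5) ~ 0.707).
shifted = 0.5 * (noise[:-1] + noise[1:])

print(noise.std())    # ~1.0
print(shifted.std())  # ~0.707
```

A noise estimate stored before this shift (about 1.0) no longer matches the pixel data after it (about 0.707), which is exactly why ENR quotients computed against stale NOISEXX keywords deviate from 1.0.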
What seems to be happening here is that ImageIntegration estimates the noise in the image (x), takes the value reported by the NOISE00 keyword (y), and then computes y/x, which is reported as the improvement factor.
ImageIntegration evaluates the standard deviation of the noise in the integrated result image using MRS; this noise estimate is then compared to estimates acquired from the input data set (either NOISEXX keywords, newly computed estimates, or estimates retrieved from the cache). If different algorithms are used for noise evaluation, or if the original noise evaluation was done before interpolation, the resulting quotients won't in general be exactly one, even if three or more duplicates of the same image are being integrated.
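The quotient described above can be sketched as follows. This is a simplified illustration using a MAD-based robust sigma as a stand-in for PixInsight's MRS noise evaluation (the function name `sigma_mad` and the synthetic frames are assumptions for the example):

```python
import numpy as np

def sigma_mad(img):
    # Robust noise estimate from the median absolute deviation,
    # used here as a stand-in for the MRS algorithm.
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

rng = np.random.default_rng(1)
# Four synthetic input frames containing only Gaussian noise.
frames = [rng.normal(0.0, 1.0, size=(256, 256)) for _ in range(4)]
integrated = np.mean(frames, axis=0)  # simple average integration

sigma_inputs = np.mean([sigma_mad(f) for f in frames])  # "y": input estimates
sigma_result = sigma_mad(integrated)                    # "x": result estimate
enr = sigma_inputs / sigma_result
print(enr)  # averaging 4 independent frames gives roughly a factor of 2
```

If the input estimates had instead been read from keywords written by a different algorithm, or computed before interpolation, the quotient would differ slightly even for identical pixel data, which is the behavior being observed.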
Anyway, these differences have no practical consequence. It can't be stressed enough that effective noise reduction estimates are not absolute values (see the documentation for ImageIntegration). Your goal is to maximize ENR with the appropriate rejection of outlier pixels for a given data set. The particular ENR values are immaterial, as long as correct implementations of valid methods are being used during the whole process, starting from the raw data.