Does this make sense? Even though subtraction is a linear operation, could the fact that a different amount is subtracted from each pixel lead to this change in the noise estimate?
integration1_clone
Calculating noise standard deviation...
* Channel #0
σR = 6.825e-05, N = 7072426 (57.14%), J = 4
* Channel #1
σG = 7.299e-05, N = 6927426 (55.97%), J = 4
* Channel #2
σB = 7.397e-05, N = 4916406 (39.72%), J = 4
integration1_clone_DBE
Calculating noise standard deviation...
* Channel #0
σR = 2.917e-04, N = 7072600 (57.15%), J = 4
* Channel #1
σG = 3.120e-04, N = 6927503 (55.97%), J = 4
* Channel #2
σB = 3.162e-04, N = 4916273 (39.72%), J = 4
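As a quick sanity check of the linearity argument, here is a minimal NumPy sketch (with made-up values, not your actual data) that subtracts a smooth, spatially varying gradient from pure Gaussian noise. Because the subtracted model is deterministic, the per-pixel differences cancel out and the measured standard deviation is essentially unchanged, which suggests the change you are seeing comes from something else in the DBE step (e.g. normalization or the correction model itself) rather than from the subtraction being per-pixel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure Gaussian noise "image"; sigma is a hypothetical value in the
# same ballpark as the measured ones above.
sigma = 7e-5
noise = rng.normal(0.0, sigma, size=(512, 512))

# Smooth background gradient: a different (but deterministic)
# amount at every pixel, standing in for a DBE background model.
y, x = np.mgrid[0:512, 0:512]
gradient = 1e-3 * (x + y) / 1024.0

image = noise + gradient          # gradient-contaminated image
subtracted = image - gradient     # per-pixel subtraction of the model

# The noise standard deviation survives the subtraction unchanged.
print(f"before: {noise.std():.3e}")
print(f"after:  {subtracted.std():.3e}")
```

So, by itself, subtracting a fixed per-pixel amount cannot raise the noise σ; only a step that rescales the data or adds its own variation could.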