Dear Georg, you understood perfectly well the purpose and use of both tools!! Regarding your questions:
“For the mere mortals amongst us: Is there a simplified flux calibration procedure that we can use when imaging with one telescope at one geographic location in one night with a DSLR camera (with only approximately known filter, telescope mirror/transmission and sensor characteristics)? For example calibrating against a known star?”
The correct way is always to calibrate against standard photometric stars. This is how professional astronomers do flux calibration (a.k.a. photometric calibration). But you need to know at least the filter bandwidth!! Let us suppose that your image has already been corrected for flats, darks, bias, etc... If your image contains a star whose energy flux through your filter you know, let’s say “f” erg/cm^2/s, and in your image that same star (through that filter) gives a measured, background-subtracted signal of “c” counts, then the factor to calibrate your whole image is simply “f/c”: this is the number you multiply by to convert image counts into energy flux [to obtain spectral energy flux (i.e. per nm), you should also divide by the filter bandwidth].
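Just to make the arithmetic explicit, here is a minimal sketch of that calibration; the flux, counts and bandwidth values are made-up numbers for illustration only, not real measurements:

```python
import numpy as np

# Hypothetical numbers: a standard star with known flux "f" through the
# filter, and its background-subtracted signal "c" measured in the image.
f = 3.2e-9           # known energy flux of the standard star, erg / cm^2 / s
c = 48500.0          # background-subtracted counts for that star
bandwidth_nm = 70.0  # filter bandwidth, nm

scale = f / c        # calibration factor: (erg / cm^2 / s) per count

# Apply the factor to the whole (flat/dark/bias-corrected) image:
image_counts = np.array([[1200.0, 950.0],
                         [15300.0, 80.0]])      # toy 2x2 "image"
image_flux = image_counts * scale               # energy flux per pixel
image_spectral = image_flux / bandwidth_nm      # spectral flux, erg/cm^2/s/nm
```

The last line is the extra division by the bandwidth mentioned above, which turns in-band flux into an approximate spectral flux.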
“Filters usually have a bandwidth, sometimes rather large. Where does this go into the computation?”
The B3E relationships have been deduced for the monochromatic emission of black bodies, not for their emission in bands. Unfortunately, there is no equivalent analytical formula for black bodies seen through broadband filters. The Planck equation integrated within a finite spectral range must be solved numerically, because its integral is analytic only over 0 to infinity (which yields the Stefan-Boltzmann law). Doing such a numerical fit for every pixel in the image is, simply, a complete nightmare.
Thus, when observing black bodies through filters, these calculations will be accurate for narrow-band filters (once the intensities are corrected by dividing them by the bandwidth), and will diverge gradually as the bandwidth grows. Fortunately, the filters in standard photometric systems are narrow enough that a numerical integration is not necessary in general. Dividing the intensity measured through a filter by the filter width gives a value rather close to the monochromatic emission at the effective wavelength of that filter.
Then, to answer your question: B3E assumes spectral inputs!! So, if your images come from filters with different bandwidths, you should first divide every image by the bandwidth of its respective filter to approximate a spectral behavior (you don’t need to do this if your images are the output of the FluxCalibration module, as it gives spectral fluxes).
“How would I use B3E to generate an RGB image? Sensitivity of the eye is not located at 3 exact wavelengths, but rather in ranges.”
Well, in principle you can make a color image even from three narrow-band images centred at three arbitrary wavelengths; as you know, colors do NOT exist in nature, they are simply an interpretation of our brain. You can take an image in the infrared, another in the ultraviolet and a third in the blue zone, and there you are! A color image (this is rather usual in ESA or NASA images). Nevertheless, to approximate a “natural” RGB rendering you can consider this scheme:
Suppose you have two images obtained through photographic R and G filters, with bandwidths of, for example, 60 and 80 nm respectively, and whose central wavelengths (the “middle points”) are 630 nm and 540 nm respectively. And you want to generate a synthetic image corresponding to a B filter with a bandwidth of 70 nm and a central wavelength of 450 nm. Then:
a) Divide your R image by 60, and your G image by 80 (but keep the original ones).
b) Use the two new images as input for B3E, selecting as input wavelengths 630 nm and 540 nm, and as output wavelength 450 nm.
c) Your output image is assumed to be a narrow-band one centred at 450 nm, but you want to simulate a B filter with a 70 nm bandwidth, so multiply the output image by 70, et voilà, you have your synthetic B image.
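The steps above can be sketched in code for a single pixel. This is NOT the actual B3E implementation, just the black-body idea behind it, under the assumption that B3E fits a temperature from the ratio of the two spectral fluxes via the Planck law and then extrapolates to the third wavelength; the pixel values are invented for illustration:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(lam_m, T):
    """Black-body spectral radiance B_lambda (SI units)."""
    return 2 * h * c**2 / lam_m**5 / np.expm1(h * c / (lam_m * k * T))

def fit_temperature(ratio, lam1, lam2, lo=500.0, hi=50000.0):
    """Bisection for T such that planck(lam1,T)/planck(lam2,T) == ratio.
    For lam1 > lam2 (e.g. R over G) this ratio decreases monotonically
    with T, so a simple bisection works."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if planck(lam1, mid) / planck(lam2, mid) > ratio:
            lo = mid   # ratio too red -> body is hotter than mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical spectral fluxes for one pixel (the original counts already
# divided by the 60 nm and 80 nm bandwidths, as in step a):
F_R, F_G = 1.10, 1.00
lam_R, lam_G, lam_B = 630e-9, 540e-9, 450e-9

T = fit_temperature(F_R / F_G, lam_R, lam_G)       # step b: per-pixel fit
F_B = F_G * planck(lam_B, T) / planck(lam_G, T)    # synthetic spectral flux
B_image_value = F_B * 70.0                         # step c: times 70 nm
```

The key point is that only the ratio of the two inputs matters for the temperature fit, which is why feeding B3E spectral (bandwidth-divided) images rather than raw in-band counts is essential.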
And regarding Alejandro Tombolini, this can also serve as an answer: a possible usage is to generate color images from only two filters, in a more natural way than other methods.
Regards!