with my DSLR and camera lenses, i'd sometimes notice that the 3 color planes are not all in focus. many times the red channel would be out of focus compared to the green and blue channels. i'm not sure if this is due to IR bloat, or simply that chromatic aberration is always with us when using refracting optics.
at any rate, because the FWHM in the red channel is higher than in the other two, a PSF computed from the combined RGB is probably going to be the wrong size for the red channel, which leads to ringing in the red channel and blue/green artifacts in the final product.
so when i deconvolved this data, i would build a separate PSF for the R channel. usually the same PSF was usable on both the G and B channels. but that means deconvolving 3 times on the extracted R, G, and B channels, which is kind of a pain.
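to make the idea concrete, here's a rough sketch of per-channel deconvolution using scikit-image's Richardson-Lucy as a stand-in for PixInsight's Deconvolution tool. the sigma values are made up for illustration; in practice you'd measure the stellar FWHM in each extracted channel and size the PSFs from that.

```python
# Sketch of per-channel deconvolution with a separate, wider PSF for red.
# The sigma values below are hypothetical placeholders.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=15):
    """Normalized 2-D Gaussian PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def deconvolve_rgb(rgb, sigma_r=2.5, sigma_gb=1.8, num_iter=30):
    """Deconvolve R with its own (wider) PSF; G and B share a second PSF."""
    out = np.empty_like(rgb)
    out[..., 0] = richardson_lucy(rgb[..., 0], gaussian_psf(sigma_r), num_iter)
    for c in (1, 2):  # G and B usually tolerate the same PSF
        out[..., c] = richardson_lucy(rgb[..., c], gaussian_psf(sigma_gb), num_iter)
    return out
```

this is the "3 times on the extracted R,G,B" pain: three separate deconvolution runs, two PSF models, and you still have to recombine the channels afterward.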
there's also the topic of making a synthetic luminance channel from your RGB data. however, this is not quite as simple as just extracting L* from the RGB. at the very least you need to set the RGB weights to 1,1,1 with the RGBWorkingSpace tool before doing this. Juan tried to explain this all to me in the context of deconvolution, but i'm still not sure of the right way to create a synthetic luminance. L* is actually Lightness, which is not the same thing as luminance. and a luminance created from RGB is not the same as true luminance anyway, because of the gaps between the RGB filter passbands in the bayer matrix.
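the Lightness-vs-luminance distinction is easy to see numerically. below is a minimal sketch contrasting a plain 1:1:1 channel average (roughly what 1,1,1 RGBWorkingSpace weights give you) against CIE L* from scikit-image. note that `rgb2lab` assumes sRGB-encoded input, which is itself only an approximation of what PixInsight does internally.

```python
# A 1:1:1 weighted average of R,G,B -- a simple synthetic luminance.
# Compare against CIE L* (Lightness), which applies a nonlinear transform.
import numpy as np
from skimage.color import rgb2lab

def synthetic_luminance(rgb, weights=(1.0, 1.0, 1.0)):
    """Weighted average over the channel axis; weights are normalized."""
    w = np.asarray(weights, dtype=float)
    return rgb @ (w / w.sum())
```

comparing `synthetic_luminance(rgb)` against `rgb2lab(rgb)[..., 0] / 100` on the same data shows the two planes are genuinely different, not just rescaled copies of each other.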
anyway, assuming you can properly create a synthetic L channel, you could deconvolve that and then do an LRGB merge. this avoids having to do per-channel deconvolution on the RGB.
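the merge step can be sketched as a CIE Lab round trip: peel off the Lightness plane, swap in the separately processed luminance, and convert back. this is only a rough stand-in for PixInsight's LRGBCombination, which does more (chrominance noise reduction, saturation handling, etc.).

```python
# Sketch of an LRGB-style merge: replace the Lightness plane of an RGB
# image with a separately processed (e.g. deconvolved) luminance.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_merge(rgb, new_L):
    """new_L is expected in 0..1; Lab's L* axis runs 0..100."""
    lab = rgb2lab(rgb)
    lab[..., 0] = np.clip(new_L, 0.0, 1.0) * 100.0
    # lab2rgb can overshoot [0, 1] slightly, so clip the result
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```

the appeal is that only the single L plane needs deconvolution, and the chrominance comes along for free from the untouched RGB.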