So this is getting back to my query about human perception. There is no rule that states that our vision would follow a (1,1) MTF, as far as I know. So perhaps relying too heavily on the (1,1) MTF is the source of the problem with too much red?
Human vision roughly follows a gamma curve (~ x^n). In PI the gamma curve is not used because the MTF yields a more contrasted look, and hence faint features are "better" represented (in terms of visual impact). It does not try to mimic human perception, but to enhance the representation of the data. And, in that sense, the MTF works quite well. An intermediate curve is the logarithmic function, log(1 + x/delta)/log(1 + 1/delta).
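For what it's worth, here is a minimal sketch (Python/NumPy; the function names and sample parameters are mine) comparing the three curves mentioned above: gamma, the midtones transfer function, and the normalized log curve. Inputs are assumed to be linear data in [0, 1].

import numpy as np

def gamma_stretch(x, n=0.4):
    # simple power law, roughly how human brightness perception behaves
    return x ** n

def mtf(x, m=0.25):
    # midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def log_stretch(x, delta=0.01):
    # normalized logarithmic curve, intermediate between the two above
    return np.log(x / delta + 1.0) / np.log(1.0 / delta + 1.0)

x = np.linspace(0.0, 1.0, 5)
print(gamma_stretch(x))   # gentler on faint values
print(mtf(x))             # strongest contrast boost in the shadows
print(log_stretch(x))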
Regardless of the function used, I believe that all of them have an almost linear response for low values. So I would inspect the data while it is still linear to find any differences in the color balance. Also, if you have a neutral gray in any part of the image, it should remain neutral after the nonlinear adjustment (MTF, gamma, whatever), provided you use the same parameters for all the channels. If there is a red cast, the MTF will probably amplify it due to its greater contrast (the ratio between the channels should remain more or less the same, but the colors will appear more saturated because of the higher contrast).
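To make that concrete, here is a quick numeric check (Python/NumPy; the patch values are invented for illustration): a neutral gray stays neutral under the MTF when the same parameters are applied to every channel, while a mild red cast survives the stretch and looks more saturated because the absolute gap between channels grows.

import numpy as np

def mtf(x, m=0.25):
    # same midtones transfer function as above, applied per channel
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

gray = np.array([0.05, 0.05, 0.05])   # neutral patch, linear data
cast = np.array([0.06, 0.05, 0.05])   # same patch with a mild red cast

print(mtf(gray))   # all channels still equal -> stays gray
print(mtf(cast))   # R/G ratio similar, but the R - G gap is much larger
print(cast[0] - cast[1], mtf(cast)[0] - mtf(cast)[1])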
If you do not want to change the hue of the image, here are some possibilities:
- Process the image in a luminance/chrominance space that preserves the hue information (HSL or Lch, for example).
- Extract the hue of the image before stretching, and then insert it again after that step.
- Stretch the Value channel (HSV model). It seems that this procedure yields better colors, at least in the stars (see the sketch after this list).
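Here is a minimal sketch of that third option (Python, using the standard-library colorsys module on a single pixel; a real implementation would operate on whole channels): only the V component is stretched, so hue and saturation are left untouched.

import colorsys

def mtf(x, m=0.25):
    # midtones transfer function, as above
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def stretch_value(rgb, m=0.25):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, mtf(v, m))   # hue/saturation preserved

star = (0.08, 0.05, 0.04)      # faint reddish star, linear values
print(stretch_value(star))     # brighter, same hue and saturation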
Regarding your equations for stretching, this just seems like a gamma stretch with a modified green channel. If gch = 0, you are completely replacing the green information with the one from the red and blue channels. If you are worried about keeping a "documentary" approach, I would not destroy that data in that way.
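Since I do not have your exact equations in front of me, the following is only a guess at the form you describe (Python; the gch parameter and the (R+B)/2 mix are my assumptions): a gamma stretch where green is blended with the average of red and blue, so at gch = 0 the original green data is discarded entirely.

def stretch_with_green_mix(r, g, b, n=0.4, gch=0.5):
    # gch = 1 keeps the original green; gch = 0 replaces it completely
    g_mixed = gch * g + (1.0 - gch) * (r + b) / 2.0
    return r ** n, g_mixed ** n, b ** n

print(stretch_with_green_mix(0.04, 0.10, 0.03, gch=0.0))   # green fully synthesized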