Surely the final outcome depends entirely on what the PixInsight user feels happiest with?
The human eye cannot make all of these colour distinctions in the first place, our retinae not having been blessed with narrow-band filters. So, whatever final outcome and colour-mix you choose, you very much get the chance to play the hand of a greater Deity.
Yes, you are perhaps trying to approximate what you believe a 'bionic' eye might perceive, but you just have to go with what you are happiest with.
Perhaps what we need, for narrow-band images, is a Process that gives us three (or even more?) slider controls, each of which is 'tied' to a narrow-band image that we would like to incorporate into an existing (or new) 3-channel 'colour' image. Each slider, moving left and right, would represent 'where' in the colour spectrum (i.e. at which wavelength, in nm) that image should be placed. Alongside each 'centre frequency' that the main slider would control, there could be a second control (or pair of controls) defining the lower and upper 'sidebands' of that centre frequency - i.e. conveying a bandwidth, or passband, for the narrow-band image being incorporated.
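Just to make the idea concrete, here is a rough sketch (in Python/NumPy, not PixInsight's PCL or PJSR) of what the core of such a tool might do: each narrow-band frame gets a centre wavelength, that wavelength is converted to an RGB tint using a crude piecewise-linear approximation of the visible spectrum, and the tinted contributions are summed. The function names and the weighting scheme are purely my own invention for illustration:

```python
import numpy as np

def wavelength_to_rgb(nm):
    """Very rough piecewise-linear mapping of a visible wavelength (nm)
    to an RGB tint. A real tool would use proper CIE colour matching."""
    if 380 <= nm < 440:
        r, g, b = (440 - nm) / 60, 0.0, 1.0
    elif 440 <= nm < 490:
        r, g, b = 0.0, (nm - 440) / 50, 1.0
    elif 490 <= nm < 510:
        r, g, b = 0.0, 1.0, (510 - nm) / 20
    elif 510 <= nm < 580:
        r, g, b = (nm - 510) / 70, 1.0, 0.0
    elif 580 <= nm < 645:
        r, g, b = 1.0, (645 - nm) / 65, 0.0
    elif 645 <= nm <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        r, g, b = 0.0, 0.0, 0.0          # outside the visible range
    return np.array([r, g, b])

def mix_narrowband(channels):
    """channels: list of (image_2d, centre_nm, weight) tuples.
    Each 'slider position' (centre_nm) picks the tint; the weight is
    a simple stand-in for the passband/intensity control."""
    rgb = None
    for img, nm, weight in channels:
        tint = wavelength_to_rgb(nm)
        contrib = weight * img[..., None] * tint[None, None, :]
        rgb = contrib if rgb is None else rgb + contrib
    return np.clip(rgb, 0.0, 1.0)

# Synthetic stand-ins for Ha (656.3 nm), OIII (500.7 nm), SII (671.6 nm) frames
ha = np.random.rand(16, 16)
o3 = np.random.rand(16, 16)
s2 = np.random.rand(16, 16)
rgb = mix_narrowband([(ha, 656.3, 1.0), (o3, 500.7, 0.8), (s2, 671.6, 0.6)])
print(rgb.shape)  # (16, 16, 3)
```

Moving a slider would simply change the `centre_nm` fed to `wavelength_to_rgb`, and the real-time preview would re-run the mix. The 'sideband' controls could be modelled by integrating the tint over a passband rather than sampling a single wavelength.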
With this Processing Tool, and a real-time preview of the final image being built, the colour mixing could be a very dynamic process. It could even help with the colour blending of RGB images - allowing RGB filter sets to be characterised and correctly mixed (even for OSC cameras).
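On the RGB-characterisation point: one way that "correct mixing" is often done is with a 3x3 colour-correction matrix. If the crosstalk of a filter set (or an OSC camera's Bayer dyes) were measured against known red, green, and blue sources, inverting that measured matrix would 'unmix' the channels. The numbers below are made up purely for illustration:

```python
import numpy as np

# Hypothetical measured response: row i = how filter i responds to
# pure R, G, B light. Off-diagonal terms are the crosstalk ('leakage').
M = np.array([
    [0.90, 0.15, 0.02],   # red filter leaks some green
    [0.10, 0.85, 0.12],   # green filter leaks red and blue
    [0.02, 0.10, 0.88],   # blue filter leaks some green
])
correction = np.linalg.inv(M)  # undoes the measured crosstalk

raw = np.random.rand(8, 8, 3)  # a raw (H, W, 3) OSC frame
# Apply the correction per pixel: true = correction @ raw_pixel
corrected = np.clip(raw.reshape(-1, 3) @ correction.T, 0.0, 1.0)
corrected = corrected.reshape(raw.shape)
print(corrected.shape)  # (8, 8, 3)
```

The same real-time preview could then show the effect of tweaking the matrix entries, which is essentially what a colour-calibration step does behind the scenes.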
But, maybe (I don't know) I am just missing some vital point. Has anybody ever tried something along these lines?