BTW, perhaps somewhat off-topic, is there some way to have color selections other than just RGB?
It is probably best to 'think backwards', starting from the point of view of the human eye. This is basically sensitive to three 'colours' - the Rd, Gn and Bu that are so familiar to us. So, in order to 'stimulate' the receptors in our eyes, we use the likes of a PC monitor to 'emit' those wavelengths of light, in varying intensities relative to each other, at different locations on the image. So, our monitors (nowadays) have three LEDs (one each of R, G and B) at every picture element (or 'pixel'). In days of old, we didn't have these LEDs, so we used 'plasma' and even 'electro-luminescent phosphors' to achieve the same result. And, in fact, a printed picture behaves in very much the same way.
So, we need to have a means of recording or storing all of this colour data - and, in PixInsight (like other software) we do this by using three 'arrays of numbers'. These arrays don't store colour information at all (!!), rather, they store 'intensity' information, for each pixel in the X-Y array of pixels that represent our desired image. But - very importantly - the three individual arrays (or 'colour planes' as they are often known) are each assigned to one of the three primary colours: Rd, Gn and Bu.
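As a minimal sketch of that idea (using NumPy here purely for illustration - this is not PixInsight's own tooling, and all the variable names are made up), a colour image really is just three X-Y arrays of intensities, each one assigned to a primary colour:

```python
import numpy as np

# A tiny 4 x 6 'image': three separate intensity planes, one per primary.
height, width = 4, 6
red   = np.zeros((height, width))   # plane assigned to Red
green = np.zeros((height, width))   # plane assigned to Green
blue  = np.zeros((height, width))   # plane assigned to Blue

# The arrays store intensity, not 'colour' - here, full Red intensity
# at the pixel in row 1, column 2:
red[1, 2] = 1.0

# Stacking the three planes gives the familiar height x width x 3 image.
rgb = np.dstack([red, green, blue])
print(rgb.shape)   # (4, 6, 3)
```

Note that nothing in the arrays themselves says 'Red' or 'Blue' - the colour meaning comes entirely from which plane is assigned to which channel.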
So - in our colour images (FITS, XISF, TIFF, PNG, JPG, BMP, GIF, etc.) we can ONLY define intensities for these three primary colours (R, G and B) - nothing else. We cannot directly define Luminance information, nor can we define Narrow-Band data. Remember that critical point - we can ONLY define intensities of Red, Green and Blue.
But, that is not the end of the story - far from it, in fact!
It is entirely up to 'us' to decide what intensity level we want to store at any given pixel, and in any given primary colour channel. In the simplest of cases we often strive for a 'perfect RGB colour match' in our channels, such that the displayed image is a 'true representation' of the colours we would perceive if we could look at the scene 'live' (and, commonly, the scene as it would appear if it was illuminated by 'white light', which is what we define our local star - Sol, the Sun - to emit).
But, nobody is going to come and kick down your door if you choose to, for example, swap the Gn and Bu channels for some personal 'artistic effect'. You can do what you want, you can choose to emphasize one colour, or range of colours, over others - you can even choose to 'de-saturate' your image completely, removing all colour, leaving you with a simple monochrome, or Luminance, image.
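Both of those manipulations fall straight out of the 'three arrays' view above. A quick NumPy sketch (again, illustrative only - PixInsight does this through PixelMath and ChannelExtraction, and the luminance weights below are the standard Rec.709 coefficients, just one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((4, 6, 3))          # stand-in for a real RGB image

# 'Artistic effect': swap the Green and Blue channels by re-ordering
# the planes - channel order 0, 2, 1 instead of 0, 1, 2.
swapped = rgb[:, :, [0, 2, 1]]

# 'De-saturate' to a single Luminance plane. A plain average works,
# but weighted sums (here Rec.709) better match perceived brightness.
lum_avg = rgb.mean(axis=2)
lum_709 = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
```

Either way, the result is still just arrays of numbers - the 'meaning' changes only because of how we chose to fill and assign the planes.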
Which finally brings us to the issue of NB imaging! It would be great if we had, say, another three channels of intensities in our (e.g.) FITS image - we could just dump in the H, S and O intensities and off we would go - BUT (and it is a massive BUT) whilst we can add as many channels as we want into a FITS file, we have no means of representing that intensity information on our three-channel monitors - which isn't really a problem given that our three-channel eyes wouldn't be able to decode that information anyway.
Instead, what every image processor ('us', including those who might work in a chemical-filled darkroom, or in other software packages aimed more at 'brightly-illuminated' image processing) must do is to take all of their source information (WB and NB, perhaps) and then 'mix' this into the three available channels.
And, once again, 'how' this is achieved is not governed by any rules. Even 'guidelines' can be too strict a term. The 'mix' or 'blend' that a user finally chooses will always be based on what 'they' feel they want to achieve. And, these desires can be defined by 'science' as well as 'art' - where certain areas of an image might be enhanced by using certain blending techniques (a 'scientific' approach), or where an overall image is bestowed with some 'aesthetic appeal' (an 'artistic' approach) to make the image 'look nice'.
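To make the 'mixing' concrete, here is one well-known example, sketched in NumPy (illustrative only - the random arrays stand in for calibrated narrow-band frames, and the palettes shown are just two of countless possible blends): the 'Hubble' or SHO palette maps SII to Red, Ha to Green and OIII to Blue, while an HOO blend re-uses OIII in two channels.

```python
import numpy as np

rng = np.random.default_rng(1)
h_alpha = rng.random((4, 6))   # stand-in for a calibrated Ha frame
oiii    = rng.random((4, 6))   # stand-in for an OIII frame
sii     = rng.random((4, 6))   # stand-in for an SII frame

# The 'Hubble palette' (SHO): SII -> Red, Ha -> Green, OIII -> Blue.
sho = np.dstack([sii, h_alpha, oiii])

# A bi-colour HOO blend: Ha -> Red, OIII -> both Green and Blue.
hoo = np.dstack([h_alpha, oiii, oiii])
```

Neither mapping is more 'correct' than the other - each simply pours non-RGB source data into the only three channels we have.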
So, until such time as Juan releases PixEyeball v1.0.1 (that uses bionic optical implants to link directly to PixInsight), we have to make the best of those three channels, and figure out our own methods for blending the data together.