Hello all
I have developed the attached script as an educational and hopefully useful tool for myself. Others may find it useful, but I stress that it has applicability only in some circumstances. There are also some areas I am still exploring. Sorry in advance if this has already been done by someone else, and also for the length of this post.
The script, ‘CombineImages’, allows the user to select a number of monochrome images, assign whatever RGB colours are desirable for each image, and then combine them to form a colour image. The script does not seek to replicate existing PixInsight tools such as ‘ChannelCombination’ or ‘NBRGBCombination’. Standard narrowband palettes are supported, but you may wish to use a more appropriate tool if you are using standard three-colour palettes.
Note, you will need to open the images in PixInsight before running the script.
Why? This script allows the assignment of almost any colour in the rainbow, e.g. ‘Orange’, to each monochrome image. This may be useful for aesthetic reasons, but it is more likely to help when you wish to combine several images (not just three) from various sources and want more distinction between the elements being displayed.
Ok, really why? I have always felt we might be doing things backwards. We choose a channel, say ‘Red’, then choose a filter, say ‘Ha’, to assign to that channel. I would prefer to say: I have a ‘Ha’ image and wish to assign it to ‘Red’. In the simplest case, this is just another way of saying the same thing. But it becomes very different if you wish to assign ‘Ha’ to a colour with Red, Green and Blue components (Yes, I like ‘Orange’).
Also, I have always been troubled by the way we might blend in a simulated ‘Hb’ component. Often, the PixelMath might be something like this: Blue channel = (0.85*OIII)+(0.15*Ha). I understand the reasoning, given the relationship between the Ha and Hb distributions in space, but I’m not sure I like the math. The PixelMath could also quickly become indecipherable (at least to me) if any more filter types (from other sources) were blended in. CombineImages offers a different way of doing this (e.g. select a Ha image [together with a Blue hue] and use a reduced ‘Amount’ factor to simulate Hb).
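To illustrate the point, the two formulations produce the same blue channel per pixel. This is just a plain JavaScript sketch of the arithmetic, not code from the script itself; `blendBlue` and `combineBlue` are names I have made up for illustration:

```javascript
// Classic PixelMath blend: Blue = 0.85*OIII + 0.15*Ha
function blendBlue(oiii, ha) {
  return 0.85 * oiii + 0.15 * ha;
}

// CombineImages view: each image carries a hue and an 'Amount' factor.
// Assigning both images a pure Blue hue, with OIII at Amount 85% and
// Ha at Amount 15% (the simulated Hb), sums to the same blue channel.
function combineBlue(oiii, ha) {
  const images = [
    { pixel: oiii, blue: 1.0, amount: 0.85 }, // OIII -> Blue, Amount 85%
    { pixel: ha,   blue: 1.0, amount: 0.15 }, // Ha -> Blue, Amount 15%
  ];
  return images.reduce((sum, img) => sum + img.pixel * img.blue * img.amount, 0);
}
```

The advantage is that each image's contribution stays a separate, labelled entry rather than one growing expression.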
What’s wrong with the script? It may well have been done before; if so, I’ve had fun. Also, while I have used C++ before, using JavaScript in the PixInsight environment has been an interesting learning experience – there could be some undiscovered bugs. More importantly, the script currently uses PixelMath to multiply the assigned RGB components by the monochrome image pixel value (0-1, a form of modulation) and by an ‘Amount’ factor (normally left at 100%); the respective channels for each image are then summed and rescaled. I am not an expert in colour spaces (far from it), but I wonder whether the math of linear colour spaces should be used instead – i.e. use monochrome linear images as pseudo-luminance images applied to the RGB values (as pseudo-colour images) in a linear colour space.
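For anyone curious, the modulate-sum-rescale idea boils down to a few lines per pixel. Again, this is only a sketch in plain JavaScript under my own assumptions (the function and field names are mine, and I have assumed rescaling means dividing by the peak channel value when it exceeds 1):

```javascript
// Combine N monochrome images, each with an assigned RGB colour and an
// 'Amount' factor. Each pixel value (0-1) modulates its colour, scaled
// by Amount; channels are summed across images, then rescaled so no
// channel exceeds 1.
function combinePixel(layers) {
  const rgb = [0, 0, 0];
  for (const { pixel, colour, amount } of layers) {
    for (let c = 0; c < 3; c++) {
      rgb[c] += pixel * colour[c] * amount;
    }
  }
  const peak = Math.max(...rgb, 1); // rescale only if any channel exceeds 1
  return rgb.map(v => v / peak);
}

// Example: a Ha pixel assigned 'Orange' (1.0, 0.5, 0.0) plus an OIII
// pixel assigned a teal hue (0.0, 0.8, 0.8), both at Amount 100%.
const out = combinePixel([
  { pixel: 0.9, colour: [1.0, 0.5, 0.0], amount: 1.0 }, // Ha -> Orange
  { pixel: 0.6, colour: [0.0, 0.8, 0.8], amount: 1.0 }, // OIII -> teal
]); // approximately [0.9, 0.93, 0.48]
```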
The coding is my own, but I do use a couple of helper functions in a supporting file that include a small amount of modified code to support auto-STF. I would need help with an appropriate copyright statement if the script proves useful.
Anyway, this post is far too long as it is. Any thoughts on the utility of this script?
Regards to all
Dean
[v0.1a removed. See later version]