Hi Georg,
I played around with your problem the other night, and I also spent a fair amount of spare time trying to understand the various colour systems that are available to us in PI. There are basically six of them, including the standard RGB.
These are as follows:-
HSV (also known elsewhere as HSB) - based on Hue, Saturation and Value (or Brightness)
HSI (also known elsewhere as HSL) - based on Hue, Saturation and Intensity (or Lightness)
XYZ - not a system I can easily visualise, so it is still the 'least understood' as far as I am concerned
Lab (more officially CIE L*a*b*) - this is apparently the colour space that most closely mimics the response of our eyes
Lch (more officially CIE L*c*h*) - further investigation needed before I can comment on this one
In the first two spaces, the Hue component is actually identical - the Saturation is defined slightly differently in each, and of course they differ in the third component
In the L*a*b* space, you can modify the 'Luminance' component of the image without affecting either of the two 'Chrominance' components - although there ARE limits. These limits come into play when you (have to) return from the L*a*b* colour model to the RGB colour model in order to display the image on your screen. The problem is that, as the RGB values tend toward the upper or lower limits, they are 'constrained' by the finite [0.000 - 1.000] range available to contain them.
Put another way - at 'pure white', you have equal RGB values [1,1,1] and at 'pure black' the same occurs again - [0,0,0]. Considering 'pure white' again, and then moving very slightly away from this point, you could end up with a 'colour' of (for example) [1,0.999,0.999] - but this colour, although 'red-dominant', will really be perceived as almost perfect white. You would have to move away quite a bit from the [1,1,1] point before you could really say that the colour was 'faint pink' (because of the dominance of Red).
And the same can be said down amongst the black levels.
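Just to make that concrete, here is a quick Python sketch (numpy only, using the simple HSV-style saturation formula rather than anything PI-specific, and with made-up values) showing how little 'colour' is actually left near pure white:

```python
import numpy as np

def hsv_saturation(rgb):
    """HSV-style saturation: (max - min) / max; 0.0 means a pure grey."""
    rgb = np.asarray(rgb, dtype=float)
    mx, mn = rgb.max(), rgb.min()
    return 0.0 if mx == 0 else (mx - mn) / mx

for rgb in ([1.0, 0.999, 0.999],   # 'red-dominant', but perceived as white
            [1.0, 0.90, 0.90],     # just a hint of pink
            [1.0, 0.70, 0.70]):    # now clearly 'faint pink'
    print(rgb, "-> saturation", round(hsv_saturation(rgb), 3))
```

The first one carries only 0.1% saturation, which is exactly why the eye still calls it 'white'.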
Which is a problem as you start to stretch your data to enhance the dimmer regions. If the stretch is a simple 'linear stretch' - using an expression such as ($T * 1.5) in PixelMath to give you a straightforward '50%' increase in brightness across the whole image, then you could only get away with using the '1.5' multiplier if the existing maximum ADU value in your image was less than, or equal to, (1/1.5), or 2/3 (0.667). When you apply the 50% increase multiplication, the [0.667] value will 'max out' at [1.000], with everything else also increasing in perfect proportion, but with NO data loss at the top end.
However, this 'linear enhancement' can only be applied where the image is NOT already as bright as it can be in some areas - i.e. where the maximum ADU level anywhere in the image is less than [1.000]. I doubt whether ANY of our astroimages would EVER fall into that category, and - if they did - it wouldn't be for very long!!
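If you want to see that 'headroom' condition in action, here is a throwaway numpy sketch - nothing to do with PixelMath itself, just a stand-in for ($T * 1.5) on some invented data:

```python
import numpy as np

def linear_stretch(img, k):
    """Multiply every pixel by k; anything pushed above 1.0 gets clipped."""
    out = img * k
    lost = int(np.count_nonzero(out > 1.0))   # pixels that 'max out'
    return np.clip(out, 0.0, 1.0), lost

rng = np.random.default_rng(0)

safe_img = rng.uniform(0.0, 1.0 / 1.5, size=(100, 100))   # max ADU <= 0.667
_, lost = linear_stretch(safe_img, 1.5)
print("clipped pixels when max <= 1/1.5 :", lost)          # 0 - no data loss

bright_img = rng.uniform(0.0, 1.0, size=(100, 100))        # max ADU near 1.0
_, lost = linear_stretch(bright_img, 1.5)
print("clipped pixels when max is ~1.0 :", lost)           # thousands lost
```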
So, we need to forget about linear enhancement - which is where the Histogram 'MTF' function comes into play (the 'Midtones' slider position), along with the far more flexible Curves process, where the shape of the transfer function curve is infinitely variable.
Sticking with the Histo MTF for a few moments, this gives you a quick and simple method of 'boosting' the 'intensity' of an image. The clever 'shape' of the curve ensures that 'most' of the 'boost' occurs down at the dimmer section of the Histogram. Which is exactly where we usually need it. And the MTF curve 'flattens' out as it gets up to the brighter section of the Histo, exactly where we want/need its effect least. But the Histo MTF curve is very much a 'brute force' tool - sometimes useful for that very reason, but not when we need 'finesse'.
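For the curious, the midtones transfer function is (as far as I understand it from the PI documentation) a simple rational function, and a few lines of Python show why the dim end gets the lion's share of the boost:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, 1 -> 1 and m -> 0.5.
    This is the usual rational form - my reading of what the Histo
    'Midtones' slider applies, so treat it as an approximation."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

m = 0.25   # midtones balance pulled down towards the shadows
for x in (0.05, 0.25, 0.50, 0.90):
    print(f"source {x:.2f} -> destination {mtf(x, m):.3f}")
# 0.05 nearly trebles, while 0.90 barely moves - the curve has 'flattened out'
```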
When we need 'finesse', we have to turn to the far more flexible 'Curves' process.
So, now we know WHICH function to use - Curves, not Histo or PixelMath. But how to implement it, or to what part of an image?
Going back to my (very brief) introduction to colour spaces (or models) above, and remembering that the CIE L*ab model most closely mirrors what we actually can 'see' - and noting that the 'L' component reflects the 'Luminance' and that the 'a' and 'b' components reflect the 'Chrominance' of the image - then it is fair to say that we only want to modify the L channel of the data, and that, if we don't touch the a or b channels, we shouldn't affect the colour at all.
So, we can then use the Channel Extract process to create that separation for us - giving us three new images (L, a, and b) from our original RGB image. And these three images can then be Channel Combined again to recreate the original RGB image - identically in this case, because we didn't change the interim images in any way.
But, if we change the L channel, then we can modify the 'Luminance' (or 'brightness') of the image prior to recombination - with the effect of changing the 'Luminance' of the final image - exactly what we are after. (And this argument is really ONLY valid for the L*ab and L*ch colour spaces - I know that you tried things in the HSV and HSI spaces, but you DO stand more of a chance of 'modifying' the 'perceived' colours if you are not working with the L*ab or L*ch models. I prefer the L*ab model, as I understand it the best - which is, still, not a lot!)
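If you want to see that split-and-recombine round trip outside PI, here is a small Python sketch using scikit-image as a stand-in for Channel Extract / Channel Combine (note that skimage assumes sRGB, which may not match your PI working space, and the 'image' here is just random numbers):

```python
import numpy as np
from skimage import color

rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))      # stand-in for a real image

# The equivalent of Channel Extraction into CIE L*a*b*:
lab = color.rgb2lab(rgb)
L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]

# ...and Channel Combination straight back again, with nothing touched:
rgb_back = color.lab2rgb(np.dstack([L, a, b]))
print("max round-trip error:", np.abs(rgb - rgb_back).max())   # essentially zero
```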
However, you still have to 'be careful' in how 'much' you tweak the L-channel. Remember that, as you 'squeeze' the ADU values towards either [0.000] or [1.000] you WILL start to 'desaturate' the colours, because all three (Rd, Gn and Bu) values have 'nowhere to go' and so they start to 'come together' in value - and RGB values that are 'equal' represent a totally de-saturated 'shade of grey'. That is just a fundamental fact.
So, if we agree that you can preserve colour information by only processing the L-channel, and keeping the a and b channels untouched, and that the processing applied to the L-channel needs to have the finesse available in the Curves process (not the Histo or PixelMath processes), then how can you use Curves to achieve this?
Well, first of all, Curves actually allows you to process the L channel without all the hassle of splitting up and recombining the RGB data. You can define a transformation ONLY for the Luminance, and PI will extract the L-channel, process it and recombine it with the a and b channels without you ever needing to be involved - so that is one of the problems dealt with.
Then you need to understand how the modification of a 'curve' actually affects the data (for the L-channel here, but it is the same for the other curve options). The 'default curve' is simply a 'straight line' from the bottom left to the top right of the curve window. In other words every ADU value in the 'source' (along the x-axis, at the bottom) is 'mapped' via the curve line to an identical ADU value in the 'destination' (along the y-axis, up the left side). Typically, curve modification at its simplest involves 'grabbing' a point on this straight line and 'curving' it up or down.
For any source ADU value, if the mapping point on the curve (directly above its position on the x-axis) is now ABOVE the original corner-to-corner straight line, then the destination ADU value will be 'bigger' - and, for the L-channel, 'bigger' means 'brighter'. Conversely, if the new line is lower than the original diagonal at this particular point, the destination ADU will be lower, or dimmer.
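Here is that mapping idea reduced to a few lines of Python - the real Curves points are joined with a smooth spline rather than straight segments, but np.interp is enough to show which way the values move:

```python
import numpy as np

src = np.linspace(0.0, 1.0, 6)                       # some source ADU values

# The default 'curve': a straight diagonal, so every value maps to itself.
identity = np.interp(src, [0.0, 1.0], [0.0, 1.0])

# Grab the point at x = 0.5 and drag it up to y = 0.65: the curve now sits
# above the diagonal, so everything between black and white comes out brighter.
lifted = np.interp(src, [0.0, 0.5, 1.0], [0.0, 0.65, 1.0])

for s, i, l in zip(src, identity, lifted):
    print(f"source {s:.2f} -> diagonal {i:.2f}, lifted curve {l:.2f}")
```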
So, now you can see that just a simple 'bowed curve' that you get by grabbing one point on the transfer curve, and dragging it upwards, is not really much better than the MTF curve in the Histo process. OK, you may get finer control, but the curve will be such that EVERY point is 'higher' than the diagonal corner-to-corner straight line was - and so every ADU value will be increased. And, for those values at the brightest end of the histogram, you could still end up with 'data compression' - which will ALWAYS end in de-saturation because, remember, pixels whose R, G and B values are similar are always shades of grey.
So, the trick then becomes one of creating a 'compound curve' - where only a small section of the curve is 'above the diagonal'. In fact, the standard enhancement curve is often the easily recognisable 's-curve' - one which often actually 'dips' at the bottom-left corner (which will serve to dim the faintest information, usually an advantage if this information is really just background noise anyway). Then the curve will rise above the line at the ADU level where the luminance enhancement is needed, before dropping back on to the original diagonal line for as long as possible - especially up at the upper right-hand end.
And these curve adjustments need to be delicate. You don't want the 'dip' at the start to be so severe that you force your sky background all the way back to an un-natural 'glossy black'. You might not want a dip at all - you will have to judge that requirement on every image. You do want to be 'flat to the end' - and that means you need to get the curve back onto the path of the original diagonal as soon as you can - otherwise you will blow out the saturation of the brighter levels (which you could have done far more easily with a nice strong Histo MTF !!!). And you don't want 'sharp transitions' at the point where you do drag the curve off the diagonal.
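To put some numbers on that, here is a Python sketch of a gentle compound 's-curve' applied to the L channel only - the control points are purely my own invention, and scipy/skimage are just standing in for the Curves process and PI's colour handling:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from skimage import color

# Hand-picked control points: a slight dip near black, a lift through the
# faint-to-mid range, then back on the diagonal well before white.
x_pts = [0.00, 0.05, 0.20, 0.45, 0.70, 1.00]
y_pts = [0.00, 0.03, 0.24, 0.55, 0.70, 1.00]
s_curve = PchipInterpolator(x_pts, y_pts)    # monotonic - no nasty overshoots

rng = np.random.default_rng(2)
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))      # stand-in image

lab = color.rgb2lab(rgb)
L = lab[..., 0] / 100.0                      # skimage's L* runs 0..100
lab[..., 0] = s_curve(L) * 100.0             # curve the Luminance ONLY
rgb_curved = np.clip(color.lab2rgb(lab), 0.0, 1.0)
```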
Now you can, of course, apply the Luminance-Curve transformation through a Mask - which is how you will see it done in PS. That way your curve can be just that little bit more aggressive - because you will (hopefully) have eliminated all the brighter regions with the mask - meaning that they are now 'protected' anyway. Use a combination of ATWT and Histo to create a nice, feathered-edge mask - perhaps incorporating a star mask as well - so that only the areas of interest are exposed. And then see if you can enhance just that zone of levels that you are interested in.
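The masked blend itself is nothing more exotic than this (a rough Python sketch - the threshold-plus-blur mask is only a crude stand-in for a proper ATWT/Histo mask, and the 'curved' image here is faked with a simple multiply):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))          # original (stand-in)
rgb_curved = np.clip(rgb * 1.3, 0.0, 1.0)              # pretend 'curved' version

# Build a luminance mask that exposes only the dimmer regions...
lum = rgb.mean(axis=-1)
mask = (lum < 0.6).astype(float)                       # 1 = exposed, 0 = protected
mask = gaussian_filter(mask, sigma=3.0)                # feather the mask edge

# ...and blend: curved data where the mask is open, original data elsewhere.
result = mask[..., None] * rgb_curved + (1.0 - mask[..., None]) * rgb
```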
You could even then invert the mask (exposing just the 'brightest' areas that were previously 'masked') and then use the Saturation-Curve to see if there is any extra detail available in those areas. (The PI 'behind the scenes' action is to split the image into HSV or HSI channels, and to then apply your Sat-Curve to the S-channel, before recombining back to RGB - again, saving YOU all the hassle of doing it yourself - which, of course, you can if you want to get a feel for the individual steps of the process).
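That 'behind the scenes' action, as I understand it, mocked up in Python with skimage's HSV conversion (again, just random data standing in for your image):

```python
import numpy as np
from skimage import color

rng = np.random.default_rng(4)
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))        # stand-in for the bright areas

hsv = color.rgb2hsv(rgb)                             # split into H, S, V
hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0.0, 1.0)   # gentle Saturation boost only
rgb_saturated = color.hsv2rgb(hsv)                   # recombine back to RGB
```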
Hopefully I have managed to get all of this right. This is certainly how I have understood the processes to work, given hours and hours of playing with PI, and even greater amounts of time trying to understand all the theory of image processing from HAIP, through all of Ron Wodaski's excellent books, and an infinite number of times pressing <Ctrl-Z>.
Cheers,