Hi Georg,
My first thought is to take the derivative of the image... this gives the slope. The saturated parts will have a slope of zero, and no noise on that slope, which should make them stand out clearly from any other flattish but unsaturated areas in the image, because those will still have noise on them. So even if channel X has been multiplied by 0.5, the 'flat tops' will still be detected. This assumes that the images are still linear.
This can be used to generate a mask of the saturated parts of channel X... call it Mask1.
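If it helps, the 'flat top' detection could be sketched like this in numpy (outside PixInsight, just to show the logic; the tolerance value is my own illustrative choice, not something tuned):

```python
import numpy as np

def saturation_mask(channel):
    """Flag 'flat top' pixels: the local slope is (numerically) zero.
    Genuine flattish but unsaturated areas still carry noise, so their
    slope is small but non-zero.  The 1e-6 tolerance is illustrative;
    note an exactly-constant background (e.g. clipped black) would also
    trigger, so a level cut could be added if needed."""
    gy, gx = np.gradient(channel.astype(np.float64))
    slope = np.hypot(gx, gy)
    return slope < 1e-6
```

Because the test is on the slope, not the level, it still works if the channel has been scaled.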
Then you need to 'sample' the halo to find its unsaturated colour. Do this by expanding Mask1 (MorphologicalTransform?) to give Mask2, then do something in PixelMath (my brain hurts at this point) so that you end up with Mask3, which is just the halo area that is in Mask2 but not in Mask1, i.e. a ring-shaped halo mask.
So Mask3 will show the halo parts of the image, i.e. the true unsaturated colours of the stars.
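In numpy/scipy terms the Mask2/Mask3 step would look something like this (`grow`, the dilation radius, is an assumed parameter; I'd guess the PixelMath equivalent amounts to `Mask2 - Mask1`):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def halo_ring_mask(mask1, grow=4):
    """Mask2 = Mask1 expanded by 'grow' dilation steps;
    Mask3 = pixels in Mask2 but not in Mask1, i.e. a ring
    around each saturated core."""
    mask2 = binary_dilation(mask1, iterations=grow)
    return mask2 & ~mask1
```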
Now you need to somehow 'correct' all the channel X pixels inside the saturated core so that they are in the same ratio with the other channels as they were in the halo. If you had a single saturated star in the image it would be easy: just take the ratio of the three channels through Mask3 and then apply that ratio to channel X through Mask1. But the problem is that there will be many saturated stars in an image, all with different colours, so you somehow have to apply this on a 'local' basis. I have no idea how you could do that, but I guess there are people here that do.
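One way the 'local' part might go, sketched in numpy/scipy: label each saturated blob separately, build its own halo ring, and take the ratio there. Everything here (function name, `grow`, `tol`, the assumption of a linear (H, W, 3) image) is illustrative, not a worked-out method:

```python
import numpy as np
from scipy.ndimage import label, binary_dilation

def repair_channel(img, x=0, grow=4, tol=1e-6):
    """Per-star repair: for each saturated blob in channel x,
    measure the colour ratio in that blob's own halo ring and
    rescale the blob's core in channel x to match."""
    out = img.copy()
    others = [c for c in range(3) if c != x]
    # Mask1: zero-slope ('flat top') pixels in channel x
    gy, gx = np.gradient(img[..., x].astype(np.float64))
    mask1 = np.hypot(gx, gy) < tol
    blobs, n = label(mask1)
    for i in range(1, n + 1):
        core = blobs == i
        # this star's Mask3: a ring around its core, outside all cores
        ring = binary_dilation(core, iterations=grow) & ~mask1
        if not ring.any():
            continue
        halo_other = img[ring][:, others].mean()
        if halo_other <= 0:
            continue
        k = img[ring][:, x].mean() / halo_other  # local colour ratio
        out[core, x] = k * img[core][:, others].mean(axis=1)
    return out
```

Each star gets its own ratio, so differently coloured stars are handled independently; a nebula in the ring would still bias the ratio, as noted below.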
This might be problematic if there is a nebula behind the star...depends on the halo definition I guess.
Anyway, those are my thoughts. I hope it helps, or maybe it's the wrong track to take?
Cheers
Simon