- I see that only nonlinear (stretched) images can be processed. Is this an unavoidable limitation of the implemented algorithms, or just the result of a lack of network training? The ability to work with linear images would be a very important feature of this process (example: deconvolution). If this is an algorithmic limitation, I can devise some ways to circumvent it that would be relatively easy to implement.

For a reasonable invertible stretch function, and assuming StarNet preserves the bit depth of its input, maybe something can be done.
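The stretch I have in mind is the midtones transfer function from the HistogramTransformation documentation, which has the convenient property that the inverse of MTF with midtones balance m is MTF with midtones balance 1 − m. A minimal sketch of that round trip (the parameter value m = 0.25 is just an example):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function, as in PixInsight's HistogramTransformation:
    mtf(m, 0) = 0, mtf(m, m) = 0.5, mtf(m, 1) = 1, for x in [0, 1]."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def mtf_inverse(m, x):
    """The inverse of mtf(m, .) is mtf(1 - m, .)."""
    return mtf(1.0 - m, x)

linear = np.linspace(0.0, 1.0, 11)   # toy "linear image" samples
stretched = mtf(0.25, linear)        # brightening stretch (m < 0.5)
roundtrip = mtf_inverse(0.25, stretched)

assert np.allclose(roundtrip, linear)  # the stretch inverts cleanly
```

So the stretch itself is perfectly invertible; any losses would have to come from what happens between the stretch and its inverse.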

I'll experiment with this later today.

EDIT/UPDATE:

After stretching a linear grayscale image with a simple MTF function, splitting it into stars and background with StarNet, applying the inverse MTF separately to the stars and background images, and then adding the results together, I noticed a loss in the highlights. The most probable reasons for this are:

- It is not easy to find non-trivial functions f for which f(x) + f(y) = f(x + y) (think of f as the inverse MTF and x, y as the stretched background and stars, respectively).
- My lack of knowledge of image processing fundamentals (although the documentation on HistogramTransformation was very helpful).
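To convince myself the first point is a mathematical effect and not a StarNet artifact, here is a toy one-pixel sketch. Everything specific in it is made up: the midtones value m = 0.25, the linear pixel values, and the perfectly additive "split" (real StarNet output is not this clean):

```python
def mtf(m, x):
    """Midtones transfer function; its inverse is mtf(1 - m, .)."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

m = 0.25                       # hypothetical midtones balance for the stretch
bg_lin, star_lin = 0.02, 0.60  # made-up linear background and star values
total_str = mtf(m, bg_lin + star_lin)

# Idealized additive split of the stretched pixel into background + stars:
bg_str = mtf(m, bg_lin)
star_str = total_str - bg_str

# Inverse-stretch each component separately, then add (the pipeline I tried):
recombined = mtf(1.0 - m, bg_str) + mtf(1.0 - m, star_str)

print(bg_lin + star_lin)  # ~0.62, the true linear total
print(recombined)         # ~0.55, noticeably less: the highlight loss
```

The shortfall is systematic, not accidental: for a brightening stretch (m < 0.5) the inverse f = mtf(1 − m, ·) is convex on [0, 1] with f(0) = 0, so f(x) + f(y) ≤ f(x + y). The recombined image can therefore only fall short of the original, and the gap is largest where the stretched values are high, which is exactly the loss in the highlights I observed.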

I'll end this here since it's the wrong place for this, and I'll patiently wait for your thoughts on the subject.