Hi Dave, OK, this will be a good challenge for my memory, AND my 'understanding' of Deconvolution.
First - think of the 'ideal' world - a star is a 'pinpoint' light source and, as such, it can ONLY illuminate ONE pixel on your CCD sensor.
A matrix representation of this might be
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0

However, in the 'real' (theoretical, still) world, we have diffraction effects of the optics, so the 'best' you can then get is the size of the 'Airy Disk'. In other words, your 'pinpoint' has been 'convolved', or 'spread out'. Now the matrix representation might be
0 0 0 0 0
0 1 1 1 0
0 1 3 1 0
0 1 1 1 0
0 1 1 1 0

When you then add in atmospheric distortion, you might then see
0 1 1 1 0
1 2 3 2 1
1 3 5 3 1
1 2 3 2 1
0 1 1 1 0

The aim of deconvolution is to figure out a way of creating an 'inverse matrix' such that what you 'actually see' can be made 'more like' what you 'actually want'.
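You can reproduce the idea of those matrices in a few lines of code. Here's a minimal sketch (pure NumPy; the `convolve2d` helper and the toy PSF values are my own, echoing the 'Airy Disk' matrix above, not anything from PixInsight) showing how convolution spreads a single-pixel star:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Convolution flips the kernel; this one is symmetric anyway.
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# The 'ideal' star: one bright pixel in the middle of a 5x5 sensor.
star = np.zeros((5, 5))
star[2, 2] = 1.0

# A toy PSF shaped like the 'Airy Disk' matrix above.
psf = np.array([[1, 1, 1],
                [1, 3, 1],
                [1, 1, 1]], dtype=float)

blurred = convolve2d(star, psf)
print(blurred)   # the single '1' has been 'spread out' into the PSF shape
```

Running this turns the first matrix into the second one, which is exactly what the optics do to your pinpoint.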
However, if you have 'stretched', or otherwise modified, your original raw data in a 'non-linear' way, then the deconvolution algorithm does not really have a chance of creating the inverse matrix, because it CANNOT know what YOUR 'extra' transformations were. The HUGE advantage of PixInsight is that you can perform a whole host of LINEAR transformations to your data, and still hope to be able to figure out a deconvolution 'kernel' (or matrix).
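To make the 'inverse matrix' idea concrete, here is a toy sketch of undoing a KNOWN blur by dividing in the Fourier domain (an 'inverse filter'). Real deconvolution, PixInsight's included, must also cope with noise and an estimated PSF; this sketch ignores both, and all the names in it are my own:

```python
import numpy as np

def fft_deconvolve(blurred, psf, eps=1e-6):
    """Inverse filter: divide the blurred spectrum by the PSF spectrum."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(G / (H + eps)))   # eps avoids division by ~0

# A pinpoint star, and a PSF like the 'Airy Disk' matrix above.
star = np.zeros((16, 16))
star[8, 8] = 1.0
psf = np.zeros((16, 16))
psf[7:10, 7:10] = [[1, 1, 1], [1, 3, 1], [1, 1, 1]]
psf = np.roll(psf, (-8, -8), axis=(0, 1))   # centre the PSF on pixel (0, 0)
psf /= psf.sum()                            # normalise so total flux is kept

# Forward model: circular convolution via FFT ('what you actually see').
blurred = np.real(np.fft.ifft2(np.fft.fft2(star) * np.fft.fft2(psf)))

# Inverse: recover the pinpoint ('what you actually want').
restored = fft_deconvolve(blurred, psf)
```

With noiseless data and a perfectly known PSF the pinpoint comes back almost exactly; the moment you add noise, or lose track of the PSF by stretching the data non-linearly, this simple division falls apart, which is why practical algorithms are much more careful.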
OK, even if you weren't going to have much success in establishing a deconvolution kernel in the first place, those linear transformations at least wouldn't make things any 'worse'. And, of course, the MAIN non-linear transformation, the Histogram Curve that actually allows you to 'see' what is in your image, doesn't have to be applied until much later in your workflow, because you can initially use the STF 'visualisation' transfer function to 'see' the image without actually 'changing' the LINEAR data needed by Deconvolution.
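The reason linear operations are 'safe' can be shown in a couple of lines: convolution commutes with a linear rescale, but NOT with a non-linear 'stretch'. A quick NumPy sketch (toy image and kernel are my own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))

# A small smoothing kernel, zero-padded for circular FFT convolution.
kernel = np.zeros((8, 8))
kernel[:3, :3] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
kernel /= kernel.sum()

def blur(x):
    """Circular convolution with the toy kernel, via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(kernel)))

# Linear rescale: blurring then scaling == scaling then blurring,
# so the same blur (and the same inverse kernel) still applies.
linear_ok = np.allclose(blur(image * 2.5), blur(image) * 2.5)       # True

# Non-linear gamma 'stretch': the two orders no longer agree, so the
# algorithm can no longer know what blur it is supposed to invert.
nonlinear_ok = np.allclose(blur(image ** 0.5), blur(image) ** 0.5)  # False

print(linear_ok, nonlinear_ok)
```

That `False` is the whole story: once a non-linear stretch is baked into the data, the 'blur' the algorithm sees is no longer the blur the atmosphere and optics applied.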
Clear as mud?
Cheers,