I've given this a little bit of thought, but as an initial pass it could be possible to:
- Extract stars and fit PSFs
- Measure PSFs, and apply a k-means clustering algorithm
- Fit a k-dimensional surface indicating which PSF belongs to which part of the image, where each of the k dimensions maps to the average PSF of its cluster
- Iteratively run deconvolution for each dimension (e.g. "layer"), using the fitted surface as a mask (a rough sketch of the extraction/clustering steps follows this list)
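
Here's a minimal, untested sketch of what I mean by the first steps, assuming photutils, astropy and scikit-learn are available; the detection parameters, stamp size and k are all placeholders:

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from sklearn.cluster import KMeans

def cluster_psfs(image, k=5, stamp=25, fwhm=3.0, nsigma=5.0):
    # Detect reasonably bright stars above the background
    mean, median, std = sigma_clipped_stats(image, sigma=3.0)
    finder = DAOStarFinder(fwhm=fwhm, threshold=nsigma * std)
    sources = finder(image - median)

    half = stamp // 2
    stamps, coords = [], []
    for x, y in zip(sources['xcentroid'], sources['ycentroid']):
        xi, yi = int(round(x)), int(round(y))
        cut = image[yi - half:yi + half + 1, xi - half:xi + half + 1]
        if cut.shape != (stamp, stamp):
            continue                       # skip stars too close to the edge
        cut = cut - np.median(cut)         # crude local background removal
        total = cut.sum()
        if total <= 0:
            continue
        stamps.append((cut / total).ravel())  # unit flux, so clustering sees shape, not brightness
        coords.append((xi, yi))

    stamps = np.array(stamps)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(stamps)

    # Average PSF per cluster (the "buckets" described below)
    psfs = [stamps[labels == j].mean(axis=0).reshape(stamp, stamp) for j in range(k)]
    return np.array(coords), labels, psfs
```

(Proper PSF fitting, e.g. with an ePSF builder, would be better than raw cutouts, but this shows the idea.)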
So for a dimensionality of k=5, the PSFs extracted from the image would be clustered into 5 different buckets, and the average PSF of each bucket is computed. Since you know which PSFs belong to a bucket, you also know the coordinates of the stars that correspond to that PSF. Then you can fit a surface (e.g. a 2D surface spline) indicating where that PSF belongs in the image. After that it's only a matter of applying deconvolution for each layer, masking out the areas that don't belong to it. To make sure each part of the image only gets deconvolved once, it probably requires normalizing the weights so that the k layers sum to one at each pixel. Instead of fitting a surface for each layer it might also be possible to segment the image, but I think that might introduce other artifacts.
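
An equally rough sketch of the "layer" idea, continuing from the clustering snippet above (assumes scipy and scikit-image; the smoothing, iteration count and the choice of Richardson-Lucy as the deconvolution step are just my guesses, and here the whole frame is deconvolved once per cluster PSF and then blended with the weight maps rather than masked beforehand):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from skimage.restoration import richardson_lucy

def layered_deconvolution(image, coords, labels, psfs, n_iter=30):
    k = len(psfs)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Evaluating the RBF on every pixel is slow for large frames;
    # a coarse grid plus upsampling would be more practical.
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)

    # One smooth "this PSF belongs here" surface per cluster,
    # fitted through the star positions of that cluster
    weights = np.empty((k, h, w))
    for j in range(k):
        membership = (labels == j).astype(float)   # 1 for stars in bucket j, 0 otherwise
        rbf = RBFInterpolator(coords.astype(float), membership,
                              kernel='thin_plate_spline', smoothing=1.0)
        weights[j] = rbf(grid).reshape(h, w)

    weights = np.clip(weights, 0, None)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12  # layers sum to 1 per pixel

    # Deconvolve per layer with that layer's average PSF, then blend
    scale = image.max()
    norm = image / scale                            # Richardson-Lucy expects values near [0, 1]
    result = np.zeros_like(norm)
    for j in range(k):
        deconv = richardson_lucy(norm, psfs[j], n_iter)
        result += weights[j] * deconv
    return result * scale
```

Blending the per-layer results with smooth weight maps (rather than hard masks) is one way to avoid seams between regions, but it's the same "each pixel only deconvolved once in total" constraint, just expressed as weights summing to one.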
... Now, I'm not much of a mathematician, so this might make absolutely /no/ sense … but I'm curious what you think of the idea.
EDIT: Of course this is a poor man's substitute for multichannel/multiframe blind deconvolution as described in "Zhulina, Yulia V. (2006). Multiframe blind deconvolution of heavily blurred astronomical images. Applied Optics, 45, 7342-7352. doi:10.1364/AO.45.007342" (and its references to similar techniques) … but for the method above I might actually have some idea of how to implement it.