When estimating a PSF on an RGB image, DynamicPSF generates estimates on each color channel for each selected star. Then, when it generates an estimated PSF for the set, it is a monochrome image. I guess this is the luminance channel.
It isn't the luminance. DPSF generates a synthetic PSF image for the selected stars. If a star has several PSF fits (for several channels for example), each fit is treated as a different PSF. So we could say that the generated PSF model is more like an "average" of the selected stars.
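To make the "average" idea concrete, here is a hypothetical sketch (not DPSF's actual implementation): treat every per-channel Gaussian fit as an independent PSF, render each one, and average the rendered profiles into a single synthetic model.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Circular Gaussian PSF of given pixel size, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def synthetic_psf(sigmas, size=15):
    """Average the rendered profiles of all fits (each channel's fit counts
    as a separate PSF, as described above)."""
    return np.mean([gaussian_psf(size, s) for s in sigmas], axis=0)

# Per-channel fits for a couple of stars (R, G, B sigmas, in pixels):
model = synthetic_psf([1.8, 1.9, 2.0, 2.1, 1.7])
```

Since each rendered profile is normalized, the averaged model is normalized too, which is what a deconvolution kernel needs.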
In the current version of the DPSF tool, if you want to get a synthetic PSF model for each color channel, you have to split the image first and apply the same instance of DPSF to each channel. This is very easy to do if you save a process icon (just a few clicks).
Then, the Deconvolution tool offers the possibility of deconvolving each channel separately, but using a single luminance-based PSF.
For OSC and DSLR data, the PSF of the image should be the same for the three RGB channels, unless the optical system has severe color-dependent aberrations---and in such case one should fix them at the hardware level. The DynamicPSF tool should give you negligible differences among the three channels of one of these images.
For separate RGB and LRGB acquisition, it is true that the "strictly correct procedure" would be to treat each color component as what it is: a different image. In practice, however, doing this does not make any sense in most cases. The main reason is that what you deconvolve is the integration of (hopefully) many images, not a single frame, since you need to increase SNR for deconvolution. Hence, the PSF of each channel is actually an average, which tends to be the same for the whole data set, and also tends to be Gaussian.
Wouldn't it be more consistent to carry through the full RGB PSF model?
Perhaps more convenient in some cases, but not more consistent. The Deconvolution tool works with a monochrome PSF model for simplicity, and mainly for the reasons explained above. Again, if you want to deconvolve each channel separately, you can split the image first and then recombine the deconvolved channels.
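For anyone curious what the split/deconvolve/recombine route looks like numerically, here is a rough sketch using plain Richardson-Lucy with the same monochrome PSF applied to each channel. PixInsight's regularized algorithm with deringing is far more sophisticated; the function names and the FFT-based convolution here are my own illustration.

```python
import numpy as np

def _conv(img, psf):
    """Circular convolution via FFT, with the PSF centered at the origin."""
    k = np.zeros_like(img)
    kh, kw = psf.shape
    k[:kh, :kw] = psf
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))

def richardson_lucy(channel, psf, iters=30):
    """Plain (unregularized) Richardson-Lucy deconvolution of one channel."""
    est = np.full_like(channel, channel.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = channel / np.maximum(_conv(est, psf), 1e-12)
        est = est * _conv(ratio, psf_flipped)
    return est

def deconvolve_rgb(rgb, psf, iters=30):
    """Split, deconvolve each channel with the same PSF, recombine."""
    return np.stack(
        [richardson_lucy(rgb[..., c], psf, iters) for c in range(3)], axis=-1
    )

# Tiny demo: blur a synthetic point source, then restore it.
ax = np.arange(7) - 3
psf = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / 2.0)
psf /= psf.sum()
truth = np.zeros((32, 32, 3))
truth[16, 16] = [1.0, 0.8, 0.6]
blurred = np.stack([_conv(truth[..., c], psf) for c in range(3)], axis=-1)
restored = deconvolve_rgb(blurred, psf)
```

On this noiseless toy data the restored point is much sharper than the blurred one; on real (noisy) data you would need regularization and masking, as discussed below.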
You may get bleeding in one color if working with a luminance only PSF.
If that happens, then it is probably because (a) you are trying to deconvolve marginal (i.e., too noisy) data, (b) you are using a nonlinear color space to perform luminance/chrominance separations, or a combination of both. If you apply deconvolution correctly to the luminance (which is *not* the same as the CIE L* component), then this shouldn't happen. You can also protect your bright stars---which are "singularities" for deconvolution---with a suitable mask.
So then what's the point of the "target" setting in the deconv. tool?
I don't know; I wonder if it's broken? I don't think it can have the effect of creating a synthetic luminance image, because otherwise I'm not sure why I'd see different ringing in each channel the way I did.
No it isn't. Perhaps we are confusing luminance and lightness here. Deconvolution only makes sense for linear data. If applied to a nonlinear color component, such as a synthetic CIE L* component (lightness), then it is being used unrigorously with "cosmetic purposes", but not as deconvolution.
For this reason the Deconvolution tool lets you work on the CIE Y component (luminance) or on separate RGB/K components: these are the *only* valid options for deconvolution.
However, special care must be taken to deconvolve the implicit luminance of an RGB color image:
- Of course, the raw integrated data must be used, i.e. the unstretched output data of the ImageIntegration tool.
- The RGB working space of the target image must be linear. Open the RGBWorkingSpace tool, extend the Gamma section, uncheck the "Use sRGB..." option, and set Gamma=1. Now define custom luminance coefficients: for compatibility purposes, the default weights are those of the sRGB space, which are quite inappropriate for DSOs (e.g., the green channel has ten times more relevance than blue and three times more than red). If you want a uniform linear RGB working space, set the three coefficients equal to one; this is usually the best option. Then apply the RGBWorkingSpace process to the image.
- Select the Luminance (CIE Y) target in the Deconvolution tool.
- Use a suitable ScreenTransferFunction to inspect your image during deconvolution.
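To see why the default sRGB weights are a poor match for deep-sky work, compare the luminance they produce with the uniform weighting suggested above. This is a toy example; the coefficients are the standard sRGB Y weights, and the pixel values are arbitrary.

```python
# sRGB luminance weights vs. uniform weights (RGBWorkingSpace with all
# three coefficients set to 1).
SRGB_W = (0.2126, 0.7152, 0.0722)
UNIFORM_W = (1.0, 1.0, 1.0)

def luminance(rgb, w):
    """Weighted luminance of a linear RGB triple, weights normalized to sum 1."""
    return sum(c * wi for c, wi in zip(rgb, w)) / sum(w)

pixel = (0.2, 0.2, 0.6)  # a blue-dominated linear RGB pixel
print(luminance(pixel, SRGB_W))     # ~0.229: green-weighted Y undervalues blue
print(luminance(pixel, UNIFORM_W))  # ~0.333: equal weighting
```

With the sRGB weights, structures that emit mostly in blue (or red) contribute little to the luminance that gets deconvolved; uniform weights treat the three channels equally.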
Let me know if this helps.