Hi Harry,
Why are colour PSFs not allowed / desirable?
As I use a one-shot colour camera, can I use the luminance for the PSF, or would I have to create a greyscale image from the RGB image I have?
Color PSFs are not supported in our present implementation, mainly due to technical limitations.
However, unless you have some exotic optical problems in your imaging train, it is very unlikely that you will ever need a different PSF for each individual RGB channel of a one-shot color image.
Of course, you can extract your PSF from the luminance of your original image (e.g. from a small area that includes a star). This will be a grayscale external PSF that you can use to deconvolve your RGB image: the same PSF will be used to deconvolve each channel. To do this, you have to uncheck the "Luminance" check box in the Algorithm section of Deconvolution. However, doing this is usually a bad idea.
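Just to illustrate what extracting such a grayscale external PSF involves, here is a minimal Python sketch, assuming NumPy is available; the star position, cutout radius and background handling are purely illustrative, and this is not how PixInsight works internally. The caveats discussed further down about local noise and spurious structures still apply.

import numpy as np

# Minimal sketch: extract a grayscale PSF from a star in the luminance of
# an RGB image. 'rgb' is assumed to be a linear float array of shape (H, W, 3);
# the star position and cutout radius are hypothetical values.
def extract_psf(rgb, star_y, star_x, radius=15):
    # Simple luminance approximation (Rec. 709 weights); PixInsight's own
    # luminance transport may differ.
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    # Crop a small square region around the star.
    cut = lum[star_y - radius:star_y + radius + 1,
              star_x - radius:star_x + radius + 1].astype(np.float64)

    # Subtract a rough background estimate taken from the cutout border,
    # then clip negatives and normalize to unit sum so it behaves as a PSF.
    border = np.concatenate([cut[0, :], cut[-1, :], cut[:, 0], cut[:, -1]])
    cut = np.clip(cut - np.median(border), 0, None)
    return cut / cut.sum()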
Deconvolving the three RGB channels separately is generally an error. Usually it is much better to deconvolve just the luminance, leaving the chrominance intact. Due to the characteristics and limitations of the human vision system, the luminance is responsible for almost all of the detail perception. For this reason, deconvolving the luminance and chrominance together will increase noise (by transferring chrominance noise to the luminance) and provide no additional detail improvement.
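As a rough illustration of what "deconvolve just the luminance" means, here is a minimal Python sketch, assuming SciPy and scikit-image are available. CIE L*a*b* stands in here for whatever luminance/chrominance separation your software actually performs, and the plain Richardson-Lucy loop is only an example algorithm, not the implementation used by Deconvolution.

import numpy as np
from scipy.signal import fftconvolve
from skimage.color import rgb2lab, lab2rgb

# Minimal sketch of luminance-only deconvolution. 'rgb' is assumed to be a
# float image in [0, 1] and 'psf' a normalized 2D kernel; the chrominance
# components (a*, b*) are left untouched.
def deconvolve_luminance(rgb, psf, iterations=20):
    lab = rgb2lab(rgb)
    lum = lab[..., 0]                        # work on L only
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(lum, lum.mean())
    for _ in range(iterations):              # plain Richardson-Lucy iteration
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = lum / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    lab[..., 0] = estimate                   # chrominance left intact
    return np.clip(lab2rgb(lab), 0, 1)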
Deconvolution must be applied to linear images, that is, before any nonlinear transformation such as a histogram stretch. Keep in mind that deconvolution should be applied with a physical justification: no PSF can be valid for all pixels of a nonlinear image simultaneously. To deconvolve the luminance of a linear, one-shot color image, you must check both the "Luminance" and "Linear" check boxes of the Deconvolution interface.
I recommend that you read this tutorial, which includes a practical deconvolution example:
http://pixinsight.com/examples/deconvolution/Gemini-NGC5189/en.html
The example above uses three separate narrowband images, but it provides a lot of important information and shows you many techniques that you can apply directly to one-shot color images. The main difference is the "Linear" option of Deconvolution, which must be enabled in your case, as I've said above.
I have also understood that it is best to use an external PSF; why does your opinion differ from this?
Because it is very difficult to extract a good PSF from the image itself. In most cases, it is almost impossible. This is because any subimage extracted from the original data will generally be affected by local irregularities, such as noise and other spurious structures, that will invalidate the extracted PSF unless it is further transformed.
It is much more efficient and accurate to use a synthetic PSF based on a model built from direct measurements. For example, if you know the FWHM for a set of stars in your linear data, then you can use the FWHM value to derive a synthetic Gaussian PSF. This is the standard procedure for deep-sky images.
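As a minimal sketch of that standard procedure, assuming NumPy, one could derive a synthetic Gaussian PSF from a measured FWHM as follows; the kernel size heuristic and the 2.5-pixel example are illustrative values, not measurements.

import numpy as np

# Minimal sketch: build a synthetic Gaussian PSF from a measured FWHM (in
# pixels). The relation sigma = FWHM / (2 * sqrt(2 * ln 2)) is standard for
# a Gaussian profile; the kernel size below is an illustrative choice.
def gaussian_psf(fwhm, size=None):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # ~ fwhm / 2.3548
    if size is None:
        size = int(np.ceil(sigma * 6)) | 1                # odd width, ~±3 sigma
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()                                # normalize to unit sum

# Example: a 2.5-pixel FWHM measured on linear data.
psf = gaussian_psf(2.5)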