Hi Bud,
There are no rules written in stone. It is your accumulated experience, your taste, and your common sense that tell you when you're crossing the line, which is always somewhat thick and diffuse.
In general, it is much easier than it may seem at first. Looking at your website (which is very nice, by the way), you are undoubtedly doing very well right now, so I don't think you need to concern yourself too much with where the bounds of processing are: you already know them.
In deconvolving an image, how far can I go before it starts to become "non-documentary" because of the changing of information?
In general, the decision to stop deconvolving an image isn't difficult. On the one hand, regularized algorithms are globally convergent, so it is more a matter of using good PSF and regularization (noise reduction) parameters than of limiting the number of iterations. On the other hand, the artifacts generated by a wrong deconvolution are usually so conspicuous that there's no doubt at all.
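To make that concrete, here is a minimal sketch (in Python, not PixInsight code) of regularized deconvolution using scikit-image's Wiener filter. The synthetic image, the Gaussian PSF and the balance values are assumptions chosen just for illustration; the point is that the outcome is governed by the PSF model and the regularization strength, not by stopping at the right iteration.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import restoration

    rng = np.random.default_rng(0)

    # Synthetic linear test image: a sharp scene, blurred and slightly noisy.
    sharp = np.zeros((256, 256))
    sharp[64:192:16, 64:192] = 1.0                       # a few bright lines
    noisy = gaussian_filter(sharp, sigma=2.0) + rng.normal(0.0, 0.01, sharp.shape)

    # PSF model: a Gaussian whose sigma we assume was estimated beforehand.
    size, sigma = 15, 2.0
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    psf /= psf.sum()

    # Too little regularization amplifies noise; too much erases fine detail.
    for balance in (0.001, 0.01, 0.1):
        restored = restoration.wiener(noisy, psf, balance, clip=False)
        print(f"balance={balance}: residual std = {np.std(restored - sharp):.4f}")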
I wrote a tutorial years ago about deconvolution of a lunar image:
http://pixinsight.com/examples/deconvolution/moon/en.html
In this case we had no direct PSF measurements over the image, so we had to find good parameters by trial and error. If you look at figures 3 and 4 you'll see that selecting good PSF parameters is actually quite easy. See, for example, in figures 4c and 4d how a slightly wrong PSF shape causes an immediate disaster. Figure 8 shows a tougher decision. In that case I decided that figure 8b is showing artifacts due to sharpening of marginal data, based on the fact that the smallest structures shown are too small considering the instrumental and environmental conditions.
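For illustration only, here is a hypothetical sketch of the kind of parametric PSF you end up tuning by trial and error in a case like this: a generalized Gaussian controlled by a standard deviation and a shape exponent (a shape of 2 is a pure Gaussian; lower values give heavier wings). The function and parameter names are mine, not the tutorial's; the grid scan mimics the side-by-side comparisons of figures 3 and 4.

    import numpy as np

    def parametric_psf(size: int, std_dev: float, shape: float) -> np.ndarray:
        """Generalized Gaussian PSF kernel, normalized to unit sum."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        r = np.hypot(x, y)
        psf = np.exp(-0.5 * (r / std_dev) ** shape)
        return psf / psf.sum()

    # Scan a small parameter grid and compare the deconvolved results visually;
    # a slightly wrong combination shows up immediately as ringing artifacts.
    for std_dev in (1.5, 2.0, 2.5):
        for shape in (1.5, 2.0, 3.0):
            psf = parametric_psf(15, std_dev, shape)
            print(f"std_dev={std_dev}, shape={shape}, peak={psf.max():.4f}")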
With deep-sky images these things are more complicated, because the SNR is usually much poorer, so there is much more uncertainty. If you have measured or can measure the standard deviation of the PSF by analyzing stars on the image, always use that. Unfortunately, we still lack a PSF modeling tool in PixInsight, but this will change soon. Anyway you can easily guess a pretty good PSF by looking at the smallest stars on your image.
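As a rough sketch of what that measurement can look like outside PixInsight (the star positions and cutout size below are placeholders, and the image is assumed to be linear), you can fit a circular 2D Gaussian to a few small, unsaturated stars with SciPy and take the median sigma:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_2d(coords, amplitude, x0, y0, sigma, background):
        """Circular 2D Gaussian evaluated on flattened (x, y) coordinates."""
        x, y = coords
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return amplitude * np.exp(-r2 / (2.0 * sigma ** 2)) + background

    def star_sigma(cutout: np.ndarray) -> float:
        """Fit a circular Gaussian to a star cutout; return sigma in pixels."""
        h, w = cutout.shape
        y, x = np.mgrid[0:h, 0:w]
        p0 = (cutout.max() - cutout.min(), w / 2, h / 2, 2.0, cutout.min())
        popt, _ = curve_fit(gaussian_2d, (x.ravel(), y.ravel()),
                            cutout.ravel(), p0=p0)
        return abs(popt[3])

    # Usage: average over several small, unsaturated stars whose positions
    # you pick on your own image.
    # cutouts = [image[y - 8:y + 9, x - 8:x + 9] for (x, y) in star_positions]
    # sigma = float(np.median([star_sigma(c) for c in cutouts]))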
A tougher decision is whether to deconvolve at all. IMO, in most cases deconvolution is used wrongly. On the one hand, deconvolution only makes sense for linear images. On the other hand, deconvolution requires very high SNR. There are alternatives, such as wavelets, that can be as efficient as deconvolution, much faster, and more tolerant of noise.
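As a simplified sketch of that wavelet-style alternative, here is an à trous-like multiscale decomposition built from Gaussian blurs, with a small bias applied to the finest detail layers. This mimics the idea behind multiscale sharpening tools; it is not PixInsight's implementation, and the layer biases are made-up values.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_sharpen(image: np.ndarray,
                           biases=(1.2, 1.1, 1.0, 1.0)) -> np.ndarray:
        """Split the image into detail layers of increasing scale, apply a
        per-layer bias (finest layer first), and recompose."""
        current = image.astype(float)
        layers = []
        for level in range(len(biases)):
            smoothed = gaussian_filter(current, sigma=2.0 ** level)
            layers.append(current - smoothed)   # detail at this scale
            current = smoothed                  # residual for the next scale
        return current + sum(b * layer for b, layer in zip(biases, layers))

Keeping the biases close to 1 and boosting only the finest one or two layers is the conservative way to use this kind of sketch; pushing harder starts to emphasize noise instead of signal.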
My usual method is to do it until pleasing to the eye. This appears to be very unscientific, but does meet one of the rules of the DSA....specifically to "Produce emotions in the spectator."
This is the artistic part of astrophotography, and it is an essential part. If you were to do science, in the sense of measuring brightness photometrically for example, you couldn't apply fancy stuff such as most PI tools. In fact, you couldn't apply any nonlinear processing to the image. However, science, art and astrophotography are, and mean, much more than that, fortunately.
The main goals of astrophotography, as I understand them, are described in the DSA founding statement document. The goal of serving as a vehicle for the dissemination of science and culture is, in my opinion, the most important one. In this sense, one must try to maximize the amount of information conveyed by the final work. The problem is how to maximize information representation without starting to produce false information from marginal data. Having the knowledge and experience to stop before crossing this barrier is the responsibility of a good astrophotographer.
And what if a high resolution image of this object isn't available?
When this happens, you've got good news and bad news. The good news is that you're pushing your own limits, which is always exciting and forces you to grow. The bad news is that you're alone.
