Hi Simon,
ACDNR does not work with linear images. The same is true for GREYCstoration.
The new noise reduction algorithm that I implemented in ATrousWaveletTransform does work very well on linear data.
As a general methodology, I don't like the concept of an image processing workflow too much. Each image poses different problems. What is important is to know what the different tools and techniques do, why we apply them, and what state the data must be in for us to apply them. The answers to these questions are usually quite complex, and they depend too heavily on critical characteristics of the data to fit well into the space of fixed workflows.
Having said that, IMO there are some general guidelines:
- Image calibration must be very accurate. In particular, flat fielding is very important. If flat fielding is poor and the image has additive illumination variations (almost all images have them today), then there is no way to correct for multiplicative variations.
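To see why the order of operations matters here, consider a minimal calibration sketch (synthetic data and names are mine, not a real pipeline; actual calibration also handles bias, dark scaling, overscan, etc.). The flat field corrects multiplicative variations by division, which only works after the additive terms have been subtracted:

```python
import numpy as np

def calibrate(raw, master_dark, master_flat):
    """Remove additive terms (dark/bias) FIRST, then divide by the flat.
    Dividing while additive terms remain corrupts the correction."""
    flat = master_flat / master_flat.mean()   # normalize flat to unit gain
    return (raw - master_dark) / flat

# Synthetic check: a uniform sky seen through vignetting plus dark current.
yy, xx = np.mgrid[0:32, 0:32]
gain = 1.0 + 0.1 * xx / 31.0          # fake multiplicative vignetting
dark = np.full((32, 32), 0.02)        # fake additive dark level
raw = 0.5 * gain + dark
cal = calibrate(raw, dark, gain)      # should come out flat again
```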
- Always work with at least 32-bit floating point data.
- When necessary, background correction (DBE, ABE) must be applied first. This is because if the illumination isn't constant (flat) throughout the whole image, no (useful) statistical model can describe the image. For example, in the presence of light pollution gradients, an image cannot be stretched because its histogram doesn't describe the image in terms of background/objects.
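DBE and ABE fit far more flexible models than this, but the core idea can be sketched as fitting a smooth surface to background sample points and subtracting it (everything below, including the plane model and sample positions, is a synthetic toy, not how the actual tools work):

```python
import numpy as np

def fit_background_plane(image, samples):
    """Least-squares fit of a plane z = a*x + b*y + c to a few pixel
    positions believed to be pure background, evaluated over the frame."""
    rows = np.array([p[0] for p in samples], dtype=float)
    cols = np.array([p[1] for p in samples], dtype=float)
    vals = np.array([image[p] for p in samples], dtype=float)
    A = np.column_stack([cols, rows, np.ones_like(vals)])
    coeff, *_ = np.linalg.lstsq(A, vals, rcond=None)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return coeff[0] * xx + coeff[1] * yy + coeff[2]

# Synthetic frame: flat sky (0.1) plus a left-to-right gradient.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = 0.1 + 0.002 * xx
model = fit_background_plane(img, [(5, 5), (5, 60), (60, 5), (60, 60), (32, 32)])
corrected = img - model + model.mean()   # remove gradient, keep mean level
```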
- In the case of color images, I'd neutralize the background (BackgroundNeutralization), even after background correction. If the background isn't neutral, no color calibration strategy will work easily. The reason is that in order to calibrate color following some solid criteria, the contributions of the background to each color channel should be the same (neutral). This is necessary because the limits between the background (=noise) and the objects (=signal) are uncertain by definition. If your image has no free background areas (or almost none), then you're in trouble.
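One simple neutralization strategy (a sketch of the idea only; BackgroundNeutralization offers more refined options, and all data and names below are synthetic) is to scale each channel so that the median of a known free-background region is identical in R, G and B:

```python
import numpy as np

def neutralize_background(rgb, bg_mask):
    """Scale each channel so the median of a pure-background region is
    equal across channels (matched here to the mean of the 3 medians)."""
    out = np.asarray(rgb, dtype=float).copy()
    meds = [np.median(out[..., c][bg_mask]) for c in range(3)]
    target = float(np.mean(meds))
    for c in range(3):
        out[..., c] *= target / meds[c]
    return out

rng = np.random.default_rng(0)
shot = rng.uniform(0.05, 1.0, (32, 32, 3))
shot[..., 0] *= 1.3                # simulate a red color cast
sky = np.zeros((32, 32), bool)
sky[:8, :8] = True                 # pretend this corner is free background
neutral = neutralize_background(shot, sky)
```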

- While the image is still linear, I'd decide whether to apply deconvolution or wavelet-based edge enhancement, or perhaps neither. Deconvolution should only be applied when the signal-to-noise ratio is very high; otherwise it is a tremendous waste of time and a quick way to destroy the data. Wavelet noise reduction can be applied here, perhaps at the same time as we enhance small-scale structures. Be prepared to build masks in this phase, especially star masks to protect stars and other bright objects.
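The essence of a star mask (PixInsight's StarMask tool is much more capable; this is only a toy illustration with synthetic data, and the threshold and growth parameters are mine) is to select pixels well above the background level and then grow the selection to cover star halos:

```python
import numpy as np
from scipy import ndimage

def star_mask(image, sigma=3.0, grow=2):
    """Binary star mask: pixels above median + sigma*MAD, then dilated
    so the protection extends over the halos around each detection."""
    med = np.median(image)
    mad = np.median(np.abs(image - med)) * 1.4826   # robust noise estimate
    mask = image > med + sigma * mad
    return ndimage.binary_dilation(mask, iterations=grow)

# Noiseless synthetic frame (so MAD is zero here): one star on flat sky.
field = np.full((21, 21), 0.1)
field[10, 10] = 1.0
mask = star_mask(field)
```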
- A nonlinear stretch should come next. This must be implemented with HistogramTransformation. HT is a high-precision tool that allows you to implement an aggressive stretch in a single operation. Please try to avoid an ad infinitum cascade of histogram transformations, which may be necessary with other applications that offer poor implementations, but not with PixInsight.
- Once the image is nonlinear, consider HDRWaveletTransform. This is the hardcore part.
- Noise reduction comes here. ACDNR is a purely isotropic noise reduction algorithm. Isotropy is a very important property of all algorithms applied to process astronomical images, especially deep-sky images.
- Color saturation, along with other curves if necessary, should be applied after noise reduction. Otherwise you'll boost chrominance noise to a "beyond repair" point. Masks are your best friends here.
- Additional noise reduction may be necessary here. You may consider GREYCstoration as a final "polisher", but be careful because it is an anisotropic algorithm. GREYCstoration is IMO the best general-purpose noise reduction algorithm, as good as (and in most cases far better than) its commercial --and pricey-- competitors. I often laugh when I see people using some of these inferior solutions as plugins whose cost is comparable to a PixInsight license.

This is a simplified sequence, and it reflects only my opinions, which may be wrong / partial / inaccurate / inappropriate / ... for any particular case.
Finally, this is a linear processing sequence. There are much more sophisticated strategies based on multiscale processing, which basically consists of separating image structures at different dimensional scales, with the help of wavelets, and processing them separately. This is Vicent Peris' specialty.

See an old but excellent example here.
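The scale-separation idea behind such strategies can be sketched with the à trous (starlet) algorithm, which is also the scheme underlying ATrousWaveletTransform (this is a bare-bones numpy/scipy illustration, not the actual implementation):

```python
import numpy as np
from scipy import ndimage

def atrous_layers(image, n_scales=4):
    """A-trous (starlet) decomposition: layer j holds structures of
    characteristic size ~2^j pixels; the residual holds everything larger.
    By construction, sum(layers) + residual reconstructs the input exactly."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline taps
    layers = []
    current = np.asarray(image, dtype=float)
    for j in range(n_scales):
        k = np.zeros(4 * 2**j + 1)
        k[::2**j] = kernel                 # dilate kernel by inserting zeros
        smooth = ndimage.convolve1d(current, k, axis=0, mode='reflect')
        smooth = ndimage.convolve1d(smooth, k, axis=1, mode='reflect')
        layers.append(current - smooth)    # detail at this scale
        current = smooth
    return layers, current

rng = np.random.default_rng(7)
frame = rng.random((32, 32))
layers, residual = atrous_layers(frame, n_scales=3)
# e.g. noise reduction: attenuate layers[0] (finest scale), then re-sum
```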
Hope this adds more help than confusion.