Some general question(s) on working with linear vs non-linear images.
Which processes work better with a linear image, and which processes change a linear image into a non-linear image?
Obviously a histogram stretch is going to change a linear image to a non-linear image (and I assume that it's the use of the mid-tone adjustment that causes the non-linearity of the result). But what other processes out there might do the same (as a "hidden" effect)?
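To illustrate my assumption about the midtone slider, here is a quick numerical sketch (Python, with my own function and variable names) of the rational midtones transfer function that histogram stretches such as PixInsight's HistogramTransformation are based on. A midtones balance of 0.5 leaves pixel values unchanged, but any other value breaks proportionality:

# A minimal sketch of the midtones transfer function used by histogram
# stretches (function and variable names are my own).

def mtf(x, m):
    """Midtones transfer function for pixel value x and midtones balance m, both in [0, 1]."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# m = 0.5 is the identity, so the image stays linear:
print(mtf(0.2, 0.5), mtf(0.4, 0.5))    # 0.2 0.4

# m = 0.25 brightens the midtones; doubling the input no longer doubles
# the output, which is exactly the "hidden" non-linearity I mean:
print(mtf(0.2, 0.25), mtf(0.4, 0.25))  # ~0.429 ~0.667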
The question comes from a comment in one of the processing examples, "The Region Around NGC 7000 and IC 5070: ATrousWaveletTransform and HDRWaveletTransform in PixInsight":
This is very important. Note that all procedures applied so far are purely linear transformations. For deconvolution, the linearity of the data is actually indispensable. Wavelet transforms and most multiscale techniques are much more controllable for linear images. For example, working with a linear image, we can apply aggressive biases to small-scale wavelet layers, as in the example illustrated in Figure 17, without burning out the stars and other bright features. As a general rule, try to respect the linearity of the data as far as possible, until the last stages in your processing workflow.
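As a follow-up to that quote, a crude way to check whether an operation respects linearity is to test whether the output stays a fixed multiple of the input (the helper names below are mine, and the midtones stretch is the same function sketched above, inlined with m = 0.25 so the snippet runs on its own). A plain scale passes the test, while midtones or gamma stretches do not:

def is_proportional(f, xs=(0.01, 0.05, 0.2, 0.4), tol=1e-9):
    """True if f(x)/x is the same constant for every sample, i.e. f keeps the data linear."""
    ratios = [f(x) / x for x in xs]
    return max(ratios) - min(ratios) < tol

scale       = lambda x: 3.0 * x                                          # multiplication by a constant
mtf_stretch = lambda x: ((0.25 - 1.0) * x) / ((2.0 * 0.25 - 1.0) * x - 0.25)  # midtones stretch, m = 0.25
gamma       = lambda x: x ** 0.5                                         # gamma / power-law stretch

print(is_proportional(scale))        # True  -> still linear
print(is_proportional(mtf_stretch))  # False -> now non-linear
print(is_proportional(gamma))        # False -> now non-linear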