Hi Larry,
An image is linear if its pixel values are a linear function of the source light intensities.
In simple words: Suppose we have three pixels A, B, C, such that:
B has received twice as much light as A
C has received twice as much light as B
then if the image is linear, we have:
B/A = C/B = constant
(replace 'twice' above with any constant factor)
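To make this concrete, here is a minimal sketch with made-up numbers: in a linear image the pixel value is (unknown) constant gain times the light received, so ratios of exposures show up directly as ratios of pixel values.

```python
# Hypothetical values for pixels A, B, C: each received twice the light
# of the previous one, and the sensor responds linearly (value = gain * light).
light = [100.0, 200.0, 400.0]          # light received by A, B, C
gain = 0.03                            # some constant sensor gain
A, B, C = [gain * v for v in light]

# Linearity preserves the ratios: B/A == C/B == 2
assert abs(B / A - 2.0) < 1e-9
assert abs(C / B - 2.0) < 1e-9
```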
This happens with most digital raw images (CCD and DSLR raw frames), because they have been acquired with linear sensors. It does not happen with nonlinear media, such as film, for example.
If you apply a HistogramTransform with a midtones balance different from 0.5, or a curve with CurvesTransform that isn't a straight line, then your image will no longer be linear.
However, if you manage to find the inverse transformation, you can restore the image to a linear distribution. Suppose you have applied a histogram transform with midtones balance = 0.005. Then a midtones balance of 1 - 0.005 = 0.995 is the inverse transformation. With CurvesTransform the situation is more complex, but in theory any (monotonic) curve has an inverse curve that undoes it. Of course, this usually requires working with sufficient accuracy, i.e. with a floating point format or the 32-bit integer format, or roundoff errors may become a serious problem.
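As a quick sketch of the midtones balance example, assuming the standard midtones transfer function MTF(m, x) = (m - 1)x / ((2m - 1)x - m) (which maps 0 to 0, 1 to 1, and m to 0.5): applying it with the complementary balance 1 - m undoes the stretch, up to roundoff.

```python
def mtf(m, x):
    """Midtones transfer function with midtones balance m, for x in [0, 1]."""
    if x == 0.0 or x == 1.0:
        return x  # endpoints are fixed points
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

# Stretch a linear value with m = 0.005, then invert with 1 - m = 0.995.
x = 0.25
stretched = mtf(0.005, x)
restored = mtf(0.995, stretched)
assert abs(restored - x) < 1e-9   # back to the original linear value
```

In double precision the round trip is essentially exact; in a 16-bit integer format the quantization after the first stretch is what makes roundoff a real problem.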
In general, a linear image is preferable when working with detail enhancement techniques. Deconvolution, in fact, must be applied to linear images for physical reasons. As you can see in the last tutorial, ATrousWaveletTransform also performs much better with linear images. Numerically, a linear image is much more controllable because all pixels are mutually related by the simplest possible function (a straight line).