Linear vs non-linear images and what processes alter them


Some general question(s) on working with linear vs non-linear images.

Which processes work better with a linear image, and which processes change a linear image into a non-linear image?

Obviously a histogram stretch is going to change a linear image to a non-linear image (and I assume that it's the use of the mid-tone adjustment that causes the non-linearity of the result).  But what other processes out there might do the same (as a "hidden" effect)?
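The mid-tone adjustment can be checked numerically. A minimal sketch, assuming the rational midtones transfer function (MTF) form commonly documented for histogram stretches in PixInsight; the failure of additivity shows the stretch is non-linear:

```python
# Sketch: a histogram stretch's midtone transfer function is nonlinear --
# it fails the additivity test f(x+y) = f(x) + f(y).
# Assumption: the rational MTF form below is the one documented for
# PixInsight's HistogramTransformation.

def mtf(x, m=0.25):
    """Midtones transfer function with midtones balance m."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

x, y = 0.2, 0.3
lhs = mtf(x + y)         # f(x + y)
rhs = mtf(x) + mtf(y)    # f(x) + f(y)
print(lhs, rhs)          # the two values differ, so the stretch is nonlinear
```

Note that `mtf(m) == 0.5` by construction: the chosen midtones balance maps to middle grey, which is exactly what dragging the midtone slider does.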

The question comes from a comment in one of the processing examples, The Region Around NGC 7000 and IC 5070: ATrousWaveletTransform and HDRWaveletTransform in PixInsight:

This is very important. Note that all procedures applied so far are purely linear transformations. For deconvolution, the linearity of the data is actually indispensable. Wavelet transforms and most multiscale techniques are much more controllable for linear images. For example, working with a linear image, we can apply aggressive biases to small-scale wavelet layers, as in the example illustrated in Figure 17, without burning out the stars and other bright features. As a general rule, try to respect the linearity of the data as far as possible, until the last stages in your processing workflow.
Hi Cheyenne, to my understanding, the processes I consider linear are:

- Registration
- ScreenTransformation
- Deconvolution
- DBE (when done correctly)
- Background neutralization
- Color calibration
- Re-sampling
- Cropping
- SampleFormatConversion
- UnsharpMask, if you select the linear option and set the working space gamma to 1.0
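The last item in the list can be verified directly. A sketch, under the assumption that a "linear" unsharp mask is just the linear combination `out = img + k * (img - blur(img))` with no gamma applied:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch (assumption: a "linear" unsharp mask is the combination
#   out = img + k * (img - blur(img)),  with no gamma correction).
# Because the blur is a convolution, the whole operation satisfies
# homogeneity f(a*x) = a*f(x), so linear data stays linear.

def linear_unsharp(img, sigma=2.0, k=0.8):
    return img + k * (img - gaussian_filter(img, sigma))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
a = 3.0
lhs = linear_unsharp(a * img)
rhs = a * linear_unsharp(img)
print(np.allclose(lhs, rhs))  # True: homogeneity holds
```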

Wavelet HDR and DarkStructureEnhance, for example, will work a lot better after a non-linear stretch (histogram/curves transformation). I think ATrousWavelet is a special case that may be considered a linear transformation in certain cases, but I am still learning about this filter, so I am not sure yet how to verify this one.
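The layered structure behind the wavelet tools can be sketched like this; it is only an illustration of the idea, using a Gaussian pyramid as a stand-in for the actual à trous B3-spline scaling function:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of an a-trous-style decomposition (assumption: a Gaussian
# pyramid stands in for the B3-spline scaling function actually used).
# Each detail layer is the difference of two successive smoothings;
# the final residual holds the large-scale structures.

def layers(img, n=3):
    smooth, out = img, []
    for j in range(n):
        s = gaussian_filter(smooth, 2.0 ** j)
        out.append(smooth - s)   # detail layer at scale ~2^j
        smooth = s
    out.append(smooth)           # large-scale residual
    return out

rng = np.random.default_rng(2)
img = rng.random((64, 64))
ls = layers(img)
# a layer "bias" multiplies a detail layer before recombining:
boosted = ls[0] * 1.5 + sum(ls[1:])
print(np.allclose(sum(ls), img))  # True: layers sum back to the image
```

Summing the layers with unit biases reconstructs the original image, which is why boosting one layer is such a controllable operation.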


Well.. technically ScreenTransformation doesn't "touch" the image at all.  It's for displaying the image, so I don't think it "counts" <grin>

It's interesting that you say that wavelet processes work better on a non-linear image, which is different from what the tutorial says (see the quoted text) -- that was the reason for my asking the question, because I've read both and I'm confused. <grin>
Correct, ScreenTransformation does not change the data, but it is still a process by definition, so I listed it :) Based on my experience, I only get good results with wavelet HDR after a histogram or curves transformation. Just try to apply wavelet HDR after a screen stretch only and see the result.


This is a really interesting question, because I found that most of the functions work better after a non-linear stretch.
Background correction, for example, seems to work better in non-linear than in linear mode.
Also, all wavelet processes work best AFTER a non-linear stretch.
The same goes for mask operations.

Maybe it is due to the result we want.
If we want light processing (just to improve the image a little), linear mode can be better to preserve dynamic range; but if we want strong processing, non-linear mode offers the full range of processing across the dynamic range.

That is my conclusion after making a lot of tests. Now I start processing with a non-linear stretch.

But you're right, it is a strange feeling!
For me, I try to keep my data linear as long as possible. While my data is linear I get very good results with deconvolution, DBE, color calibration and background neutralization. I need to do a non-linear stretch before wavelet HDR.

This is an interesting topic; sorry for taking long to chime in on this.

Some facts:

- Deconvolution can only be applied (with a physical basis) to linear data. The reason is simple: if a nonlinear transformation is applied to the data, then no PSF can be valid for all pixels. In PixInsight, this applies directly to the Deconvolution and RestorationFilter tools.

- Wavelet transforms are, in general, more controllable with linear data. With nonlinear data, it is very easy to reach saturation of bright features before achieving the required local contrast enhancement (by increasing some layer biases). Our ATrousWaveletTransform tool has a way to prevent this, to some extent (Dynamic Range Extension), but at the cost of a lack of dynamics. The noise reduction algorithms implemented in the latest versions of this tool also work much better with linear data. Linear data is prone to generation of ringing artifacts, but we now have the right tools to fix them (the Deringing section of ATrousWaveletTransform), so this is not a practical problem with PixInsight.
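The PSF argument in the first fact can be sketched numerically. This is only an illustration under stated assumptions: a Gaussian blur stands in for the PSF convolution, and a square-root function for the nonlinear stretch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: why deconvolution needs linear data. The imaging model is
#   observed = PSF * true   (a convolution).
# If a nonlinear stretch g() is applied first, no single PSF maps
# g(true) to g(observed): the blur and the stretch do not commute.
# Assumptions: Gaussian blur as PSF, sqrt as the nonlinear stretch.

rng = np.random.default_rng(1)
true = rng.random((64, 64))
psf_blur = lambda x: gaussian_filter(x, 1.5)   # stand-in PSF convolution
g = lambda x: np.sqrt(x)                       # simple nonlinear stretch

stretched_then_blurred = psf_blur(g(true))
blurred_then_stretched = g(psf_blur(true))
print(np.allclose(stretched_then_blurred, blurred_then_stretched))  # False
```

Because the two orderings disagree, a deconvolution algorithm handed stretched data is trying to invert a convolution that never happened in that space.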

Notable tools and algorithms that don't work with linear data:

- HDRWaveletTransform. In general, this tool requires a previous nonlinear transformation. The reason is in the nature of the algorithm itself. A high dynamic range problem can only be stated and solved in a nonlinear space, at least with the algorithms and implementations currently available.

- ACDNR. Edge protection does not work correctly with linear data. Note that this is a limitation of the implementation, not of the algorithm. In the future I must rewrite this tool from scratch, and one of my goals will be proper handling of linear image data.

- GREYCstoration. It seems that the implementation of GREYCstoration that we are using (the reference implementation by David Tschumperlé, the creator of the algorithm) cannot build diffusion tensors properly with linear data. I want to contact David (who kindly supervised my implementation of GREYCstoration as a PixInsight module) regarding this topic. The reason is not actually because I want to apply GREYCstoration for noise reduction to linear data (mainly because it is an anisotropic algorithm), but because it can also be used as a wonderful inpainting and upsizing tool. See David's demonstration page for more information. Note that with the inpainting capabilities of this algorithm we could implement an excellent bloom removal tool.

Just some information to feed thoughts...
Juan, not a problem... you've been busy :)

So... it almost sounds like the "correct" processing order (if there is such a thing) would be the following, assuming all the calibration/registration/integration steps have been done (and considering only functions that actually alter the data, so the ScreenTransferFunction doesn't count).

Steps that need to be done before the linear to non-linear transformation

Background extraction (DBE)
Background neutralization
Color Correction
ATrousWavelet processing

Things that change linear to non-linear

Curves adjustments
Gamma correction
Histogram stretch

Steps that work better after the linear to non-linear transformations

HDRWaveletTransform
ACDNR
GREYCstoration

Steps that really don't care
Cropping, rotations, and other steps that only alter the physical shape of the image.

And finally, just to make sure that I'm on the right page: the linearity that we are discussing is the mapping in a color space (for a monochrome image, the mapping of pel values, e.g. 0.0 = black and 1.0 = white for the greyscale color space).

It's my understanding that a linear operation is one described by a first-degree polynomial with no constant term, and any such function must satisfy the following:

f(x+y) = f(x) + f(y) and f(ax) = af(x)
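These two properties can be checked numerically for any candidate transform. A sketch; the lambdas below are illustrative transforms, not PixInsight tools:

```python
import numpy as np

# Sketch: a direct numerical check of the two linearity properties,
#   additivity   f(x + y) = f(x) + f(y)
#   homogeneity  f(a * x) = a * f(x)
# applied to random test vectors. The example transforms are
# illustrative only, not PixInsight tools.

def is_linear(f, trials=20, tol=1e-9):
    rng = np.random.default_rng(3)
    for _ in range(trials):
        x, y = rng.random(100), rng.random(100)
        a = rng.random()
        if not (np.allclose(f(x + y), f(x) + f(y), atol=tol)
                and np.allclose(f(a * x), a * f(x), atol=tol)):
            return False
    return True

print(is_linear(lambda x: 2.5 * x))   # True: pure scaling
print(is_linear(lambda x: x ** 0.5))  # False: gamma-like stretch
print(is_linear(lambda x: x + 0.1))   # False: a pedestal is affine, not linear
```

The last case is worth noting: adding a constant offset already breaks additivity, which is why strict linearity means a first-degree polynomial with zero constant term.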

Thanks Juan, this confirms my empirical observations and my post above regarding linear processes. To be clearer on Cheyenne's question about the linearity conditions f(x+y) = f(x) + f(y) and f(ax) = af(x):
for me, any function that respects the properties of additivity and homogeneity is by design linear, so if you apply a "transformation" it should be easy to test the result for linearity by evaluating both sides of the equations.
Also, I did not mention it in my post above, but any conversion from one color space to another should also be considered a linear function.
Correct me if I am wrong.
As a matter of fact, rotation and scaling of an image may also be considered non-linear, depending on the interpolation method (such as a Gaussian).


Your question has me thinking...

In a purely mathematical sense, I believe that rotation and scaling of an image would not alter the linearity of the image within the color space. Remember that the linearity being discussed concerns the values within the color space.

Theoretically you could string all the individual pels into a one-dimensional array, sort it, and still not alter the linearity of the image. The histogram of this sorted image would not change; in fact, this is exactly how the histogram itself is created.
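That claim is easy to demonstrate. A sketch: sorting the flattened pixel array leaves the histogram untouched, because a histogram counts values, not positions:

```python
import numpy as np

# Sketch: rearranging pixel values (here, sorting the flattened image)
# leaves the histogram unchanged -- the histogram only counts values,
# never positions.

rng = np.random.default_rng(4)
img = rng.random((32, 32))
sorted_pixels = np.sort(img.ravel())

h1, edges = np.histogram(img, bins=16, range=(0.0, 1.0))
h2, _ = np.histogram(sorted_pixels, bins=16, range=(0.0, 1.0))
print(np.array_equal(h1, h2))  # True: same histogram either way
```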

Now in the practical, real-world sense, I suspect scaling and rotation might alter the linearity within the color space if they remove or add pels (due to interpolation) in a way that alters the histogram of the image in a non-linear fashion.
I think scale and rotation are linear. However, they do add a certain amount of "mathematical" noise to the image, because they need to "guess" (interpolate) values from the  source pixels to create target pixels. And this always has a certain degree of imperfection. So my gut feeling is: leave them for a stage as late as possible in the processing.

Yes -- I agree that one should leave the steps of rotation and scaling towards the end... however, the registration process might utilize both rotation and scaling in order to properly align images, especially if there is some field rotation involved within the images, or in the process of registering images when building a mosaic.
Hi All,

That is also why I recommend leaving the deBayering process as late as possible in the calibration process. If you deBayer all of your data prior to calibrating with Darks, Flats, FlatDarks and BiasOffsets, then you are introducing noise too early in the sequence.

That is why I don't like the 'pixel-squaring' algorithms that some (well-known) capture packages 'force' on you right at the start of processing. Pixel-squaring is 'trivial' and can (should, IMHO) be left right to the very end.

The only step that I have 'played with' - and with reasonable results, I might add - was to 'upsize' my starting image (after calibration, deBayering, etc.) right before starting out with PI post-processing. My thoughts were that a simple 2x upsample (or, better, 3x upsample) just presented the PI work engine with 'more space' to work its magic.

Yes, there is the inherent penalty of greater memory requirements and more execution time for some processes, but modern multi-core processors and cheap memory go a long way to help that downside.

And then, at the end of the main processing sequence, a 2x (or 3x) 'down-sample' seems to eliminate one last layer of noise, and one last layer of 'star-bloat'.
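The workflow described above can be sketched as follows; this is only an illustration under the assumption that scipy's spline-based `zoom` stands in for PixInsight's resampling tools, with a Gaussian blur as a placeholder for whatever processing happens at the larger scale:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

# Sketch of the upsample -> process -> downsample workflow.
# Assumptions: scipy's spline zoom stands in for PixInsight resampling,
# and a Gaussian blur is a placeholder for the actual processing.

rng = np.random.default_rng(5)
img = rng.random((32, 32))

up = zoom(img, 2.0, order=3)            # 2x upsample (cubic spline)
processed = gaussian_filter(up, 1.0)    # any processing at the larger scale
down = zoom(processed, 0.5, order=3)    # back to the original size
print(down.shape)                       # (32, 32)
```

Whether the round trip really yields a "snappier" result is the empirical question raised above; the sketch only shows that the geometry comes back out exactly.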

So, am I just imagining this to be worthwhile, or is there some validity in making things twice/thrice (or "two to the power of twice/thrice") as difficult in order to achieve a 'snappier' final image once it has been returned to its original size?

As implemented in PixInsight, pixel interpolation is a linear operation, irrespective of the applied interpolation function or filter.

For example, consider bicubic interpolation:

[Figure: a 4x4 grid of source pixels (large dots) around the interpolation point (small dot), at offsets dx and dy from the nearest pixel]

In this figure, large dots represent existing image pixels. A new pixel value is desired at the location represented by the small dot, defined by its offsets dx and dy, measured from the nearest pixel by coordinate truncation.

Bicubic interpolation algorithms interpolate from the nearest sixteen mapped source pixels, as shown above. Without entering more technical descriptions (which I can do if you want), the bicubic spline interpolation algorithm in PixInsight implements a convolution interpolation.

Simplifying, the 16 source pixels are weighted (multiplied) by an interpolation function, which is centered on the interpolation point and discretized at integer image coordinates. This forms an interpolation filter. In the figure above, there would be a filter element for each large dot. The sum of the 16 weighted source pixels is divided by the sum of the 16 filter elements (so that the interpolation preserves flux), and the result of this division is the interpolated pixel value. Note that this is just a convolution with a 4x4 kernel filter.

In general, with the obvious exception of bilinear interpolation, interpolation functions are nonlinear. However, the interpolation process is linear, irrespective of the interpolation function. This is because interpolation is carried out through convolution (in PixInsight), and convolution is a linear operation. In the bicubic case described above, for example, the interpolated value is a linear combination of 16 original pixel values, so the linearity (or nonlinearity) of the interpolated data with respect to incident light does not change. This has nothing to do with the interpolation function; what defines the linearity of convolution is that it consists of linear operations exclusively (multiplications and additions, and division by a constant value).

So don't worry about image registration: if you register raw linear images, the registered images are also linear. This also applies to all geometric transformations: translation, rotation and scaling are linear because pixel interpolations are linear.
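The conclusion can be verified numerically. A sketch using scipy's spline-based rotation as a stand-in for PixInsight's geometric tools:

```python
import numpy as np
from scipy.ndimage import rotate

# Sketch: geometric transforms implemented by pixel interpolation are
# linear in the sense discussed here. Rotating a scaled image equals
# scaling the rotated image, because each output pixel is a weighted
# sum (a convolution) of input pixels.
# Assumption: scipy's spline rotate stands in for PixInsight's tools.

rng = np.random.default_rng(6)
img = rng.random((48, 48))
a = 4.0

lhs = rotate(a * img, 30.0, reshape=False, order=3)
rhs = a * rotate(img, 30.0, reshape=False, order=3)
print(np.allclose(lhs, rhs))  # True: interpolation preserves linearity
```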


Thanks Juan! However, it is 02:30 now, so I will hit the textbooks again tomorrow (I'm just in from a brief but pleasant visit to the Bubble Nebula). I think I am following you, but more caffeine would help ::)