New Tool: LinearFit

Juan Conejero

PixInsight Staff
Hi everybody,

I have just released a new standard tool: LinearFit. The interface of LinearFit is rather simple, as shown below.

[Screenshot: the LinearFit tool interface (04.jpg)]

Despite the simple interface, the process is quite sophisticated and delivers great results, as you'll see in the example below.

LinearFit takes three parameters: a reference image and the two boundaries of a sampling interval in the normalized [0,1] range. When you apply LinearFit to a target image, it computes a set of linear fitting functions (one for each channel) and applies them to match the mean background and signal levels of the target image to those of the reference image. Fitting functions are computed from the sets of pixels whose values lie within the sampling interval. For a pixel value v to be used, it must satisfy:

r_low < v < r_high

where r_low and r_high are, respectively, the low and high rejection limits.
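As a minimal sketch of this selection rule (assuming the images are NumPy arrays in the normalized [0,1] range; the function name is hypothetical, and applying the rule to both images at once is my assumption, not a documented detail):

```python
import numpy as np

def sampled_pixels(reference, target, r_low=0.0, r_high=0.92):
    # Keep only the pixel pairs whose values lie strictly inside the
    # open sampling interval ]r_low, r_high[ in both images.
    mask = ((reference > r_low) & (reference < r_high)
            & (target > r_low) & (target < r_high))
    return reference[mask], target[mask]
```

The strict inequalities mean that pure black (0) and anything at or above the high limit are rejected, which excludes clipped shadows and saturated stars from the fit.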

Below you can see an example. The image to the left is a single 10-minute shot of the M42 region taken with a Takahashi Epsilon 180ED telescope, 18 cm of aperture at f/2.8. The image to the right is a single 2-minute shot with a Takahashi FS102 refractor, 10 cm at f/8. Both images have been acquired with a modified Canon 300D DSLR camera. Both are raw linear images shown without any STF applied.


Now this is the result after applying LinearFit to match the second image to the first:


Note how the second, short-exposure image has been accurately adapted to match the 180ED long exposure. Naturally, the short exposure has a very low SNR compared to the long one. In fact, this is a nice example to show practically the true meaning of some key concepts, such as signal-to-noise ratio as a function of exposure and field illumination, and image scaling. This is shown, along with the accuracy of the fit, in the comparison below.


LinearFit computes a linear fitting function of the form:

y = a + b*x

The coefficients a and b are calculated using a robust algorithm that minimizes the average absolute deviation of the fitted line with respect to the images being matched.
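The actual PixInsight implementation isn't shown in this thread, but a generic average-absolute-deviation (L1) line fit can be sketched as follows. For a fixed slope b, the intercept that minimizes the sum of absolute residuals is the median of y - b*x; the remaining one-dimensional cost is convex in b, so a simple ternary search suffices for illustration:

```python
import numpy as np

def l1_line_fit(x, y, b_lo=-1e3, b_hi=1e3, iters=200):
    # Fit y = a + b*x by minimizing the mean absolute deviation.
    def cost(b):
        a = np.median(y - b * x)  # optimal intercept for this slope
        return np.mean(np.abs(y - a - b * x))

    # Ternary search over the (convex) 1-D cost in b.
    for _ in range(iters):
        m1 = b_lo + (b_hi - b_lo) / 3.0
        m2 = b_hi - (b_hi - b_lo) / 3.0
        if cost(m1) < cost(m2):
            b_hi = m2
        else:
            b_lo = m1
    b = 0.5 * (b_lo + b_hi)
    return np.median(y - b * x), b
```

Unlike least squares, this L1 criterion is barely affected by a small fraction of outlier pixels (hot pixels, cosmic ray hits, saturated stars), which is what makes the fit robust.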

Both the reference and target images must be previously registered. This is necessary because if the images are not aligned, there is no correlation between the two sets of sampled pixels, and the fitted function will in general be wrong or meaningless. You can usually tell this has happened when the b scaling coefficient is close to zero or negative, although occasionally you may get an apparently reasonable fitting function for dissimilar images.

LinearFit writes some useful information on the console. For example, this is the output generated when I applied LinearFit with the images shown on the screenshot above:

Code:
LinearFit: Processing view: fc2min
Writing swap files...
902.01 MB/s
Sampling interval: ]0.000000,0.920000[
Fitting images: done.
Linear fit functions:
y0 = -0.086368 + 67.236738·x0
σ0 = +0.017761
N0 =  35.57% (2859272)
y1 = -0.142312 + 64.684015·x1
σ1 = +0.014826
N1 =  36.08% (2900658)
y2 = -0.188776 + 78.253923·x2
σ2 = +0.016941
N2 =  35.82% (2879668)

Along with the fitting functions (y = a + b·x), LinearFit informs you about the achieved average absolute deviation (sigma values) and the percentage of pixels that have been sampled to compute the fitting coefficients. The lower the sigma, the better: a low sigma means that the fitted line closely represents the set of sampled pixel value pairs. The number of sampled pixels depends exclusively on the sampling interval and on the overlapping region between both images. The example shown here is quite extreme; we know this from the relatively large b coefficients (about 67, 64 and 78, respectively, for the red, green and blue channels).
 
Nice
The only "problem" is that you need linear data :) For a long time I've been thinking about a Curves-based tool that performs the same task for nonlinear fits. It may even use local adjustments, instead of a global one, to deal with gradients. :)

PS: How is HDRComposition going? :D
 
Hi Carlos,

Thanks.

The only "problem" is that you need linear data

Not really. The images can be wildly nonlinear; as long as the differences between the reference and target images can be modelled reasonably well with a straight line, LinearFit will do a fine job.

PS: How is HDRComposition going?

Shhhh!!!!  ::)
 
So, it is not going :D
Are you implementing Vicent's method, or the paper we were looking at a long time ago? (the one cited by Fattal, from the GDHDRC algorithm)
 
Thanks - it looks like a wonderful tool.

Now, I haven't quite fully grasped the concept of linear data - can you please re-explain it? :-[
 
This looks great - thanks.

I assume that when using on mosaics with small overlaps (e.g. 5%) I just need to define a preview in each image for the overlapping areas?

Thanks
John Murphy
 
Vicent,

We gave it out as a bonus at the Adler conference. It looks like a great tool. I think it will be released in a few days.
Btw, we are all having a great time at the conference.
We have people from both the Spitzer Space Telescope and the Chandra telescope attending.

Max
 
Hi h0ughy,

I haven't quite grasped the concept of linear data - can you please re-explain it?

When we speak of a linear image or linear data, we refer to the relationship between pixel values and intensity of incident light. When an image is linear, that relationship is just a straight line.

A simplified but valid example: In a linear image, if a pixel A has a value x and another pixel B has received twice the light that A has received, then B has a value of 2x. If you plotted all pixel values versus intensity of incident light on a graph, the set of plotted points would approximate a straight line.
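The proportionality in that example can be stated in a couple of lines of code (a toy sensor model for illustration only, not PixInsight code; the function name and gain value are hypothetical):

```python
def sensor_value(incident_light, gain=0.001):
    # A perfectly linear sensor: the pixel value is simply
    # proportional to the amount of light received.
    return gain * incident_light

# Pixel B received twice the light of pixel A, so its value is 2x.
assert sensor_value(2 * 300) == 2 * sensor_value(300)
```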

Digital image sensors have an approximately linear response to incident light, including both CCD and CMOS sensors. Linearity of the raw data is very important, and it is in fact one of the most important factors that define the true power of digital photography. Linearity allows us to implement and achieve things that otherwise (that is, with nonlinear data) would be impossible or extremely difficult. For example, deconvolution can only be applied to linear images, because in a linear image the point spread function (PSF) is constant for all pixels (assuming an isotropic PSF). For the same or similar reasons, many procedures and algorithms, such as wavelet-based transformations, work much better and are much more controllable with linear images.

Hope this helps as a brief and superficial, but hopefully useful, explanation.
 
Hi John,

I assume that when using on mosaics with small overlaps (e.g. 5%) I just need to define a preview in each image for the overlapping areas?

That works for the StarAlignment tool. You can define previews roughly covering the overlapping areas to facilitate StarAlignment's work in difficult cases. However, you normally won't need to do that; just apply StarAlignment and it will detect the intersection between the images automatically. Previews are only necessary, in general, when the overlap is below 1%-2% or so.

However, note that this has nothing to do with the new LinearFit tool. LinearFit requires the images to be previously registered in order to work correctly (that is, to provide a meaningful result). LinearFit will compute and apply a linear fitting function to match its target image to its reference image. The basic task LinearFit performs is equalizing a set of images in terms of mean background and signal levels.
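Once a and b are known for a channel, that last step is a single linear transform per pixel. A minimal sketch (NumPy array in [0,1]; clipping back into range is my assumption, not a documented behavior):

```python
import numpy as np

def match_channel(target, a, b):
    # Apply the fitted function y = a + b*x to every pixel of the
    # target channel, then clip back into the normalized [0,1] range.
    return np.clip(a + b * target, 0.0, 1.0)
```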
 
Juan,

let me see if I understand how LinearFit works:
I have a few (33) frames of M31 taken at a Dark Site.
I then have 22 frames of Red, Green and Blue taken in light polluted backyard - binned 2x2.

I need to use LinearFit to "scale" the light-polluted frames to the dark-site ones. Correct?

If yes, when will I need to use LinearFit during the workflow? And do I have to apply LinearFit to all the frames, or just to the MasterStack of the individual color channels?

Thank you very much,
E.
 
Juan Conejero said:
When we speak of a linear image or linear data, we refer to the relationship between pixel values and intensity of incident light. When an image is linear, that relationship is just a straight line. [...]

So, what processes keep the data linear, and at what stage does the line get crossed?
 
I thought HDRWT should only be applied on non-linear images, right?

A fair comment - but I think the point being made was that applying HDRWT to a linear image will result in a nonlinear image.

As Carlos says, if you get the image out of the 'confines' of its original linear space first, HDRWT has more 'meat' to work its magic on.
 
Yes, but it will still be linear locally. This means some processes, like deconvolution, still have some meaning with HDRWT applied to raw/linear data. I think Juan gave an example of this some time ago.
 
Joan, Vicent and the other pi-workers:

Thanks for this fantastic HDRComposition. I didn't see it in September, but I worked with it today. Fantastic progress for my images!!

Thanks

José Manuel Pérez Redondo
 