Hi Adam,
However, by matching (fitting) a channel to a reference, it will multiply by a number that equals the slope of the reference image. Wouldn't this assign a (meaningless) ratio between images?
To understand how the LinearFit tool works, it is useful to describe the main steps that it performs:
- We have two images, which the tool identifies as reference and target. For simplicity, call these images R and T, respectively. Also for simplicity, assume that the pixels in both images have a single component; the process extends trivially to multichannel data such as RGB color images.
- Generate a set P of pixel value pairs: P := {{r_1, t_1}, ..., {r_N, t_N}}, where each r_i is a pixel of the reference image, and its t_i companion is the corresponding pixel at the same coordinates of the target image. The set P is formed with the subset of pixels of R and T whose values are within the range defined by the reject low and reject high parameters.
- Fit a straight line to all the points in the set P. Essentially, we are considering here that the components of each pair {r_i, t_i} are the X and Y coordinates of a point on the plane. The current versions of LinearFit implement a robust fitting algorithm based on mean absolute deviation minimization; the algorithm is described in Reference [1], with a few changes to adapt it to our platform. The fitted line can be characterized by the usual parameters, Y-axis intercept (b) and slope (m): y = mx + b. This linear function attempts to represent the "average difference", so to say, between R and T. For example, if R = T, then we obviously have m = 1 and b = 0.
- Apply the fitted linear function to all the pixels in T.
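The steps above can be sketched in a few lines of Python. This is only an illustration, not the actual PixInsight implementation: for brevity it fits the line with ordinary least squares, whereas LinearFit uses a robust mean absolute deviation minimization, and it works on flat lists of single-component pixel values. The function name and parameter names are made up for the example.

```python
def linear_match(R, T, reject_low=0.0, reject_high=1.0):
    """Adapt target pixels T to match reference pixels R (simplified sketch)."""
    # Step 1: build the set P of pixel pairs whose values fall within
    # the [reject_low, reject_high] range in both images.
    P = [(r, t) for r, t in zip(R, T)
         if reject_low <= r <= reject_high and reject_low <= t <= reject_high]
    n = len(P)
    assert n >= 2, "need at least two surviving pixel pairs to fit a line"

    # Step 2: fit a straight line t = m*r + b to the points in P,
    # with r_i as X and t_i as Y. (Least squares here; the real tool
    # uses a robust absolute-deviation fit instead.)
    sx = sum(r for r, _ in P)
    sy = sum(t for _, t in P)
    sxx = sum(r * r for r, _ in P)
    sxy = sum(r * t for r, t in P)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n

    # Step 3: transform T so it matches R. Since t ~ m*r + b, the map
    # that brings target values onto the reference scale is (t - b) / m.
    return [(t - b) / m for t in T], m, b
```

For instance, if T is a copy of R with halved contrast and a pedestal added (t = 0.5 r + 0.1), the fit recovers m = 0.5 and b = 0.1, and the transformed target reproduces R.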
The result of this operation is that the target image T is adapted to match the reference image R. In fact, perhaps a better name for this tool would be LinearMatch instead of LinearFit, because the term match represents more closely what it actually does. So the key concept here is that we are fitting a straight line to represent the difference between two images, not the distribution of pixel values in one of the images.
[1] W. H. Press et al. (2007), Numerical Recipes: The Art of Scientific Computing, Third Edition, Cambridge University Press, Sect. 15.7.3, pp. 822-824.