Hi Philippe,
I would like to know WHY there is a limitation when images are flipped?
To fully understand why this happens, one has to know the ambiguities that arise when building polygonal descriptors, and how they are solved in practice.
Imagine that we have four stars A, B, C, D and have to form a quad with them. The two most distant stars of the set are A and B. Then the quad's local coordinate system is defined by A and B, such that A is at the origin and B is at coordinates x = y = 1. The quad's hash code will be composed of the four coordinates of C and D: {X_C, Y_C, X_D, Y_D}.
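To make this concrete, here is a minimal sketch of how such a hash code could be computed. The function name quad_hash, the complex-number formulation, and the assumption that (A, B) is already the most distant pair are mine; this is not the actual StarAlignment implementation:

```python
def quad_hash(a, b, c, d):
    """Hash code of a quad, assuming (a, b) is the most distant pair.
    Local frame: a maps to (0, 0), b maps to (1, 1); the hash code is
    formed by the local coordinates of c and d: (xC, yC, xD, yD)."""
    za, zb = complex(*a), complex(*b)

    def local(p):
        # Similarity transform written with complex numbers:
        # z -> (1 + i)*(z - A)/(B - A) sends A to 0 and B to 1 + i = (1, 1).
        w = (1 + 1j) * (complex(*p) - za) / (zb - za)
        return w.real, w.imag

    xc, yc = local(c)
    xd, yd = local(d)
    return (xc, yc, xd, yd)
```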
However, how do we determine the order of the stars? For example, here are two different quads constructed with the same four stars:
Actually, there are two additional possibilities if we swap the C and D stars. With higher-dimensional descriptors the situation becomes more complicated.
As you see, the hash codes are completely different for the above quads. Both correspond to the same stars, so they should be matched in order to match their stars in the reference and target images. However, they cannot be matched at all, unless we solve all existing ambiguities. To solve them, we impose invariant rules, and force the geometry of every quad to fulfill all of them.
The first invariant solves the ambiguity in the orientation of the coordinate system: We choose A and B such that X_C + X_D < 1. Note that the quad at the right does not meet this rule, but the quad at the left does.
The second invariant solves the ambiguity in the order of the two inner stars C and D: We sort them such that X_C <= X_D. Again, only the quad at the left meets this rule. If we swap C and D in the quad at the right, it would meet the second rule but still not the first. So the only valid quad geometry for these four stars is the one shown at the left.
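The two invariants could be enforced along the following lines, building on the quad_hash sketch above (again a rough illustration, not the real code; swapping A and B maps every local coordinate x to 1 - x, which is why the first correction works):

```python
def canonical_quad(a, b, c, d):
    """Hash code of the unique quad geometry that satisfies both
    invariants: xC + xD < 1 and xC <= xD."""
    xc, yc, xd, yd = quad_hash(a, b, c, d)
    if xc + xd >= 1:
        # First invariant: swapping A and B maps (x, y) -> (1 - x, 1 - y),
        # so xC + xD becomes 2 - (xC + xD) < 1.
        xc, yc, xd, yd = quad_hash(b, a, c, d)
    if xc > xd:
        # Second invariant: order the inner stars so that xC <= xD.
        (xc, yc), (xd, yd) = (xd, yd), (xc, yc)
    return (xc, yc, xd, yd)
```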
Now don't forget that the purpose of building quad structures is to find matching star pairs in two images that are subject to translation, rotation and scaling (among others). When we find two quads with the same hash code (to within numerical tolerances), one on each image, the probability that the four stars can be mutually matched is relatively high. Now repeat this for a large number of quads formed with all stars detected in both images, and you have solved one of the hardest problems of computational geometry: the point correspondence problem (well, actually, you have just started to solve it, since you still have to deal with our best and inseparable friend: noise, aka uncertainty).
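As a deliberately naive sketch of that matching step (a real implementation would index the hash codes with a space-partitioning structure such as a k-d tree rather than comparing every pair):

```python
def match_quads(ref_hashes, tgt_hashes, tol=0.01):
    """Pair up quads whose hash codes agree to within a tolerance.
    ref_hashes / tgt_hashes: lists of (xC, yC, xD, yD) tuples.
    Returns a list of (reference index, target index) candidate matches."""
    matches = []
    for i, h_ref in enumerate(ref_hashes):
        for j, h_tgt in enumerate(tgt_hashes):
            if all(abs(u - v) <= tol for u, v in zip(h_ref, h_tgt)):
                matches.append((i, j))
    return matches
```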
Now imagine that one of the images is mirrored horizontally. It can be easily shown that we could be matching two quads with the same hash code, after applying our two invariant rules, but formed with different stars in both images. In other terms, mirroring creates an additional degree of freedom for which we have no valid invariant rule. The result is that the whole quads-based star matching algorithm fails.
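With the sketch functions above you can see the effect directly: mirroring swaps the x and y local coordinates of C and D, so even after enforcing both invariants the hash codes of the same four physical stars generally differ (the star coordinates below are made up for illustration):

```python
# Four stars; (A, B) is the most distant pair in both images.
stars = [(0.0, 0.0), (10.0, 9.0), (3.0, 2.0), (6.0, 7.0)]
# The same stars in a horizontally mirrored image: (x, y) -> (-x, y).
mirrored = [(-x, y) for x, y in stars]

print(canonical_quad(*stars))     # one hash code...
print(canonical_quad(*mirrored))  # ...and, in general, a different one
```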
Triangle similarity does not have this problem. Consider the following triangle formed with stars A, B, C:
The stars are labeled such that the sides are sorted in decreasing order: AB >= BC >= CA. Then the descriptor is formed with the following triangle space coordinates:

x = BC/AB
y = CA/BC
These descriptors are invariant to translation, rotation and uniform scaling. They are also invariant to mirroring. For example, the following triangle:
is a horizontally-mirrored version of the first one. The x,y descriptor coordinates are identical in both cases.
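Here is a minimal sketch of such a descriptor and of its mirror invariance; the function name and the brute-force search over labelings are mine, and degenerate (zero-area) triangles are not handled:

```python
import itertools, math

def triangle_descriptor(p1, p2, p3):
    """Triangle space coordinates (x, y) = (BC/AB, CA/BC), with the stars
    labeled A, B, C so that the sides satisfy AB >= BC >= CA."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    for a, b, c in itertools.permutations((p1, p2, p3)):
        ab, bc, ca = dist(a, b), dist(b, c), dist(c, a)
        if ab >= bc >= ca:
            return (bc / ab, ca / bc)

# Mirroring preserves distances, so the descriptor is unchanged:
t = [(0.0, 0.0), (8.0, 1.0), (3.0, 5.0)]
t_mirrored = [(-x, y) for x, y in t]
print(triangle_descriptor(*t))
print(triangle_descriptor(*t_mirrored))  # identical to the line above
```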
What will be the difference between triangle similarity (like before) and the "triangles" to "octagons" options?
Polygonal descriptors are much more robust than triangle similarity. This is because they have less intrinsic uncertainty. A quad ties two triangles together, so its uncertainty is one half that of a single triangle. In general, an n-sided polygon associates n-2 triangles in an invariant relative position, with an uncertainty reduction factor proportional to 1/(n-2).
Less uncertainty leads to more robust image registration, including the capability to register images under more difficult conditions, such as mosaics with small overlaps. Polygons are also more flexible structures and, if properly implemented (not only the polygonal descriptors, but also the data structures necessary to store, organize and search them efficiently), they are comparatively more robust to local distortions and global projective transformations.
Despite this, triangle similarity works pretty well for normal image registration of similar images. For this reason the latest version 1.31 of the BatchPreprocessing script uses it by default. So BatchPreprocessing now fully supports mirrored images.
I'll give you an old example: AUTO vs. "Bicubic Spline".
I'd need to take a look at the images to understand what happens in this case.