Dynamic Alignment - Why not fully automatic?
Because that is extremely difficult, if not impossible! Notice that the application you have mentioned is not able to perform the same tasks as our DynamicAlignment tool, which is much more flexible.
Let me elaborate a bit on this topic for a better perspective. A feature-based image registration process works by finding points of interest (also known as control points or alignment features) on two images, which are used as alignment references. Then it has to solve one of the most difficult problems in image processing: the point correspondence problem. This consists of determining pairs of corresponding points of interest on both images. Once a sufficiently large set of corresponding points has been found, building a geometrical transformation to register the images is relatively straightforward.
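To make that last step concrete, here is a minimal sketch (Python with NumPy; the function name and setup are mine, not PixInsight code) of building a geometrical transformation once corresponding points are known: a least-squares fit of an affine transform to the matched pairs.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched (x, y) coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    M = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3) design matrix
    # Solve M @ A.T ~= dst in the least-squares sense
    A_T, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return A_T.T                                      # (2, 3)

# Example: four points related by a pure translation of (5, -2)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
dst = src + np.array([5, -2])
A = fit_affine(src, dst)
```

Once `A` is known, the whole image can be resampled through it; the hard part, as explained above, is getting the correspondences right in the first place.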
To solve the point correspondence problem we need a robust descriptor for each point of interest. Robust descriptors allow us to characterize a given point of interest with minimal uncertainty. To achieve this goal, a descriptor must be invariant to some key geometrical transformations: translation, rotation and scale change as a minimum. The pairs of corresponding points can then be matched by some kind of classification procedure, and a robust model-fitting algorithm such as RANSAC can be applied to remove outliers (false pair matches). In the case of daylight images, each point of interest has a significant neighborhood from which a robust descriptor can be built using several image analysis techniques. For example, local gradient and texture information can be used to characterize the surrounding area of each point.
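As an illustration of the outlier rejection step, here is a toy RANSAC (my own simplified sketch in Python/NumPy, not an actual implementation) that fits the simplest possible model, a pure translation, to a set of tentative matches and discards the false pairs. Real registration tools fit richer models, but the sample/score/refit loop is the same.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, seed=0):
    """Toy RANSAC: fit a 2-D translation to matched point pairs,
    rejecting outliers (false matches).

    Returns (best_translation, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best_mask = np.zeros(len(src), bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))            # minimal sample: one pair
        t = dst[i] - src[i]                   # candidate translation
        resid = np.linalg.norm(dst - (src + t), axis=1)
        mask = resid < tol                    # consensus set
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Refit on all inliers for the final estimate
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_t, best_mask

# Eight correct matches (translation by (3, 4)) plus two false matches
src = np.array([[0, 0], [1, 1], [2, 0], [3, 2], [4, 1],
                [5, 5], [6, 3], [7, 2], [0, 5], [1, 7]], float)
dst = src + np.array([3.0, 4.0])
dst[8] = [100.0, 100.0]        # false match
dst[9] = [-50.0, 20.0]         # false match
t, inliers = ransac_translation(src, dst)
```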
SIFT and SURF are the two best-known examples of this class of techniques. SIFT is patented and subject to royalties, so we cannot implement it (we cannot pay for it, and besides, I am one of those stupid people who are radically against patented algorithms). SURF is free, and we are currently starting to work on an implementation of it for PixInsight. It will be the basis of a panorama generation tool (no time schedule yet).
However, in the case of stars used as alignment features, we have an interesting paradox:
- Stars are nearly ideal image registration features because they are essentially point-like structures and can be detected very accurately and easily.
- A star is probably the worst possible image registration feature because it lacks a significant neighborhood. A star is basically a point-like structure surrounded by a constant level (the local background). The only significant neighbors of a star are other stars.
In the StarAlignment tool I have implemented an elegant and robust algorithm where triangle similarity is used to build a robust descriptor for each star. You can find complete information and references in the reference documentation for StarAlignment. As described there, however, the problem with triangle similarity is that it is intolerant of distortion: it can only be used to define an essentially affine transformation. In my implementation I have pushed triangle similarity to tolerate some global distortion (with a careful triangle generation strategy and a versatile RANSAC routine), but that isn't sufficient to support huge differential distortions, such as those involved in mosaics of wide-field images.
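The core idea can be sketched in a few lines (Python/NumPy; a simplified illustration of the principle, not the StarAlignment code): the side-length ratios of a triangle are unchanged by translation, rotation and uniform scaling, so they serve as a similarity-invariant descriptor for a star and two of its neighbors.

```python
import numpy as np

def triangle_descriptor(p1, p2, p3):
    """Similarity-invariant descriptor of the triangle formed by three stars.

    Side-length *ratios* survive translation, rotation and uniform
    scaling, so matching triangles in two images yield (nearly) the
    same descriptor, regardless of camera orientation or image scale.
    """
    pts = np.array([p1, p2, p3], float)
    sides = np.sort([np.linalg.norm(pts[i] - pts[j])
                     for i, j in ((0, 1), (1, 2), (2, 0))])
    longest = sides[2]
    return (sides[0] / longest, sides[1] / longest)  # each in (0, 1]

# The same 3-4-5 triangle, rotated 90 degrees and scaled by 3:
# both yield the descriptor (0.6, 0.8)
t1 = triangle_descriptor((0, 0), (4, 0), (0, 3))
t2 = triangle_descriptor((0, 0), (0, 12), (-9, 0))
```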
The whole problem reduces to finding more flexible robust descriptors for stars. Note that the SIFT and SURF algorithms, which are the gold standards in panorama generation (maybe there is something more advanced, but I am not aware of it), allow for a full projective transformation (a homography) instead of an affine transformation. If we were able to find something similar for stars (in both efficiency and robustness), we would take a big step forward. I have some ideas and have made some initial experiments that look promising. This is definitely one of our main R+D topics for this year.
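For reference, once enough correspondences are available a homography can be estimated with the standard Direct Linear Transform (DLT) from four or more matched pairs. Here is a compact sketch (Python/NumPy, illustrative only; function names are mine):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: 3x3 homography H with dst ~ H @ src
    (homogeneous coordinates), from N >= 4 matched point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, float)
    # The solution is the right singular vector of the smallest
    # singular value (the null space of A for exact data)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so H[2, 2] == 1

def apply_homography(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]                # back from homogeneous coordinates

# Recover a known homography from five exact correspondences
H_true = np.array([[1.0,   0.2,   3.0],
                   [0.1,   1.0,  -2.0],
                   [0.001, 0.002, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = [apply_homography(H_true, p) for p in src]
H = fit_homography(src, dst)
```

The extra two degrees of freedom of the homography (eight versus the affine transform's six) are what let it absorb perspective, which is why it is the natural model for panoramas.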
That said, DynamicAlignment is a completely different beast. It can be used to register two images subject to *any* kind of arbitrary distortion. The only limit is that your eye+brain system must be able to detect at least three star matches between both images. Don't expect this functionality to be implemented as a fully automatic tool such as StarAlignment. You can, however, expect a new version of DynamicAlignment with many improvements and a mosaic generation mode. We'll start working on it soon.
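Why three matches? An affine transformation has six parameters, so three matched pairs (six coordinate equations) are the minimum needed to anchor even the simplest model; DynamicAlignment's actual distortion model is of course much richer than this. A sketch of that minimal case (Python/NumPy, my own illustration):

```python
import numpy as np

def affine_from_three_matches(src, dst):
    """Exact affine transform from exactly three matched star pairs.

    An affine transform has six parameters, so three pairs (six
    coordinate equations) determine it exactly -- provided the three
    stars are not collinear.
    """
    src = np.asarray(src, float)                  # (3, 2)
    dst = np.asarray(dst, float)                  # (3, 2)
    M = np.hstack([src, np.ones((3, 1))])         # (3, 3); singular if collinear
    return np.linalg.solve(M, dst).T              # 2x3 matrix A: dst = A @ [x, y, 1]

# Three matches related by a 90-degree rotation plus a shift of (10, 0)
src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 0), (10, 1), (9, 0)]
A = affine_from_three_matches(src, dst)
```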