Author Topic: StarAlignment (automatic image registration) Beta 1 Now Available

Juan Conejero (PTeam Member)
http://pixinsight.com/

Hello everybody,

I am glad to announce that a first version of the new StarAlignment tool is now available to our users.

This first version is still at a beta development stage, but it is fully functional and can be safely used for production work. We expect to improve this tool greatly with the help of our users. We look forward to bug reports, failure reports, suggestions and examples of use of any kind.

StarAlignment is an automatic image registration tool that uses stars detected on images as alignment references. This makes it well suited to registering deep-sky images in a completely automatic way. The user can interact with the tool to evaluate the suitability of parameters, but no user intervention is required during the image registration process.

StarAlignment is able to register two images in the presence of arbitrary translation, rotation and scaling, and moderate local distortion. It can also deal with slight global (large-scale) distortions, as well as with slightly different scaling ratios on each axis. StarAlignment requires at least six matched star pairs between both images, and can work with up to 65536 matched star pairs (user-selectable, limited to 4000 star pairs by default).

StarAlignment uses variations of the same fundamental algorithms (see References [1] and [2] at the end of this document) implemented in DeepSkyStacker, a wonderful application for image calibration written by Luc Coiffier. I want to express my gratitude here to Luc Coiffier for pointing me in the right direction, and for his invaluable help and support.


Download Addresses

For Linux/X11 64-bit:
http://pixinsight.com/export/ImageRegistration-pxm.x11.x86_64.20081001.tar.gz

For MS Windows XP/Vista/2003 32-bit:
http://pixinsight.com/export/ImageRegistration-pxm.win.x86.20081001.zip

For MS Windows XP/Vista/2003 64-bit:
http://pixinsight.com/export/ImageRegistration-pxm.win.x86_64.20081001.zip

* Note: the X11/32-bit module will also be available as soon as possible.


How to Install the Module

1. Exit all instances of the PixInsight Core application, if any are running.

2. Extract the module file from the appropriate compressed file (see previous section). On Windows, the module file is:

ImageRegistration-pxm.dll

and on Linux:

ImageRegistration-pxm.so

3. Copy the new module file to your bin installation folder. On 32-bit Windows, PixInsight installs by default on:

C:\PCL

and on 64-bit versions of Windows, on:

C:\PCL64

If you didn't change the default installation directory, you just have to copy the new ImageRegistration-pxm.dll file to either C:\PCL\bin or C:\PCL64\bin.

On Linux, the installation directory is completely up to the user. Assuming a standard installation on a local directory, you must copy the ImageRegistration-pxm.so file to:

/home/<your_personal_directory>/PCL/bin

4. Run PixInsight. Open the Process Explorer window. Under the ImageRegistration category you should see the StarAlignment process available with a default icon. Of course, it is also available under the Process > ImageRegistration main menu item.


Using StarAlignment

StarAlignment is very easy to use. The StarAlignment interface includes many tool tip windows that appear when you move the mouse cursor over its controls. You can read these tips to get a quick help on the different parameters.

The first step is obviously to open the two images that you want to register. One of them is the reference image, which you must select as such on the StarAlignment interface. This is the "base" image, with respect to which the target image is to be translated, rotated, rescaled, and warped, as necessary to match both images by superposition.

You apply StarAlignment just as any process in PixInsight: drag an instance from the interface to the target image that you want to register over the reference image, and wait (you drag an instance by clicking and dragging the small triangular button at the lower left corner of the StarAlignment interface). Alternatively, you can select the target image and press F5 with the StarAlignment interface active. This works just as usual in PixInsight.

The default parameters seem to work pretty well with most linear raw images. The most important and critical set of parameters is in the Star Detection section:

- Detection scales. This is the number of wavelet layers used to detect stars on both images. More scales allow larger stars to be detected. You usually don't want to detect very large stars, since they may work poorly as registration references. The default value of five scales seems to work well in most cases.

- Noise scales. This is the number of wavelet layers used to perform noise reduction. This is to prevent including very small image structures in the detection phase, which may lead to detection of false stars. Increase this parameter, if necessary, for noisy images.

- Minimum star peak value. Stars whose brightest pixel is under this value will not be detected. You can use this parameter to control detection of faint stars, if necessary.

- Maximum star distortion. Star distortion is measured with respect to a square, whose distortion is unity. Lower values mean more distortion. The distortion of a perfectly circular star is pi/4, or roughly 0.785. Stars whose measured distortion value is lower than this parameter (that is, stars more distorted than allowed) will not be detected.

- Inverted image. Select this option to register negative images (dark stars over a bright background).
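The distortion figures above suggest a simple area-based measure. The post does not spell out the exact formula, but a reading consistent with the square = 1 and circle = pi/4 values is the ratio of a structure's filled area to the area of its bounding square; here is a minimal sketch under that assumption:

```python
def distortion(pixels):
    """Ratio of a structure's filled area to the area of its bounding
    square (built on the larger bounding-box side). A filled square
    measures 1.0; a filled disk measures about pi/4 ~ 0.785."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    side = max(max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
    return len(pixels) / (side * side)

# Rasterize a disk of radius 40 and an 81x81 filled square.
r = 40
disk = [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)
        if x * x + y * y <= r * r]
square = [(x, y) for x in range(81) for y in range(81)]

print(distortion(square))  # 1.0
print(distortion(disk))    # a bit under pi/4 ~ 0.785, due to rasterization
```

With such a measure, an elongated or irregular structure fills less of its bounding square and thus scores lower, which is why "lower values mean more distortion".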

Also interesting - and we hope, useful - is the Working mode parameter. There are two image registration modes: match mode (the default) and mosaic mode. I think these modes are mostly self-explanatory.

In match mode, the output (registered) image will always have the same geometry as the reference image. The target image will be transformed (translated, rotated, etc.) to match the reference image, but then it will be cropped to the reference dimensions. This is the "normal" image registration mode. For example, you can readily integrate (average) images registered in this way.

In mosaic mode, both images -reference and target-, are placed on a bigger canvas. The reference image is never interpolated: its pixels are just copied "in place" with respect to the target image, which is then transformed (translated, rotated, ...) and interpolated to match it. The result is a two-frame mosaic over a black canvas.

In addition to image registration modes, StarAlignment also offers four additional "control modes":

- Structure Map. In this mode, you'll get a binarized image (only pure black and pure white) whose white pixels correspond to image pixels that have been identified as significant structures by the star detection algorithm.

- Structure Detection. Similar to Structure Map, but instead of a binarized image you get actual image pixels only on detected regions.

- Detected Stars. In this mode, small cross marks are drawn on a copy of the target image. Each mark is centered on the computed centroid of a detected star (actually, the algorithm computes barycenters, which are more significant to the image registration task).

- Matched Stars. Similar to Detected Stars, but only those stars that have been matched between both images are drawn. Matched star pairs are used to compute the transformation necessary to register the images.

The rest of the parameters are quite advanced, and you usually shouldn't need to change them. In fact, to change them you should have at least a basic knowledge of some of the algorithms used by StarAlignment - see the technical description at the bottom of this document if you are interested. The only exception is the Use brightness relations parameter. When this option is enabled, StarAlignment uses not only geometrical relations between stars, but brightness relations as well. Since this may cause problems with images taken at different wavelengths (e.g., individual RGB channels), we have left it disabled by default. However, you may want to activate this option to improve the performance of the star matching algorithm for regular image registration tasks (i.e., all images taken at the same central wavelengths).


Upcoming Projects

StarAlignment is the first of a series of image registration tools that I am currently working on. The lack of automatic image registration tools has been an important gap in the set of standard processing tools available on the PixInsight platform, and this new series is aimed at closing it definitively.

These are the planned registration tools:

- BatchStarAlignment. A version of StarAlignment designed to register a list of image files, instead of views.

- FFTAlignment. A Fourier-based, featureless automatic image registration tool that will work extremely well for lunar images, as well as for non-astronomical images, including mosaic construction. This tool will be able to register in the presence of arbitrary translation and rotation, but will only tolerate minimal scale differences. I am trying to overcome this limitation, though.

- (Process name still not decided). A general-purpose, feature-based image registration tool based on corner detection. It will work basically as StarAlignment, but using corners instead of stars as registration references, and local phase correlation instead of triangle similarity.

Of course, DynamicAlignment will remain as a high-precision, manual image registration system. It will also benefit from the research work being done to develop the rest of registration tools.


Technical Description and Reference Information

StarAlignment implements a number of sophisticated algorithms to carry out the critical task of image registration in a completely automatic way. This is by no means an easy task. In fact, in my opinion this is one of the most complex and difficult problems in image processing, and making a significant contribution is really challenging. This is precisely what I have tried to do with StarAlignment; you will be the judges of whether I've achieved my goal.

The StarAlignment process can be divided into several logical blocks, which I'll describe briefly in the rest of this document.

Star Detection

This is a critical part of the image registration task. If stars are not detected efficiently, the whole process will fail, or it will perform poorly. We need to detect as many stars as possible, but at the same time we want to be as immune as possible to false "star-like" structures (e.g., hot pixels) and small nonstellar features. Noise can also be a serious problem here, and for this reason I have implemented a detection algorithm based on multiscale (wavelet-based) techniques. I am not at all happy with the star detector that I have implemented so far, so expect a lot of changes and improvements in this part of the process. Basically, the current detector is fast and works quite well for most linear images that are not too noisy, but we definitely need a more robust and reliable algorithm.
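As a rough illustration of why suppressing the smallest scales matters, here is a toy sketch in plain Python. A 3x3 median filter stands in for discarding the first wavelet (noise) layers; the actual detector is wavelet-based and far more elaborate:

```python
def median3x3(img):
    """3x3 median filter (edges left unchanged): a toy stand-in for
    removing the smallest-scale structures before star detection."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 values
    return out

# 11x11 synthetic frame: a 3x3 "star" plateau and a single hot pixel.
img = [[0.0] * 11 for _ in range(11)]
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        img[3 + dy][3 + dx] = 0.8       # star: large enough to survive
img[8][8] = 1.0                         # hot pixel: a false "star"

filtered = median3x3(img)
print(filtered[3][3])  # 0.8 -> the star core is still detectable
print(filtered[8][8])  # 0.0 -> the hot pixel is rejected
```

The principle is the same as with the Noise scales parameter: structures smaller than the first removed scale(s) cannot trigger a false detection.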

Star Matching

Once we have a list of stars for each image, we must find matching pairs of stars. This is a classical -and extremely difficult- problem: the point correspondence problem. Fortunately, stars are nearly ideal features as image registration references, since a star is very easy to identify as a significant image feature. In contrast, a generic image registration system, designed to work with diurnal (non-astronomical) images, needs to find and use significant features under a much higher level of uncertainty.

For example, an all-purpose image registration system typically uses a corner detector to find significant features, but corners are very difficult to match between images, due to the high uncertainty inherent in their detection. To address this problem, very sophisticated feature recognition algorithms have been developed in recent years. Probably the best example of state-of-the-art panorama generation software implementing one of these new techniques is AutoStitch, by M. Brown and D. Lowe of the University of British Columbia, whose efficiency is mainly due to D. Lowe's SIFT feature recognition algorithm. You can read the original paper here if you feel so inclined:

http://www.cs.ubc.ca/~mbrown/papers/ijcv2007.pdf

But for deep-sky astrophotography we can live quite well limiting ourselves to stars as registration features, right? Yes, of course, and this is one of those *extremely rare* cases where astronomical images are a lot easier to deal with than diurnal images. So we definitely have to take advantage of it :) I have implemented a variation of the basic algorithm described by F.G. Valdés et al. [1], and further refined by M. Marszalek and P. Rokita [2]. These elegant algorithms are based on triangle similarity. The basic idea is that the relationships between the sides of triangles built from the same stars on both images are invariant to the affine transformations of particular interest here: translation, rotation, and scale variation (as long as scaling ratios are identical on both axes). So by identifying similar triangles in both images we can also match pairs of stars between them.
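The invariance argument is easy to verify numerically. The sketch below (an illustration, not the tool's actual code) builds a scale-invariant descriptor from sorted side-length ratios and checks that it survives a similarity transformation:

```python
import math

def side_ratios(p, q, r):
    """Scale-invariant descriptor of a triangle: its three side lengths
    sorted and divided by the longest one."""
    d = sorted(math.dist(a, b) for a, b in ((p, q), (q, r), (r, p)))
    return (d[0] / d[2], d[1] / d[2])

def transform(pt, angle, scale, dx, dy):
    # Similarity transform: rotation + uniform scaling + translation.
    c, s = math.cos(angle), math.sin(angle)
    x, y = pt
    return (scale * (c * x - s * y) + dx, scale * (s * x + c * y) + dy)

stars = [(10.0, 20.0), (45.0, 80.0), (90.0, 15.0)]
moved = [transform(p, angle=0.7, scale=1.6, dx=120.0, dy=-35.0) for p in stars]

r1 = side_ratios(*stars)
r2 = side_ratios(*moved)
print(all(abs(a - b) < 1e-9 for a, b in zip(r1, r2)))  # True
```

Triangles whose descriptors agree within a tolerance are candidates for the same star triple on both images; note that the descriptor is not invariant under different scaling ratios on each axis, which is why such anisotropy can only be tolerated in small amounts.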

My implementation, however, differs from the original star matching algorithms that you can find in References [1] and [2]. The original algorithms were not designed for image registration, but to match stars acquired on CCD images against star catalogs, as part of automated sky survey systems. For this reason they build and compare all existing triangles between a very reduced set of stars used as registration references. There are N*(N-1)*(N-2)/6 triangles definable with N stars, so the whole star matching task is roughly an O(N^3) problem when all triangles are used. In practice this limits the number of stars used for registration to no more than 40 or 50, which is clearly insufficient for many practical image registration purposes. For example, when building mosaics, the reference and registered images usually share only a small fraction of the detected stars, so limiting the process to the 50 brightest stars is simply inapplicable.

In my implementation, I build all existing triangles only when the number of detected stars is 20 or less. With more than 20 stars, a limited number of triangles are built for each detected star. For any given star, n triangles are built with its 2*n nearest neighbor stars, where n is a user-definable parameter that defaults to eight triangles per star. This generates a very large but still manageable set of triangles. Besides reducing the complexity of the whole process, which allows us to use thousands of stars instead of several tens, building a limited number of triangles based on star proximity has an additional advantage: since we are favoring comparisons between small-scale triangles, our version of the algorithm can deal much better with moderate amounts of global distortion. By using mostly large-scale triangles between a reduced number of bright stars, the original matching algorithm is much less tolerant of distortion.
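The proximity-based construction can be sketched as follows. How the 2*n neighbors are grouped into n triangles is not specified in this post; pairing consecutive neighbors is an assumption made here purely for illustration:

```python
import math
import random

def build_triangles(stars, n=8):
    """For each star, build n triangles from its 2*n nearest neighbors,
    pairing neighbors (0,1), (2,3), ... (the pairing scheme is an
    assumption for illustration). Triangles are deduplicated."""
    triangles = set()
    for i, s in enumerate(stars):
        neighbors = sorted((j for j in range(len(stars)) if j != i),
                           key=lambda j: math.dist(s, stars[j]))[:2 * n]
        for k in range(0, len(neighbors) - 1, 2):
            tri = tuple(sorted((i, neighbors[k], neighbors[k + 1])))
            triangles.add(tri)
    return triangles

random.seed(1)
stars = [(random.uniform(0, 1000), random.uniform(0, 1000))
         for _ in range(500)]
tris = build_triangles(stars)
all_possible = 500 * 499 * 498 // 6   # 20,708,500 triangles with all triples
print(len(tris), "<<", all_possible)  # at most 500*8 = 4000 triangles
```

The triangle count grows linearly with the number of stars instead of cubically, which is what makes thousands of registration stars practical.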

Along with a different triangle construction strategy, my implementation uses a completely different mechanism to ensure the robustness of the star matching process. Robustness here means an extremely high resistance to false matches. After the initial star matching routine based on triangle similarity, we have a list of putative star pair matches. However, in general not all of these matches are true correspondences between the same stars in both images - hence the word putative. It is crucial to the accuracy of the whole image registration system to detect and remove all false matches, or outliers, and keep only true star pair matches, which we can call inliers in this context. Different strategies have been proposed to carry out this task. The authors of the original papers used rejection methods based on a median transformation derivation [2] and iterative sigma clipping [1]. Personally, I don't like those methods. With the aim of building a rock-solid and accurate image registration system, I have implemented a custom adaptation of the RANSAC (RANdom SAmple Consensus) algorithm. There is a good description of RANSAC on Wikipedia: http://en.wikipedia.org/wiki/RANSAC. The original description of the algorithm by M. Fischler and R. Bolles (1981) is in Reference [3].
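The inlier/outlier logic can be illustrated with a deliberately simplified model. The sketch below fits a pure translation (so one pair is a minimal sample) rather than the projective model StarAlignment actually estimates; the function names and tolerances are illustrative, not the tool's:

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0):
    """Minimal RANSAC sketch: fit a translation (dx, dy) to putative
    star pairs, keeping the model with the largest consensus set."""
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.choice(pairs)   # minimal sample
        dx, dy = x2 - x1, y2 - y1
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) < tol
                   and abs(p[1][1] - p[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine the model on the consensus set only.
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers

random.seed(2)
true_dx, true_dy = 25.0, -12.0
pairs = [((x, y), (x + true_dx, y + true_dy))             # 60 true matches
         for x, y in ((random.uniform(0, 500), random.uniform(0, 500))
                      for _ in range(60))]
pairs += [((random.uniform(0, 500), random.uniform(0, 500)),
           (random.uniform(0, 500), random.uniform(0, 500)))
          for _ in range(40)]                             # 40 false matches

(dx, dy), inliers = ransac_translation(pairs)
print(round(dx, 3), round(dy, 3), len(inliers))  # ~25.0 -12.0, >= 60 inliers
```

Even with 40% of the putative matches being random garbage, the consensus model recovers the true transformation, which is the behavior the post describes for the real projective-model implementation.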

Estimation of Transformation Parameters

My implementation uses a projective coordinate transformation to perform the image registration task. A projective registration model fully captures all affine transformations (translation, rotation, scaling and shearing), plus chirping (change in spatial frequency with position) and keystoning (converging lines), while preserving straight lines. An excellent discussion of this topic can be found in Reference [4]. A projective transformation uses eight parameters instead of the six used by an affine model, so we need at least four star pair matches to compute it. When only three pair matches are available (the minimum required by the StarAlignment process), an affine transformation is automatically used (actually, a projective model is simulated by duplicating one of the three star pairs).
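For concreteness, here is how an eight-parameter projective transform acts on a point (the parameter names a..h are generic, not PixInsight's):

```python
def project(params, x, y):
    """Apply a projective (homography) transform with 8 parameters:
    x' = (a*x + b*y + c) / (g*x + h*y + 1), and similarly for y'."""
    a, b, c, d, e, f, g, h = params
    w = g * x + h * y + 1.0
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)

identity = (1, 0, 0, 0, 1, 0, 0, 0)
print(project(identity, 50.0, 30.0))    # (50.0, 30.0)

# A nonzero g introduces keystoning: displacement grows with position.
keystone = (1, 0, 0, 0, 1, 0, 0.001, 0)
print(project(keystone, 100.0, 0.0))    # x is compressed as x grows
```

With g = h = 0 the denominator is 1 and the model reduces to the six-parameter affine case, which is why the affine transformations are a strict subset of the projective ones.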

Basically, RANSAC is used to estimate the eight transformation parameters and to perform robust rejection of outliers (false star pair matches) at the same time. My implementation of RANSAC seems to be very accurate and fast, especially on multicore/multiprocessor systems (it can use all available processors). It can easily deal with several thousand star pairs and can tolerate a significant fraction of outliers (more than 40% in the tests performed so far).

The output of the RANSAC routine is the 3x3 transformation matrix of a projective model fitted to the largest set of confirmed star pair matches found, plus estimates of the root mean square and peak errors in the computed transformed coordinates. To obtain the transformation parameters, one could fit the projective model with a nonlinear least squares method; for example, the Levenberg-Marquardt algorithm suggests itself. However, I have preferred a different way to solve this problem, based on the direct solution of the system of linear equations defined by the matched star pairs (two equations per pair) and the SVD (Singular Value Decomposition) algorithm. This follows the normalized eight-point method [5] and the direct linear transformation (DLT) algorithm [6].
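As a minimal sketch of the DLT idea (without the normalization and SVD machinery of [5] and [6]): with exactly four point pairs, the eight parameters follow from a plain 8x8 linear system, solved here with Gaussian elimination. The test homography and all names are illustrative:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(src, dst):
    """DLT from exactly 4 point pairs: each pair (x,y)->(u,v) gives two
    linear equations in the 8 parameters (a..h)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def apply_h(p, x, y):
    a, b_, c, d, e, f, g, h = p
    w = g * x + h * y + 1.0
    return ((a * x + b_ * y + c) / w, (d * x + e * y + f) / w)

true_p = (1.1, 0.1, 5.0, -0.05, 0.95, -3.0, 0.0005, 0.0002)
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
dst = [apply_h(true_p, x, y) for x, y in src]
p = fit_homography(src, dst)

# The recovered model should map any other point like the true one does.
u, v = apply_h(p, 50.0, 50.0)
u0, v0 = apply_h(true_p, 50.0, 50.0)
print(abs(u - u0) < 1e-6 and abs(v - v0) < 1e-6)  # True
```

With many (noisy) pairs, the system is overdetermined, which is where the least-squares SVD solution and the coordinate normalization of [5] come into play.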

Interpolation

The final step of the image registration process is to interpolate the target image to register it over the reference image. StarAlignment uses the projective model computed in the previous step to calculate transformed coordinates, and an efficient implementation of bicubic spline interpolation [7] to generate pixels of the registered image. This part of the process has been parallelized and can use all available processors.

An important feature of our implementation of the bicubic spline interpolation algorithm [7] is a special clamping mechanism that avoids oscillation problems due to wild variations between adjacent pixels in raw CCD images. Basically, when two adjacent pixels differ by more than a prescribed threshold, the clamping mechanism forces a lower-order interpolation (quadratic or linear) instead of trying to fit a cubic function. For example, suppose that two contiguous pixels have values 0.10 and 0.95 (in the [0,1] range), which happens quite frequently in linear raw images (e.g., in the vicinity of a star). Trying to perform a cubic interpolation across such a transition can easily produce a negative value, which will be represented as pure black after truncation to the [0,1] range. This never happens with our implementation (it does happen with other packages).
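The clamping idea can be demonstrated in one dimension. This toy uses a Catmull-Rom cubic and a linear fallback; the actual module uses bicubic spline interpolation [7] with its own clamping criterion, so the kernel and threshold here are illustrative:

```python
def cubic(p0, p1, p2, p3, t):
    """Catmull-Rom cubic interpolation between p1 and p2 (0 <= t <= 1)."""
    return 0.5 * (2 * p1 + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (3 * (p1 - p2) + p3 - p0) * t * t * t)

def clamped_cubic(p0, p1, p2, p3, t, threshold=0.3):
    """Fall back to linear interpolation when adjacent samples differ
    too much, so the result cannot oscillate outside the data range."""
    if max(abs(p1 - p0), abs(p2 - p1), abs(p3 - p2)) > threshold:
        return p1 + t * (p2 - p1)
    return cubic(p0, p1, p2, p3, t)

# Flat background right next to a bright star edge.
samples = (0.05, 0.05, 0.05, 0.95)
naive = cubic(*samples, 2.0 / 3.0)
safe = clamped_cubic(*samples, 2.0 / 3.0)
print(naive < 0.0)   # True: the cubic undershoots below zero
print(safe)          # 0.05: the linear fallback stays in range
```

After truncation to [0,1], the naive result would render as a pure black pixel next to the star, which is exactly the artifact the clamping mechanism prevents.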

References

[1] F.G. Valdés et al. (1995) FOCAS Automatic Catalog Matching Algorithms. Publications of the Astronomical Society of the Pacific, 107:1119-1128.
http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1995PASP..107.1119V&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf

[2] M. Marszalek, P. Rokita (2004) Pattern Matching with Differential Voting and Median Transformation Derivation. Computer Vision and Graphics (Computational Imaging and Vision Series), 32:1002-1007.
http://lear.inrialpes.fr/pubs/2004/MR04/MarszalekRokita-ICCVG04-DifferentialVoting.pdf

[3] Martin A. Fischler and Robert C. Bolles (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM 24:381-395.

[4] Steve Mann, Rosalind W. Picard (1997) Video Orbits of the Projective Group: A Simple Approach to Featureless Estimation of Parameters. IEEE Transactions on Image Processing, 6:1281-1295.

[5] Richard I. Hartley (1995) In defence of the 8-point algorithm. Proceedings of the Fifth International Conference on Computer Vision, 1064-1070.

[6] A. Agarwal, C. V. Jawahar, P. J. Narayanan (2005) A survey of planar homography estimation techniques. Tech. Rep. IIIT/TR/2005/12, International Institute of Information Technology, Hyderabad, 2005.

[7] R. G. Keys (1981) Cubic Convolution Interpolation for Digital Image Processing. IEEE Transactions on Acoustics, Speech & Signal Processing, 29:1153-1160.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/