StarAlignment - Some Usage Tips

Juan Conejero

PixInsight Staff
Hi all,

Some users have asked us about the new features we have introduced in the latest version of the StarAlignment tool, and how these features should be used in practice. Here is a brief list of recommendations of interest to all PixInsight users.

* Unless it is really necessary, don't use the distortion correction feature. In general, distortion correction is only necessary in two cases:

- Registration of images acquired with different telescopes or lenses.
- Wide field mosaics.

If the images being registered are not subject to differential distortions, applying distortion correction won't provide more accuracy. It will only slow down the process.

* For registration of images from different instruments, try using distortion correction with a projective registration model instead of 2-D surface splines. In most cases, accurate image registration can be achieved with a projective model, which is much more robust and less prone to local deviations than surface splines.

* Distortions caused by dithering of wide-field images can now be fixed with default StarAlignment parameters. Distortion correction isn't normally necessary in these cases.

* Polygonal descriptors cannot work under specular transformations (horizontal or vertical mirror). If you have mirrored images (for example, images acquired with a Newtonian telescope and a refracting telescope being registered in the same batch), you have two options:

- Either apply the appropriate mirror operation with the FastRotation tool prior to image registration,

- Or use triangle similarity for image registration. On the StarAlignment tool, open the Star Matching section and select "Triangle similarity" as the value of the "Descriptor type" parameter.

* Whenever you use surface splines, check the correctness of your registered images. Surface splines (also known as thin plate splines) are extremely flexible interpolation devices. This flexibility makes them particularly well suited to accurate image registration under strong local distortions, but it also poses the risk of local deviations. For example, suppose one of your images contains an asteroid that appears as a point-like object and happens to be located very close to a star. The computed position for that star could be slightly displaced due to the proximity of the asteroid. If you register this image using surface splines, the displaced star could induce some distortion in a small area of the registered image. These problems almost never happen in practice, but checking your images is always a good idea. Use the PixelMath process to subtract the registered and reference images and inspect the result (a script-based equivalent is sketched after this list). For example, assuming that you have "reference" and "registered" images, this PixelMath expression:

registered -- reference

computes the absolute value of the difference.

* When available, an accurate distortion model is always preferable to the distortion correction feature. We'll have to wait until our great script developers write a nice distortion modelling tool ;)

* For registration of very low-SNR images, you may need to use the noise reduction parameter (Star Detection section). The new star detection routines are faster and more accurate, but they are not well suited to work under heavy noise. The new noise reduction parameter solves this problem.

* The StarAlignment tool is still undergoing extensive research and development, so you can expect more features and improvements over the coming months, especially better distortion correction capabilities. Please feel free to ask for help on this forum if you encounter difficulties registering your images; we'll be glad to take a look and learn from them.
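
For those who prefer to run the residual check from the surface splines tip above outside PixInsight, on saved FITS files, here is a minimal sketch using numpy and astropy. The file names are placeholders, and it assumes both frames have the same dimensions:

import numpy as np
from astropy.io import fits

# Load the reference and registered frames (placeholder file names).
reference = fits.getdata("reference.fits").astype(np.float64)
registered = fits.getdata("registered.fits").astype(np.float64)

# Same check as the PixelMath expression: absolute value of the difference.
residual = np.abs(registered - reference)
fits.writeto("residual.fits", residual, overwrite=True)

# A well-registered frame leaves only noise and photometric differences;
# a displaced star shows up as a bright double-lobed artifact.
print("max residual:", residual.max())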
 
Sure, but the *whole* documentation for StarAlignment and ImageIntegration has to be rewritten, so this is not easy/fast...
 
Juan
Thank you for all the wonderful options you added last week.

I would like to know: WHY is there a limitation when images are flipped? Since it is all mathematics, I don't understand where the limitation comes from (and why triangle similarity works).
There is also a problem when the FITS format has the second orientation option checked (bottom-up/up-bottom): in that case, it doesn't work as you said.


Anyway, what is the difference between triangle similarity (as before) and the "Triangles" to "Octagons" options?
I just tried some examples where I got lozenge-shaped stars when registering in Auto mode (I need to switch to Bicubic spline mode to get round stars), and with the newest options (from Triangles to Octagons) the results also seem to be lozenge-shaped stars, but with a higher FWHM.

Here is an old example: Auto vs. "Bicubic spline"

Capture d’écran 2012-12-27 à 19_45_56.png


I need to dig deeper to see what happens with the new options and how they work with my images.

Cheers
 
Hi Philippe,

I would like to know: WHY is there a limitation when images are flipped?

To fully understand why this happens, one has to know the ambiguities that arise when building polygonal descriptors, and how they are solved in practice.

Imagine that we have four stars A, B, C, D and have to form a quad with them. The two most distant stars of the set are A and B. Then the quad's local coordinate system is defined by A and B, such that A is at the origin and B is at coordinates x = y = 1. The quad's hash code will be composed of the four local coordinates of C and D: {xC, yC, xD, yD}.

However, how do we determine the order of the stars? For example, here are two different quads constructed with the same four stars:

quad-1.png
         
quad-2.png
 
Actually, there are two additional possibilities if we swap the C and D stars. With higher-order descriptors the situation becomes even more complicated.

As you see, the hash codes are completely different for the above quads. Both correspond to the same stars, so they should be matched in order to match their stars in the reference and target images. However, they cannot be matched at all, unless we solve all existing ambiguities. To solve them, we impose invariant rules, and force the geometry of every quad to fulfill all of them.

The first invariant solves the ambiguity in the orientation of the coordinate system: we choose A and B such that xC + xD < 1. Note that the quad at the right does not meet this rule, but the quad at the left does.

The second invariant solves the ambiguity in the order of the two inner stars C and D: we sort them such that xC <= xD. Again, only the quad at the left meets this rule. If we swap C and D in the quad at the right, it would meet the second rule but still not the first. So the only valid quad geometry for these four stars is the one shown at the left.
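
To make the construction concrete, here is a minimal sketch in Python of the quad hash code described above, with both invariant rules applied. The function name and the exact form of the similarity transform are illustrative, not PixInsight's actual implementation:

import itertools, math

def quad_hash(stars):
    # stars: four (x, y) positions of detected stars in one image.
    # A and B are the two most distant stars of the set.
    A, B = max(itertools.combinations(stars, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    C, D = [s for s in stars if s is not A and s is not B]

    def local(P):
        # Similarity transform mapping A to (0,0) and B to (1,1).
        vx, vy = B[0] - A[0], B[1] - A[1]
        d = vx*vx + vy*vy
        a, b = (vx + vy)/d, (vx - vy)/d
        px, py = P[0] - A[0], P[1] - A[1]
        return (a*px - b*py, b*px + a*py)

    C, D = local(C), local(D)

    # First invariant: orient the frame so that xC + xD < 1.
    # (Swapping A and B maps every local coordinate (x, y) to (1-x, 1-y).)
    if C[0] + D[0] >= 1:
        C, D = (1 - C[0], 1 - C[1]), (1 - D[0], 1 - D[1])

    # Second invariant: order the inner stars so that xC <= xD.
    if C[0] > D[0]:
        C, D = D, C

    return (C[0], C[1], D[0], D[1])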

Now don't forget that the purpose of building quad structures is to find matching star pairs in two images that are subject to translation, rotation and scaling (among others). When we find two quads with the same hash code (to within numerical tolerances), one on each image, the probability that the four stars can be mutually matched is relatively high. Now repeat this for a large number of quads formed with all stars detected in both images, and you have solved one of the hardest problems of computational geometry: the point correspondence problem (well, actually, you have just started to solve it, since you still have to deal with our best and inseparable friend: noise, aka uncertainty).
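
As a rough sketch of this matching stage, one can collect the hash codes of all quads in both images and pair them within a numerical tolerance, for example with a k-d tree. The tolerance value below is illustrative, and the hash codes are assumed to come from the quad_hash sketch above:

import numpy as np
from scipy.spatial import cKDTree

def match_quads(ref_hashes, tgt_hashes, tol=1e-3):
    # ref_hashes, tgt_hashes: lists of 4-D hash codes (xC, yC, xD, yD).
    tree = cKDTree(np.asarray(ref_hashes))
    dist, idx = tree.query(np.asarray(tgt_hashes), distance_upper_bound=tol)
    # A finite distance means a candidate quad correspondence, i.e. four
    # putative star pairs that still must survive validation against noise.
    return [(int(i), j) for j, (d, i) in enumerate(zip(dist, idx))
            if np.isfinite(d)]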

Now imagine that one of the images is mirrored horizontally. It can easily be shown that, after applying our two invariant rules, we could be matching two quads with the same hash code but formed with different stars in the two images. In other words, mirroring introduces an additional degree of freedom for which we have no valid invariant rule. The result is that the whole quad-based star matching algorithm fails.

Triangle similarity does not have this problem. Consider the following triangle formed with stars A, B, C:

triangle-1.png

The stars are labeled such that the sides are in decreasing order: AB >= BC >= CA. Then the descriptor is formed with the following triangle-space coordinates:

x = BC/AB
y = CA/BC

These descriptors are invariant to translation, rotation and uniform scaling. They are also invariant to mirroring. For example, the following triangle:

triangle-2.png

is a horizontally-mirrored version of the first one. The x,y descriptor coordinates are identical in both cases.
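
Here is a minimal sketch of this descriptor in Python, with a small numerical check of the mirror invariance (the coordinates are made up):

import math

def triangle_descriptor(p1, p2, p3):
    # Label the stars so that the sides satisfy AB >= BC >= CA.
    AB, BC, CA = sorted((math.dist(p1, p2), math.dist(p2, p3),
                         math.dist(p3, p1)), reverse=True)
    # Triangle-space coordinates: invariant to translation, rotation,
    # uniform scaling and, unlike quads, mirroring.
    return (BC / AB, CA / BC)

stars = [(10.0, 12.0), (55.0, 20.0), (30.0, 48.0)]
mirrored = [(-x, y) for x, y in stars]
assert triangle_descriptor(*stars) == triangle_descriptor(*mirrored)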

what is the difference between triangle similarity (as before) and the "Triangles" to "Octagons" options?

Polygonal descriptors are much more robust than triangle similarity. This is because they have less intrinsic uncertainty. A quad ties two triangles together, so its uncertainty is one half that of a single triangle. In general, an n-sided polygon associates n-2 triangles in an invariant relative position, with an uncertainty reduction factor proportional to 1/(n-2).

Less uncertainty leads to more robust image registration, including the ability to register images under more difficult conditions, such as mosaics with small overlaps. Polygons are also more flexible structures and, if properly implemented (not only the polygonal descriptors, but also the data structures necessary to store, organize and search them efficiently), are comparatively more robust to local distortions and global projective transformations.

Despite this, triangle similarity works pretty well for normal image registration of similar images. For this reason the latest version 1.31 of the BatchPreprocessing script uses it by default. So BatchPreprocessing now fully supports mirrored images.

Here is an old example: Auto vs. "Bicubic spline"

I'd need to take a look at the images to understand what happens in this case.
 
georg.viehoever said:
Minimum work would be to add a link to the update information in the forum....
Georg
Good idea. Simple and practical. Perhaps a little messy, but less frustrating than "documentation not available".
Geoff
 