New Plate Solving Distortion Correction Algorithm in PixInsight

By Juan Conejero (PTeam)
Published May 15, 2019


Introduction

The Image Plate Solver script was authored and first released back in 2012 by Andrés del Pozo, a Spanish software developer and PTeam member, who has been improving and maintaining it ever since. Along with this crucial script, Andrés has contributed many astrometry-related tools to PixInsight, such as the excellent AnnotateImage script, as well as most of the source code base behind our current astrometry engine in the PixInsight/PCL platform.

For the latest version 1.8.6 of PixInsight I have been working on an improved local distortion modeling algorithm for our astrometric solutions, available since version 5.3 of the Image Plate Solver script. As in previous versions, we use two-dimensional approximating surface splines, also known as thin plate splines, to model field distortions. A thin plate spline describes the minimal-energy bending of a thin sheet of metal passing through a set of interpolation points in three-dimensional space. This physical analogy gives thin plate splines accuracy and adaptability properties that we have been applying successfully to many data modeling tasks, most especially plate solving.

The main problem with thin plate splines is that their computation requires solving large, dense linear systems. Unfortunately, this task has roughly O(n³) time complexity. For sets of approximately 3000 stars the problem becomes hard to manage, and above 5000 stars it is almost impractical for interactive applications, even with fast desktop hardware. For this reason, in previous versions we had to impose a severe limit on the maximum number of stars used to compute astrometric solutions with distortion correction enabled. This has been degrading the accuracy of our implementation, since many more stars are often required to model distortions correctly throughout the entire field, especially for wide and medium field images. For the new astrometry-based mosaic generation tool on which we are working as of writing this document, this limitation is a serious problem.
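To get a feel for this cubic scaling, consider a back-of-the-envelope flop count for factorizing the dense n×n linear system behind a thin plate spline fit. This is a sketch under the standard ≈2n³/3 cost of dense LU decomposition; the function name and constants are illustrative, not taken from the PCL implementation:

```javascript
// Approximate cost of solving a dense n-by-n linear system, as required
// to fit a thin plate spline through n control points. Dense LU
// factorization costs about 2*n^3/3 floating-point operations.
function splineSolveFlops( n )
{
   return 2*Math.pow( n, 3 )/3;
}

const flops3000 = splineSolveFlops( 3000 );   // ~1.8e10 operations
const flops25000 = splineSolveFlops( 25000 ); // ~1.04e13 operations

// Going from 3000 to 25000 stars multiplies the cost by (25000/3000)^3,
// i.e. by a factor of roughly 579 — hence the need for simplification.
const costRatio = flops25000/flops3000;
```

This is why raising the hard star limit alone was never an option: the cost grows far faster than the star count.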

In the new version of our plate solver script we overcome this problem using algorithmic devices known as shape-preserving surface simplifiers. Basically, these algorithms apply principal component analysis to a surface defined by a set of arbitrary three-dimensional points (in this case, star positions measured on the image and their corresponding celestial coordinates), keeping only the points strictly necessary to represent the surface while preserving its shape up to a prescribed tolerance. In this way we can work with a small subset of stars where distortion is low (typically, the central areas of the image), while many more stars are used where distortion is high (typically, the corners of the image). This allows us to work with large input sets of stars to generate very accurate and efficient plate solving models. The new limit has been set to 25,000 stars, which is sufficient for all of our present practical purposes, although it could be increased with the current implementation if necessary. Previously the limit was 5000 stars, as a result of the above-mentioned time complexity issues.

Following this introduction, I am going to show some comparisons to evaluate the accuracy improvements achieved in the current version of our plate solver script. To apply these improvements in practice, it is convenient to understand how surface simplification works and the role it plays in the plate solving task. For that purpose I'll provide some illustrative examples with synthetic data, including a script that can be easily modified for experimentation. Finally, for readers interested in development topics, I'll also include a formal description of the implemented algorithm, as well as references to the available open-source implementation.

Plate Solving Examples

1. M81/M82 Wide-Field CCD Image


Click on the image for a full-size annotated version.

This is an integrated CCD image acquired with a Finger Lakes ProLine PL16801 CCD camera on an optical system with a focal length of 500 mm working at F/2.8, used here by courtesy of the Astronomical Observatory of the University of Valencia (OAUV). As is typical for wide-field images, this one shows relatively strong field distortions at its corners. We are going to use this image as a first test to show the accuracy improvements that can be expected from our new plate solving routines.

In the following figures we show comparisons of the results achieved with three different astrometric solutions:

  • The latest (as of writing this document) version 5.3 of our ImageSolver script, with distortion correction enabled, using thin plate splines with surface simplifiers enabled.
  • The previous version 5.1.1 of our ImageSolver script, with distortion correction enabled. This version uses thin plate splines without surface simplification.
  • The new version 5.3 of our ImageSolver script, with distortion correction disabled. This generates a standard WCS linear astrometric solution.

In all cases the image has been solved with reference data from the Gaia DR2 catalog, with proper motions applied at the mean epoch of observation (2012-03-15), and a limit magnitude of 16.

The new distortion correction algorithm provides important improvements beyond better distortion correction at the corners of the image: thanks to the use of surface simplifiers, we have achieved unprecedented accuracy in our astrometric solutions. The following figures show a cross mark centered at catalog star coordinates, rendered by the AnnotateImage script with subpixel accuracy, and a PSF annotation generated by the DynamicPSF tool for each measured star. For each comparison we provide a table of ICRS equatorial coordinates computed for the star's centroid by each astrometric solution.

Astrometric Solution   Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
Gaia DR2               10 10 33.876              +68 48 04.32
ImageSolver 5.3        10 10 33.885              +68 48 04.38          +0.05         +0.06
ImageSolver 5.1.1      10 10 33.741              +68 48 04.80          -0.73         +0.48
WCS Linear             10 10 32.986              +68 48 07.75          -4.83         +3.43

Centroid image coordinates:
x=3125.05 y=1290.47
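The Δα×cosδ and Δδ columns in the table above can be reproduced from the sexagesimal coordinates. Here is a minimal, self-contained JavaScript sketch (plain Node.js; the function names are ours for illustration and are not part of the ImageSolver script):

```javascript
// Convert sexagesimal coordinates to degrees and compute the angular
// offsets (in arcseconds) between a measured position and the catalog
// position. The right ascension offset is projected onto the sky
// through the cos(delta) factor, as in the tables above.
function raDegrees( h, m, s )
{
   return 15*( h + m/60 + s/3600 );
}

function decDegrees( d, m, s )
{
   let sign = ( d < 0 || Object.is( d, -0 ) ) ? -1 : 1;
   return sign*( Math.abs( d ) + m/60 + s/3600 );
}

function offsetsArcsec( raCat, decCat, raMeas, decMeas )
{
   const cosDelta = Math.cos( decCat*Math.PI/180 );
   return [ ( raMeas - raCat )*3600*cosDelta,   // Δα×cosδ
            ( decMeas - decCat )*3600 ];        // Δδ
}

// First table row: Gaia DR2 vs ImageSolver 5.3 for the M81/M82 image.
const [ dAlpha, dDelta ] = offsetsArcsec(
   raDegrees( 10, 10, 33.876 ), decDegrees( 68, 48, 4.32 ),
   raDegrees( 10, 10, 33.885 ), decDegrees( 68, 48, 4.38 ) );

console.log( dAlpha.toFixed( 2 ), dDelta.toFixed( 2 ) ); // ≈ 0.05 0.06
```

The cos(δ) factor matters at these declinations: near δ = +69°, one second of right ascension spans only about 5.4 arcseconds on the sky instead of 15.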



Astrometric Solution   Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
Gaia DR2               10 12 39.241              +70 06 30.83
ImageSolver 5.3        10 12 39.229              +70 06 30.79          -0.06         -0.04
ImageSolver 5.1.1      10 12 39.053              +70 06 30.63          -0.96         -0.20
WCS Linear             10 12 38.077              +70 06 28.63          -5.94         -2.20

Centroid image coordinates:
x=3214.44 y=2584.84


We have achieved an accuracy better than 0.1 arcseconds consistently throughout the entire image, as shown in the preceding figures. We must point out that this image has a resolution of 3.658 arcseconds per pixel, so we have reduced the uncertainty in coordinate evaluation to less than 0.03 pixels.

These maps show the stars used by the thin plate spline astrometric solution of the M81/M82 wide-field image, after applying our surface simplification algorithm, respectively for the transformations of X-axis and Y-axis image coordinates to native projection coordinates. The input data set consists of 6986 Gaia DR2 stars up to magnitude 17. The simplified data sets, plotted above, have 974 and 969 stars respectively. Surface simplification has allowed us to use a large set of stars to compute an accurate astrometric solution, with an 86% reduction in the set of control points used to evaluate the geometric transformations.


2. V2246 Cygni Medium-Field CCD Image


Click on the image for a full-size annotated version.

This image was acquired with the same CCD camera as the preceding image (Finger Lakes ProLine PL16801), but this time using a larger telescope: a PlaneWave 20-inch CDK optical tube assembly, with a focal length of 3454 mm and an aperture of 508 mm (F/6.8). The image, also used by courtesy of OAUV, covers the region around V2246 Cyg, the optical counterpart of EXO 2030+375, a well-known transient Be X-ray binary discovered during a giant outburst in 1985. We'll use this image as a typical example of plate solving for medium fields.

We have solved this image against Gaia DR2 astrometric data, with proper motions applied at the mean epoch of observation (2012-09-21), and a limit magnitude of 18. The resolution is 0.541 arcseconds per pixel. As before, we'll show comparisons of astrometric solutions generated with the latest version 5.3 of our ImageSolver script, its previous 5.1.1 version, and a standard WCS linear solution. See the preceding example for more details on these comparisons.

Astrometric Solution   Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
Gaia DR2               20 31 15.488              +37 52 13.65
ImageSolver 5.3        20 31 15.489              +37 52 13.65          +0.01         +0.00
ImageSolver 5.1.1      20 31 15.502              +37 52 13.52          +0.17         -0.13
WCS Linear             20 31 15.505              +37 52 13.52          +0.20         -0.13

Centroid image coordinates:
x=731.36 y=858.76



Astrometric Solution   Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
Gaia DR2               20 33 23.447              +37 57 38.47
ImageSolver 5.3        20 33 23.448              +37 57 38.45          +0.01         -0.02
ImageSolver 5.1.1      20 33 23.443              +37 57 38.34          -0.05         -0.13
WCS Linear             20 33 23.529              +37 57 39.44          +0.97         +0.97

Centroid image coordinates:
x=294.53 y=3688.74


As a result of their increased accuracy, our improved thin plate spline astrometric solutions are now able to model, and hence automatically correct for, differential atmospheric refraction. In our opinion, this is one of the factors contributing to the quality of our new solutions, especially the uniformity of the distribution of residual errors throughout the entire image.

These maps show the stars used by the thin plate spline astrometric solution of the V2246 Cygni image, after applying our surface simplification algorithm, respectively for the transformations of X-axis and Y-axis image coordinates to native projection coordinates. The input data set consists of 3564 Gaia DR2 stars up to magnitude 18. The simplified data sets, plotted above, have 1614 and 1517 stars respectively. Surface simplification has allowed us to use a large set of stars to compute an accurate astrometric solution, with a 55% reduction in the set of control points used to evaluate the geometric transformations.


3. Locating Asteroids and Planets

Since core version 1.8.6, PixInsight integrates state-of-the-art solar system ephemerides in all standard distributions, with implementations in our C++ and JavaScript development frameworks. This includes fundamental ephemerides from JPL, IMCCE and other sources, and current IAU precession-nutation theories, among other fundamental resources. As of writing this document, PixInsight 1.8.6 provides JPL's DE/LE 438 planetary and lunar ephemerides, JPL ephemerides for the 343 most massive asteroids, and the IAU 2006/2000A precession-nutation theory. Along with ephemerides, our development frameworks include rigorous implementations (at the sub-milliarcsecond level) of algorithms for reduction of positions of solar system bodies and stars.

The new version 2.0 of the AnnotateImage script makes use of these resources to implement annotation of planets and asteroids. I'll present here two examples of solar system annotations under quite adverse conditions, which will be good tests for the accuracy of our new plate solving and ephemeris calculation routines.

For the first test I have used an image by Manolis Petrakis, who has graciously provided it for this tutorial. It is a single DSLR raw frame acquired with a Canon 6D camera and a 128 mm F/2 lens. This is a wide-field image with a resolution of 10.4 arcseconds per pixel and large field distortions, which I have solved astrometrically with Gaia DR2 data up to magnitude 11.5. The image includes the asteroid 3 Juno at the date of acquisition.


Click on the image for a full-size version.

The screenshot shows an overlay generated by the AnnotateImage script, enabled as a mask on the image. The DynamicPSF tool is being used to read the ICRS equatorial coordinates for the measured centroid. To generate the annotated overlay, the AnnotateImage script takes into account the geodetic coordinates of the observer and the precise observation time (for privacy reasons, I cannot disclose these data for the examples included in this document).

3 Juno - ICRS Topocentric Coordinates

Solution          Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
DE430/DE438       3 38 39.052               -4 24 23.37
ImageSolver 5.3   3 38 38.965               -4 24 23.60           -1.30         +0.23

AnnotateImage computes geometric topocentric or geocentric planetary and asteroid coordinates. This means that the computed coordinates include the light-travel time correction for each body at the instant of observation, but exclude the rest of position reduction corrections: relativistic light deflection, light aberration, frame bias, precession, and nutation. This ensures coherence with the ICRS coordinate system, where astrometric solutions are defined.

Considering the resolution of the image (10.402 arcseconds per pixel), its noise and its distortions, the achieved accuracy is quite remarkable in my opinion. A very interesting test is to compare the results of computing topocentric and geocentric asteroid coordinates, that is, the difference between taking into account or ignoring the position of the observer on the Earth's surface. I have made this comparison in the figure below.

3 Juno - ICRS Coordinates

Solution                  Right Ascension (h m s)   Declination (° ' ")
DE430/DE438 Topocentric   3 38 39.052               -4 24 23.37
DE430/DE438 Geocentric    3 38 38.796               -4 24 18.31


The difference shown on the figure above corresponds basically to the Earth's radius as seen at the distance of 3 Juno: 1.09 au at the instant of observation. Not bad for a single DSLR raw frame and a 128 mm lens!
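This check can be reproduced with a few lines of plain JavaScript. The sketch below (our own illustration, not part of AnnotateImage) computes the maximum possible topocentric parallax of 3 Juno at 1.09 au, i.e. the angle subtended by the Earth's equatorial radius at that distance, and compares it with the offset read from the table above. The observed offset must be smaller than or equal to this maximum, since it depends on the observer's location and the asteroid's zenith distance:

```javascript
// Maximum topocentric parallax (horizontal parallax) of a body at a
// given geocentric distance: the angle subtended by the Earth's
// equatorial radius.
const AU_KM = 1.495978707e8;     // astronomical unit, km
const R_EARTH_KM = 6378.137;     // Earth equatorial radius, km
const RAD_TO_ARCSEC = 180*3600/Math.PI;

function horizontalParallaxArcsec( distanceAU )
{
   return Math.asin( R_EARTH_KM/( distanceAU*AU_KM ) )*RAD_TO_ARCSEC;
}

// 3 Juno at 1.09 au:
const maxParallax = horizontalParallaxArcsec( 1.09 ); // ~8.07 arcsec

// Offset between the geocentric and topocentric positions in the table
// above (geocentric minus topocentric, in arcseconds):
const dAlpha = ( 38.796 - 39.052 )*15*Math.cos( -4.4065*Math.PI/180 ); // ~ -3.83"
const dDelta = ( -4*3600 - 24*60 - 18.31 ) - ( -4*3600 - 24*60 - 23.37 ); // ~ +5.06"
const observed = Math.sqrt( dAlpha*dAlpha + dDelta*dDelta ); // ~6.35 arcsec
```

The observed offset of about 6.3 arcseconds is indeed below the geometric maximum of about 8.1 arcseconds, consistent with the asteroid being observed away from the horizon.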

Our second example is a single CCD frame acquired with a 135 mm lens open at F/4 and an ASI294 camera, courtesy of Rick Stevenson. The image covers a large region of the Milky Way around M8 and M20, which includes several solar system objects at the date of observation: Saturn and the asteroids 113 Amalthea, 213 Lilaea, 268 Adorea, and 77 Frigga. I have solved this frame astrometrically with Tycho-2 data up to magnitude 11.


Click on the image for a full-size annotated version.


Only asteroids 113 Amalthea and 213 Lilaea, with visual magnitudes of 13.31 and 13.38 respectively, can be reasonably measured on this frame, although at the limit of detection. The image of Saturn is a saturated flat disk, which prevents valid PSF measurements. Here are the topocentric coordinates of 113 Amalthea measured with the DynamicPSF tool on the plate-solved image:

113 Amalthea - ICRS Topocentric Coordinates

Solution          Right Ascension (h m s)   Declination (° ' ")   Δα×cosδ (")   Δδ (")
DE430/DE438       17 51 04.840              -22 27 28.45
ImageSolver 5.3   17 51 04.986              -22 27 29.17          +2.02         +0.72

For a resolution of 7.34 arcseconds per pixel and an object at the limit of detection, where PSF measurements are necessarily inaccurate, I would say this is a good result. Also take into account that the image has been solved with Tycho-2 data, since unfortunately no VizieR server was able to provide Gaia DR2 data for this region. With less accurate star positions and proper motions from Tycho-2, the generated astrometric solution is also somewhat less accurate.

As in the preceding example, here is a comparison of the topocentric and geocentric annotations. The geocentric distance was 2.335 au at the time of observation.

113 Amalthea - ICRS Coordinates

Solution                  Right Ascension (h m s)   Declination (° ' ")
DE430/DE438 Topocentric   17 51 04.840              -22 27 28.45
DE430/DE438 Geocentric    17 51 05.025              -22 27 29.24



Practical Considerations

With the new version of our ImageSolver script, you may have to change the way you solve your images astrometrically in order to achieve the best possible results. Here are a few guidelines to help you with this task:

  • Enable distortion correction on a regular basis, even for medium and narrow-field frames, unless you are sure that your images have zero distortion and you don't care about differential atmospheric refraction. With the new surface simplification algorithm, an astrometric solution using thin plate splines will be reduced automatically to just a few stars when there are no significant distortions, so WCS linear solutions represent no practical advantage besides a faster initial computation.
  • When possible, use the Gaia DR2 catalog to solve your images. With the new distortion correction algorithm, this is the only catalog able to provide really accurate positions and proper motions, especially for medium and narrow fields.
  • The automatic limit magnitude feature of the ImageSolver script can be used as a first approximation, but usually you'll need more stars to model field distortions accurately. Try to increase the limit magnitude parameter until the solution uses enough stars to cover the entire field, including the four corners of the image.
  • The solver needs to know the actual date the image was acquired to compute proper motions of stars. The new algorithm demands more accurate proper motions and hence the actual acquisition time is more important. If this is not available from existing metadata (such as FITS header keywords), make sure you enter the correct acquisition date manually.
  • Make use of the control images generated by the ImageSolver script. They are very important to evaluate the quality and suitability of your astrometric solutions. For example, enable the Show stars and/or Generate residuals image options to know which stars are being detected and used. Enable the Show distortion map option to evaluate the quality of the generated distortion model. Enable the Show simplified surfaces option to know which stars are being used to model distortions and how they are being distributed.
  • To locate planets and asteroids with the AnnotateImage script, you need the precise acquisition time in the UTC timescale (accurate to within one minute, or to within one second if you are locating nearby asteroids), as well as the precise geodetic coordinates of the observation location, except for the outer planets when the topocentric parallax is negligible, which only happens at relatively short focal lengths. If these items are available from existing metadata (e.g., FITS header keywords), the script will gather them automatically; otherwise you'll have to provide them manually.
  • For solar system annotations on images acquired with digital cameras, be aware that camera timestamps stored in raw metadata usually represent local time by default, and there is no way to know the actual offsets from UTC. For solar system ephemeris calculations we always need acquisition times in the UTC timescale, so you may need to manually correct time values read from image metadata. If possible, the best solution to these issues is configuring your camera to write UTC timestamps in raw files, instead of local time.
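On the importance of the acquisition date: the effect of proper motions on catalog positions can be sketched with a few lines of plain JavaScript. This is an illustrative reduction to the epoch of observation, not the actual ImageSolver code; Gaia DR2 positions are referred to epoch J2015.5, and tabulated Gaia proper motions in right ascension already include the cos(δ) factor:

```javascript
// Reduce a catalog position to the epoch of observation by applying
// proper motions. pmRaMasYr is assumed to include the cos(delta)
// factor (mas/yr), as tabulated by Gaia.
const GAIA_DR2_EPOCH = 2015.5;

function applyProperMotion( raDeg, decDeg, pmRaMasYr, pmDecMasYr, epochYr )
{
   const dt = epochYr - GAIA_DR2_EPOCH; // years
   const cosDelta = Math.cos( decDeg*Math.PI/180 );
   return [ raDeg + pmRaMasYr*dt/3.6e6/cosDelta,   // mas -> degrees
            decDeg + pmDecMasYr*dt/3.6e6 ];
}

// A star at (180, +60) degrees with pmRA* = 100 mas/yr, pmDec = -50 mas/yr,
// observed about 3.3 years before the catalog epoch (2012.2):
const [ ra, dec ] = applyProperMotion( 180.0, 60.0, 100, -50, 2012.2 );
// Accumulated motion: -330 mas in RA*cos(delta), +165 mas in declination.
```

A high-proper-motion star can thus move by several tenths of an arcsecond over a few years, which is significant at the sub-0.1" accuracy levels shown in the examples above; hence the need for the correct acquisition date.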

Known Issues and Planned Solutions

One of the main practical problems we currently face to compute accurate astrometric solutions is our dependency on online catalogs. The ImageSolver script downloads astrometric data (mainly star coordinates, proper motions and magnitudes) from astronomical catalogs available online through VizieR services. A clear benefit of this system is that we don't have to store astronomical catalogs as huge local files. However, this dependency can be problematic, not just because it requires an active Internet connection to work (with the associated connectivity and availability issues), but chiefly because data availability from VizieR servers has important restrictions. These restrictions are particularly problematic for the Gaia DR2 catalog, and very especially for wide and medium fields, where current VizieR services often cannot provide the large amounts of data that we need, forcing us to use reduced limit magnitudes that may degrade the quality of our results.

Fortunately, the solution to this problem exists and is obvious: having the required astronomical catalogs available as local files. We are working to generate an optimized version of the Gaia DR2 catalog, which will be released publicly when completed. This is a huge and complex task, but definitely a necessary one.

For extremely wide fields, for example images acquired with focal lengths in the 20 to 50 mm range, the ImageSolver script still has problems that we must address as soon as possible. Basically, the problem is the long computation times required to identify stars under very large field distortions, typically at the corners of the image. The culprit here is the StarAlignment process, which needs a new local distortion modeling algorithm to be more efficient and capable. We have already designed a new algorithm, ready for implementation, time and priorities permitting, which will improve these extreme cases significantly.

For annotation and localization of solar system bodies, we are currently limited to the planets and the 343 most massive asteroids for which we already have JPL ephemeris data. It is evident that we should be able to locate and annotate more objects, including comets and many more asteroids. To solve this problem we need an appropriate numerical integration routine. This routine will integrate the perturbed orbital motion of a solar system body to generate an ephemeris file, which will work just as DE/LE ephemerides work now. Once this algorithm is implemented as a new tool, the user will be able to locate and plot essentially any moving object for which a valid set of initial conditions is available, such as orbital elements, or position and velocity vectors.
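As a rough illustration of what such a numerical integration routine does, here is a minimal fixed-step RK4 integrator for unperturbed two-body motion in plain JavaScript. This is a didactic sketch in normalized units (μ = 1), entirely ours; the planned tool will of course include planetary perturbations, realistic units, and an adaptive high-order integrator:

```javascript
// Minimal fixed-step RK4 integration of unperturbed two-body motion,
// with acceleration a(r) = -mu * r / |r|^3 in normalized units (mu = 1).
function accel( r )
{
   const d = Math.hypot( r[0], r[1] );
   const k = -1/( d*d*d ); // mu = 1
   return [ k*r[0], k*r[1] ];
}

function rk4Step( r, v, h )
{
   const a1 = accel( r );
   const r2 = [ r[0] + 0.5*h*v[0], r[1] + 0.5*h*v[1] ];
   const v2 = [ v[0] + 0.5*h*a1[0], v[1] + 0.5*h*a1[1] ];
   const a2 = accel( r2 );
   const r3 = [ r[0] + 0.5*h*v2[0], r[1] + 0.5*h*v2[1] ];
   const v3 = [ v[0] + 0.5*h*a2[0], v[1] + 0.5*h*a2[1] ];
   const a3 = accel( r3 );
   const r4 = [ r[0] + h*v3[0], r[1] + h*v3[1] ];
   const v4 = [ v[0] + h*a3[0], v[1] + h*a3[1] ];
   const a4 = accel( r4 );
   return [
      [ r[0] + h*( v[0] + 2*v2[0] + 2*v3[0] + v4[0] )/6,
        r[1] + h*( v[1] + 2*v2[1] + 2*v3[1] + v4[1] )/6 ],
      [ v[0] + h*( a1[0] + 2*a2[0] + 2*a3[0] + a4[0] )/6,
        v[1] + h*( a1[1] + 2*a2[1] + 2*a3[1] + a4[1] )/6 ] ];
}

// Circular orbit of radius 1: after one full period (2*pi time units)
// the body should return very close to its starting point.
let r = [ 1, 0 ], v = [ 0, 1 ];
const h = 1e-3, steps = Math.round( 2*Math.PI/h );
for ( let i = 0; i < steps; ++i )
   [ r, v ] = rk4Step( r, v, h );
```

An ephemeris generator built on this principle would sample the integrated trajectory at regular intervals and fit interpolating polynomials, which is essentially how DE/LE ephemeris files are organized.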

Surface Simplification Examples

As noted in the introduction, to understand the changes we have made to our astrometry tools, an informed user should get a good grasp of the way surface simplification algorithms work in practice. A few examples with synthetic data sets can help to achieve this goal. The examples in this section have been produced with the following script executed in the PixInsight JavaScript runtime.

/*
 * Test script for the shape-preserving surface simplification algorithm.
 * Requires PixInsight >= 1.8.6.1471
 */

#include <pjsr/UndoFlag.jsh>

#define NOF_DATA_POINTS 100000

function RenderPoints( X, Y, width, height )
{
   let bmp = new Bitmap( width, height );
   bmp.fill( 0x00000000 );
   let g = new VectorGraphics( bmp );
   g.pen = new Pen( 0xffff0000 );
   g.antialiasing = true;
   for ( let i = 0, n = X.length; i < n; ++i )
      g.drawPoint( X.at( i ), Y.at( i ) );
   g.end();
   return bmp;
}

function SurfaceTest( image, N )
{
   let W = image.width;
   let H = image.height;

   /*
    * Interpolate random data points.
    */
   let X = new Vector( 0, N );
   let Y = new Vector( 0, N );
   let Z = new Vector( 0, N );
   for ( let i = 0; i < N; ++i )
   {
      let x = Math.random()*W;
      let y = Math.random()*H;
      X.at( i, x );
      Y.at( i, y );
      Z.at( i, image.interpolate( x, y ) );
   }

   /*
    * Simplify the data set.
    */
   let T = new ElapsedTime;
   let S = new SurfaceSimplifier( 0.001/*tolerance*/ );
   S.centroidInclusionEnabled = true;
   S.rejectionEnabled = true;
   S.rejectFraction = 0.2;
   let A = S.simplify( X, Y, Z );
   console.writeln( format( "<end><cbr>Simplify: %d -> %d  %s", X.length, A[0].length, T.text ) );
   let X1 = A[0];
   let Y1 = A[1];
   let Z1 = A[2];

   /*
    * Render the input data set.
    */
   {
      let window = new ImageWindow( W, H,
                                    3/*numberOfChannels*/,
                                    32/*bitsPerSample*/,
                                    true/*floatSample*/,
                                    true/*color*/, "original" );
      window.mainView.beginProcess( UndoFlag_NoSwapFile );
      window.mainView.image.blend( RenderPoints( X, Y, W, H ) );
      window.mainView.endProcess();
      window.show();
      window.zoomToOptimalFit();
   }

   /*
    * Render the output (simplified) data set.
    */
   {
      let window = new ImageWindow( W, H,
                                    3/*numberOfChannels*/,
                                    32/*bitsPerSample*/,
                                    true/*floatSample*/,
                                    true/*color*/, "simplified" );
      window.mainView.beginProcess( UndoFlag_NoSwapFile );
      window.mainView.image.blend( RenderPoints( X1, Y1, W, H ) );
      window.mainView.endProcess();
      window.show();
      window.zoomToOptimalFit();
   }
}

var view = ImageWindow.activeWindow.currentView;
if ( view.isNull )
   throw new Error( "No active image." );

SurfaceTest( view.image, NOF_DATA_POINTS );

With its default configuration, the script generates a sample of 100,000 pixel values interpolated from the active image at random pixel coordinates. This dense input surface is simplified using our algorithm with a tolerance of 0.001 Z-axis units, and the input and output data sets are rendered as new image windows. In these renditions, red dots represent surface data points.

If you are interested in the technical details of these algorithms, or if you want to get a deeper understanding of the way they can be controlled with our current implementation, we encourage you to experiment by varying working parameters, especially the properties of the SurfaceSimplifier object created in the SurfaceTest function.


Simplification test with a surface generated by the PixelMath expression: X()*Y().

a. Three-dimensional representation of the bivariate function.

b. Image representation of the test surface.

c. Input set of 100,000 pixel samples at random image coordinates.

d. Simplified surface of 4249 points (96% reduction). With the script's working parameters (0.001 tolerance and 20% rejection), the algorithm has found a set of flat subregions that tend to be of similar size and evenly distributed, probably as a result of the smooth curvature variations.



Simplification test with a sample of two-dimensional simplex noise.

a. Image representation of the test surface.

b. Simplified surface of 23366 points (77% reduction). Note that more points are preserved on the regions of higher variation of the sampled function, i.e. where the curvature of the surface represented by the image is relatively larger.

Note: Input sets of random points will be omitted from now on.



A posterized image is a particularly interesting test case for surface simplification.

a. Image representation of the test surface.

b. Simplified surface of 23798 points (76% reduction). The simplified surface concentrates more points on the transitions between regions of constant value, as is necessary to represent jump discontinuities.



Surface simplification test with a saturated image, where a large region has been truncated to white.

a. Image representation of the test surface.

b. Simplified surface of 8914 points (91% reduction). The saturated region is flat and hence can be maximally simplified.

The Shape-Preserving Surface Simplification Algorithm

This section is intended for readers with interest in development topics. Some technical knowledge of these algorithms is always desirable, although not required for their practical application. If you are not interested in technical details about the algorithms we have designed and implemented to achieve important improvements in our astrometry scripts and tools, you can skip this section completely.

Given a finite set of three-dimensional points

P = { p_i = (x_i, y_i, z_i) },  i = 1, …, n,

representing sampled values of a real bivariate function z = f(x, y), the shape-preserving surface simplification algorithm attempts to generate a reduced set of points with equivalent geometric properties to within a prescribed maximum error, or tolerance parameter.

Succinctly, the algorithm divides the input point space recursively on the XY plane into rectangular regions using custom quadtree structures. For each region, the algorithm finds the orientation of the dominant plane through principal component analysis. The deviation of function values from the dominant plane is then evaluated for the points in the region, and if the region is considered flat to within the tolerance parameter, its points are replaced with a simplified (reduced) set of points that tends to preserve the local shape of the original function over the region. If the region is tagged as curved, it is further divided using a new quadtree recursion, until no additional simplification can be achieved. The implemented algorithm can be summarized as follows:

Given:

  • An input set of three-dimensional points P.
  • An empty output set of three-dimensional points Q.
  • A tolerance parameter ε ≥ 0 in units of bivariate function values.
  • A rejection fraction parameter r, 0 ≤ r < 1.

1. Build a bucket point-region quadtree[1][2] structure from the input set P. In our current implementation, each quadtree recursion tends to split the current set of points into four quadrants. This can be controlled by varying the bucket size.

2. Perform a tree traversal. For each tree leaf node:

2.1 Let S be the set of points in the current node, and let n be the cardinality of S. If n < 4, then set δ = 0 and go to step 2.7.2.

2.2 Compute the centroid

c = (x̄, ȳ, z̄),

where x̄ = (1/n) ∑ x_i, summing over the n points of S, with similar expressions for ȳ and z̄.

2.3 Compute the covariance matrix of the node points with respect to the centroid:

C = | c_xx  c_xy  c_xz |
    | c_xy  c_yy  c_yz |
    | c_xz  c_yz  c_zz |,

where each covariance element is given by

c_xy = (1/n) ∑ (x_i − x̄)(y_i − ȳ),

with similar expressions for the rest of the matrix elements.

2.4 Compute the eigenvectors of the covariance matrix. Let v = (v_x, v_y, v_z) be the least eigenvector of C, that is, the eigenvector associated with the smallest eigenvalue.

2.5 The least eigenvector v defines the direction normal to the dominant plane of S, or in other words, the direction of least variation in S, perpendicular to its principal component axes. From the plane equation v·(p − c) = 0 we can compute the distance to the dominant plane for each p_i ∈ S:

d_i = |v·(p_i − c)|.

2.6 Compute a robust plane deviation:

δ = d_(⌈(1−r)·n⌉),

that is, we take the ⌈(1−r)·n⌉-th order statistic of the set of point-plane distances, which rejects the fraction r of largest distances as outliers.

2.7 Evaluate recursion/simplification:

2.7.1 Recursion: If δ > ε, then set P = S and go to step 1.

2.7.2 Simplification: If δ ≤ ε, then add the convex hull of S to Q.

In step 2.7.2, the convex hull[3] of the set S of node points is used as a simplification of the rectangular region covered by the quadtree node. Intuitively, the convex hull can be seen as the best simplified representation in the sense of local shape preservation: since the region is flat, all of the points inside the convex hull are redundant for function fitting. In the current implementation, the centroid point of the region (c in step 2.2) can also be included along with the convex hull, which increases the accuracy of surface representation at the cost of a small number of additional points.

The outlier rejection performed in step 2.6 is important for the application of this algorithm to the plate solving task, since input point sets are always noisy in this case. Uncertainty originates here mainly from errors in astrometric catalogs, from PSF fitting errors resulting from image noise and spurious data, and from difficulties in star identification on crowded fields. Outlier rejection is essential in this case to achieve the robust results that we need. In our implementation we set r = 0.1 by default for plate solving.
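For readers who want to experiment outside PixInsight, the flatness test of steps 2.2–2.6 can be approximated with a few lines of plain JavaScript. The sketch below is our simplified illustration, not the PCL implementation: it replaces the PCA step with a least-squares fit of the plane z = ax + by + c (adequate for surfaces that are single-valued in z, as in plate solving), then applies the same rank-based robust deviation to the residuals:

```javascript
// Simplified flatness test: fit z = a*x + b*y + c by least squares and
// return a rank-based robust deviation of the absolute residuals,
// rejecting the given fraction of largest residuals as outliers.
function robustPlaneDeviation( points, rejectFraction )
{
   // Accumulate the normal equations for z = a*x + b*y + c.
   let sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, sxz = 0, syz = 0, sz = 0;
   const n = points.length;
   for ( const [ x, y, z ] of points )
   {
      sxx += x*x; sxy += x*y; sx += x;
      syy += y*y; sy += y;
      sxz += x*z; syz += y*z; sz += z;
   }
   // Solve the 3x3 system by Cramer's rule.
   const det3 = ( m ) =>
        m[0][0]*( m[1][1]*m[2][2] - m[1][2]*m[2][1] )
      - m[0][1]*( m[1][0]*m[2][2] - m[1][2]*m[2][0] )
      + m[0][2]*( m[1][0]*m[2][1] - m[1][1]*m[2][0] );
   const M = [ [ sxx, sxy, sx ], [ sxy, syy, sy ], [ sx, sy, n ] ];
   const B = [ sxz, syz, sz ];
   const D = det3( M );
   const col = ( j ) => M.map( ( row, i ) => row.map( ( v, k ) => k == j ? B[i] : v ) );
   const a = det3( col( 0 ) )/D;
   const b = det3( col( 1 ) )/D;
   const c = det3( col( 2 ) )/D;
   // Robust deviation: the ceil((1-r)*n)-th order statistic of residuals.
   const d = points.map( ( [ x, y, z ] ) => Math.abs( z - a*x - b*y - c ) )
                   .sort( ( u, v ) => u - v );
   return d[ Math.min( n - 1, Math.ceil( ( 1 - rejectFraction )*n ) - 1 ) ];
}

// A perfectly planar region yields a near-zero deviation, so it would be
// simplified to its convex hull by step 2.7.2.
const pts = [];
for ( let i = 0; i < 10; ++i )
   for ( let j = 0; j < 10; ++j )
      pts.push( [ i/9, j/9, 2*( i/9 ) + 3*( j/9 ) + 1 ] ); // exact plane
console.log( robustPlaneDeviation( pts, 0.2 ) ); // ~0: region is flat
```

A curved sample, on the other hand, produces a deviation well above any reasonable tolerance, triggering the recursive subdivision of step 2.7.1.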

A C++ implementation of this algorithm is available in the PCL distribution; the relevant source code can be found in our open-source repositories.

An equivalent JavaScript object, SurfaceSimplifier, is also available in the core PJSR. An example of its use is shown in the test script included in this document.


References

[1] Mark de Berg et al., Computational Geometry: Algorithms and Applications Third Edition, Springer, 2010, Chapter 14.

[2] Hanan Samet, Foundations of Multidimensional and Metric Data Structures, Morgan Kaufmann, 2006, § 1.4.

[3] Joseph O'Rourke, Computational Geometry in C, 2nd Revised edition, Cambridge University Press, 1998, Chapter 3.