Author Topic: PixInsight 1.6.1 - StarAlignment: New Pixel Interpolation Algorithms  (Read 8687 times)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
StarAlignment: New Pixel Interpolation Algorithms

The StarAlignment tool released with PixInsight 1.6.1 provides the full set of pixel interpolation algorithms available on the PixInsight/PCL platform, as shown in the screenshot below.


Automatic

This is the default interpolation mode. In this mode, StarAlignment selects one of the following interpolation algorithms as a function of the rescaling involved in the registration's geometric transformation (see the sketch after this list):

- Cubic B-spline filter interpolation when the scaling factor is below approximately 0.25.
- Mitchell-Netravali filter interpolation for scaling factors between approximately 0.25 and 0.6.
- Bicubic spline interpolation in the rest of cases: from moderate size reductions to no rescaling or upsampling.
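
Purely as an illustration, here is a minimal C++ sketch of that selection logic. The thresholds are the approximate values quoted in the list above, but the names and structure are hypothetical, not the actual StarAlignment/PCL source:

   // Hypothetical sketch of the automatic selection described above.
   enum Interpolation { BicubicSpline, MitchellNetravali, CubicBSpline };

   Interpolation AutoSelectInterpolation( double scaleFactor )
   {
      if ( scaleFactor < 0.25 )  // strong size reduction
         return CubicBSpline;
      if ( scaleFactor < 0.6 )   // moderate size reduction
         return MitchellNetravali;
      return BicubicSpline;      // mild reduction, no rescaling, or upsampling
   }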

Bicubic spline

This is, in general, the most accurate pixel interpolation algorithm available for image registration on the PixInsight/PCL platform. In most cases this algorithm yields the best results in terms of preservation of original image features and subpixel registration accuracy. When this interpolation is selected (either explicitly or automatically), a linear clamping mechanism is used to prevent oscillations of the cubic interpolation polynomials in the presence of jump discontinuities. Linear clamping is controlled with the linear clamping threshold parameter.

Bilinear interpolation

This interpolation can be useful to register low-SNR linear images, in the rare cases where bicubic spline interpolation generates oscillations between noisy pixels that are too strong to be avoided completely with the linear clamping feature.

Cubic filter interpolations

These include Mitchell-Netravali, Catmull-Rom spline, and cubic B-spline. These interpolation algorithms provide the higher smoothness and subsampling accuracy that can be necessary when the registration transformation involves relatively strong size reductions.
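
All three are members of the cubic filter family analyzed by Mitchell and Netravali, controlled by two constants B and C: Mitchell-Netravali uses B = C = 1/3, Catmull-Rom B = 0, C = 1/2, and the cubic B-spline B = 1, C = 0. As an illustrative C++ sketch of the family kernel (not the PCL implementation):

   #include <cmath>

   // Cubic filter family of Mitchell & Netravali. B = C = 1/3 gives the
   // Mitchell-Netravali filter; B = 0, C = 1/2 gives Catmull-Rom; and
   // B = 1, C = 0 gives the cubic B-spline. Illustrative sketch only.
   double CubicFilterKernel( double x, double B, double C )
   {
      x = std::fabs( x );
      if ( x < 1 )
         return ( (12 - 9*B - 6*C)*x*x*x
                + (-18 + 12*B + 6*C)*x*x
                + (6 - 2*B) )/6;
      if ( x < 2 )
         return ( (-B - 6*C)*x*x*x
                + (6*B + 30*C)*x*x
                + (-12*B - 48*C)*x
                + (8*B + 24*C) )/6;
      return 0;
   }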

Nearest neighbor

This is the simplest possible pixel interpolation method. It always produces the worst results, especially in terms of registration accuracy, and it introduces discontinuities due to its simplistic interpolation scheme. However, in the absence of scaling, nearest neighbor preserves the original noise distribution in the registered images, a property that can be useful in some image analysis applications. Nearest neighbor is not recommended for production work, mainly because it does not provide subpixel registration accuracy.
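
For reference, a minimal sketch of nearest neighbor sampling on a single-channel, row-major raster; the function and storage layout are illustrative assumptions, not PCL code:

   #include <cmath>
   #include <vector>

   // Nearest neighbor copies the closest source pixel value; it performs
   // no interpolation at all, hence no subpixel accuracy.
   double NearestNeighbor( const std::vector<double>& img, int width,
                           double x, double y )
   {
      int xi = int( std::floor( x + 0.5 ) );  // round to nearest column
      int yi = int( std::floor( y + 0.5 ) );  // round to nearest row
      return img[ yi*width + xi ];
   }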
« Last Edit: 2010 August 16 03:02:19 by Juan Conejero »
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • Posts: 1456
  • We have cookies? Where?
Thanks Juan,

I assume that you will be hoping to build these little dissertations into a more 'general' type of tutorial or guide?

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline RBA

  • PixInsight Guru
  • Posts: 511
    • DeepSkyColors
I know it's been a while since this was posted, but today I had a friendly discussion with a good friend about interpolation algorithms for registration, and some of the things he said may contradict what Juan stated above, particularly that nearest neighbor always produces the worst results. BTW, I did direct him to this page while we were discussing it. Maybe he'll chime in later on?  :angel:

Here are some snaps of what he said:

If you have a lot of subs (30 is a lot and 10 isn't) and if you are well oversampled (3.5x or more), Nearest Neighbor will lose the least resolution. If you don't have both of the above you can get jagged artifacts with Nearest Neighbor and the stars will look funny.

And directed at the "NN giving the worst results" claim:

Are you judging results by how well formed the stars are or by how much you worsen the FWHM and are you dealing with large numbers of subs, small numbers, 0.5"/px image scale or 4"/px image scale? You _can_ get very well formed stars with NN if you have the data that works well with it.

With a small image scale the lack of sub-pixel accuracy isn't much of a problem. If your seeing is 1.9" FWHM and your image scale is 0.5"/px, then when you align at the pixel scale you are still aligning at smaller than the overall size of the hummock of the point spread, and with a lot of subs the sub-pixel effect will sort of work out by averaging, with 18% going to this px and 72% to that px (remember that dithering means the centroid of the star is not in the same place relative to the center of the pixel it lies in from frame to frame).


At the scale I usually work at, this is of no concern to me. I mainly do widefields lately, and it's clear as water that NN would be a terrible choice for registering my images. But in the overall game of choosing an interpolation algorithm, doesn't the above also make sense? If so, there will be cases where NN might be the best choice, right?

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • Posts: 7111
    • http://pixinsight.com/
Hi Rogelio,

Quote
With a small image scale the lack of sub-pixel accuracy isn't much of a problem. ...

That seems quite reasonable after integrating a large number of images. But IMO this only justifies that NN can be used in these particular cases without practical problems, not that it is the best option. It just says that (1) when the images are sufficiently oversampled, subpixel registration is not really critical in terms of actual resolution, and (2) with a large number of images, the artifacts generated by NN tend to cancel out after averaging.

However, in my opinion that doesn't prove that NN is better in terms of detail preservation. Bicubic spline and bilinear interpolation, for example, do provide subpixel registration, and I really can't see why subpixel accuracy would not be desirable for registration of any kind of image.

It is true that NN is the only algorithm that preserves the original distribution of pixels, i.e. the existing noise distribution. This happens because NN does not actually interpolate; it just copies existing pixel values from one place to another. The rest of the interpolation algorithms are true interpolations and, as such, involve some low-pass filtering of the data. Our implementation of bicubic spline, however, applies a special separable filter with negative lobes that efficiently compensate for the low-pass filtering effect. The original interpolation function was described and analyzed by R. G. Keys in Cubic Convolution Interpolation for Digital Image Processing, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 29, pp. 1153-1160 (1981):

   f(x) = (a+2)*x^3 - (a+3)*x^2 + 1       for 0 <= x <= 1
   f(x) = a*x^3 - 5*a*x^2 + 8*a*x - 4*a   for 1 < x <= 2
   f(x) = 0                               otherwise


If you plot this function, you'll see that it has a negative lobe or valley below the X axis (two lobes, if you represent it symmetrically with respect to the origin), controlled by the constant a above. These lobes provide a small amount of high-pass filtering as a function of a (-1 <= a < 0). In our implementation, I have set a=-1/2, as recommended by the author.
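
In code form, the kernel can be transcribed directly, taking x as the absolute distance to cover the symmetric case. This is an illustrative transcription of the formula above, not the PCL source:

   #include <cmath>

   // Keys (1981) cubic convolution kernel, with a = -1/2 as recommended.
   double KeysKernel( double x, double a = -0.5 )
   {
      x = std::fabs( x );
      if ( x <= 1 )
         return (a + 2)*x*x*x - (a + 3)*x*x + 1;
      if ( x <= 2 )
         return a*x*x*x - 5*a*x*x + 8*a*x - 4*a;
      return 0;
   }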

The negative lobes, and the fact that this is a cubic polynomial, may lead to artifacts caused by oscillations at jump discontinuities, which happen frequently in linear images. That's why I modified the original algorithm to introduce a linear clamping device. Linear clamping prevents oscillations by switching to linear interpolation in the presence of very high differences between neighboring pixels; it acts at the individual row or column level (the filter is separable) without degrading the overall filter performance.
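
To make the idea concrete, here is a hypothetical one-dimensional sketch of such a clamping device, reusing the KeysKernel function from the sketch above. The actual PCL logic and the exact semantics of the clamping threshold may differ:

   // p0..p3 are four consecutive pixels of a row or column; t in [0,1]
   // is the interpolation offset between p1 and p2.
   double ClampedCubic( double p0, double p1, double p2, double p3,
                        double t, double clampThreshold )
   {
      // Across a large jump between the two central neighbors the cubic
      // polynomial would overshoot, so fall back to linear interpolation.
      if ( std::fabs( p2 - p1 ) > clampThreshold )
         return p1 + t*(p2 - p1);

      // Otherwise evaluate the cubic convolution with the Keys kernel.
      return p0*KeysKernel( t + 1 ) + p1*KeysKernel( t )
           + p2*KeysKernel( t - 1 ) + p3*KeysKernel( t - 2 );
   }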

Of course, no interpolation algorithm is perfect. When we interpolate observational data we are always cheating to some extent. It is true that an interpolation like the one above may sometimes introduce small aliasing artifacts in the noise distribution over background areas after registration, especially in the presence of rotation and high amounts of noise. But these artifacts cancel out without problems after integrating (averaging) a few frames, and they are definitely much better than the seams generated by NN. In my opinion, the excellent subpixel accuracy that these interpolations provide more than compensates for the small aliasing problems.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline RBA

  • PixInsight Guru
  • Posts: 511
    • DeepSkyColors
Thanks for the response, Juan. As usual, you provided a wealth of information that greatly surpasses what I was expecting!