The recently released Superbias tool has sparked discussions in other forums, going far beyond what we would have expected for the implementation of a relatively simple (although clever) algorithm. In some cases we have seen serious misunderstandings and wrong ideas about the superbias method (and about PixInsight in general, but that's another topic), a few of which require clarification in our opinion.

The first misconception is that the superbias method consists of averaging bias frame columns, throwing away variations with different orientations and other structures. This is not true, and frankly I don't understand how anyone could imagine that we would release a new tool to do such a thing. Take a look at the following screenshot.

The screenshot shows three superbias frames. The one on the left has been generated with the default 7 layers of multiscale analysis. The superbias in the middle has been generated with 6 layers, and the one on the right with 5 layers. As you can see, the superbias frames are not just column averages: they preserve medium-scale and large-scale structures in the source master bias frame.

The superbias algorithm performs a multiscale decomposition of a master bias frame into a prescribed number of layers, following a dyadic scaling scheme to isolate structures of 1, 2, 4, ..., 2^n and > 2^n pixels. The small-scale layers (from 1 to 2^n pixels) are removed from the multiscale decomposition, and the working image (where the superbias is being generated) is reconstructed by applying the inverse transform to the residual layer, which isolates structures at characteristic scales larger than 2^n pixels. As can be seen in the screenshot above, using fewer layers preserves smaller structures in the generated superbias frame. For normal image calibration use, we recommend 7 or 6 multiscale layers. The current implementation uses the multiscale median transform (MMT) [1][2] by default, but the starlet transform [1][3] is also available as an option. MMT is used by default because it isolates structures better and allows for ringing-free transformations.
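The idea can be sketched in a few lines of Python. This is only an illustrative approximation, not PixInsight's actual implementation: it substitutes a plain dyadic median-filter pyramid for the true multiscale median transform, and the function name `superbias_sketch` is ours.

```python
import numpy as np
from scipy.ndimage import median_filter

def superbias_sketch(master_bias, n_layers=7):
    """Illustrative sketch of the superbias idea (not the actual MMT code).

    Builds a dyadic multiscale median decomposition, discards the
    small-scale detail layers, and keeps only the large-scale residual.
    """
    residual = np.asarray(master_bias, dtype=np.float64)
    for j in range(n_layers):
        # Median smoothing at dyadic scale 2^j stands in for the MMT step.
        size = 2 * 2**j + 1
        smoothed = median_filter(residual, size=size)
        # detail = residual - smoothed  # small-scale layer, discarded
        residual = smoothed
    # The residual now holds only structures at scales > 2^n_layers pixels.
    return residual
```

With fewer iterations of the loop (fewer layers), smaller structures survive in the residual, which matches the behavior shown in the screenshot above.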

Another misconception is that we are calculating the arithmetic mean of the pixels in each column of the source master bias. Of course we are not doing that. As you surely know, the mean is a non-robust estimator of location. A single hot pixel in the source master bias—and, in general, any abnormally bright or dark pixel—can produce a completely wrong mean value for the column where it belongs. To overcome this problem, we compute a robust trimmed mean for each column of the master bias frame. The trimmed mean rejects outlier pixels and yields an accurate and statistically efficient value for each superbias column. By default, a 20% trimmed mean is used (the trimming factor is not available from the Superbias tool GUI, but can be changed with a hidden process parameter by editing the instance source code).
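The column estimator can be sketched as follows. This is a simplified illustration assuming a symmetric trim at both tails; the helper name `column_trimmed_means` is ours, not the tool's internal API.

```python
import numpy as np

def column_trimmed_means(master_bias, trim=0.20):
    """Robust per-column trimmed mean, in the spirit of the superbias column estimate.

    Sorts each column and discards the lowest and highest `trim` fraction
    of pixels (e.g. hot or cold pixels) before averaging the rest.
    """
    data = np.sort(np.asarray(master_bias, dtype=np.float64), axis=0)
    rows = data.shape[0]
    k = int(rows * trim)          # pixels rejected at each tail
    core = data[k:rows - k, :]    # keep the central fraction of each column
    return core.mean(axis=0)
```

A single hot pixel in a column falls into the rejected upper tail, so it cannot bias the column's value the way a plain arithmetic mean would.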

As for the usefulness of the superbias method, it largely depends on the data being calibrated and its quality. If you routinely make master bias frames from 500 or more frames, you already work with high-quality bias data, so a superbias will be of little benefit to you. If you have just 20 or 50 bias frames, a superbias will improve the performance of our dark scaling algorithm significantly. Even when the benefit of a superbias is small or marginal, it costs virtually nothing (just one click!) to generate a noise-free calibration frame that is going to be applied to a whole data set. Beyond image calibration, we are exploring how the superbias algorithm (and variations thereof) can be applied to other image analysis tasks.

[1] Starck, J.-L., Murtagh, F. and Fadili, J. M. (2010), *Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity*, Cambridge University Press.

[2] Barth, T. J., Chan, T. and Haimes, R. (Eds.) (2002), *Multiscale and Multiresolution Methods: Theory and Applications*, Springer. Invited paper: Starck, J.-L., *Nonlinear Multiscale Transforms*, pp. 239-279.

[3] Starck, J.-L., Murtagh, F. and Bertero, M. (2011), *Starlet Transform in Astronomical Data Processing*, in *Handbook of Mathematical Methods in Imaging*, ch. 34, Springer, pp. 1489-1531.