The recently released Superbias tool has given rise to some discussion in other forums, going far beyond what we would have expected for the implementation of a relatively simple, although clever, algorithm. In some cases we have seen significant misunderstandings and misconceptions about the superbias method (and about PixInsight in general, but that's another topic), a few of which require some clarification in our opinion.
The first misconception is that the superbias method consists of averaging bias frame columns, throwing away variations with other orientations and any other structures. This is not true, and in fact I don't understand how anybody could even imagine that we would release a new tool to do such a thing. Take a look at the following screenshot.
[Screenshot: three superbias frames generated with 7, 6 and 5 multiscale layers]
Click for full-size image (http://forum-images.pixinsight.com/20140710/SB/sb-comparison.jpg)
The screenshot shows three superbias frames. The one on the left has been generated with the default 7 layers of multiscale analysis, the one in the middle with 6 layers, and the one on the right with 5 layers. As you can see, the superbias frames are not just column averages: they preserve medium-scale and large-scale structures present in the source master bias frame.
The superbias algorithm performs a multiscale decomposition of a master bias frame into a prescribed number of layers, following a dyadic scaling scheme to isolate structures of 1, 2, 4, ..., 2^n, and > 2^n pixels. The small-scale layers (from 1 to 2^n pixels) are removed from the multiscale decomposition, and the working image (where the superbias is being generated) is reconstructed by applying the inverse transform to the residual layer, which isolates structures at characteristic scales larger than 2^n pixels. As can be seen in the screenshot above, by using fewer layers we can preserve smaller structures in the generated superbias frame. For normal image calibration work, we recommend 7 or 6 multiscale layers. The current implementation uses the multiscale median transform (MMT) [1][2] by default, but the starlet transform [1][3] is also available as an option. MMT is used by default because it isolates structures better and allows for ringing-free transformations.
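For readers who want to see the layer-removal idea in code, here is a minimal sketch in Python. It is not the actual Superbias implementation: a plain median-filter pyramid stands in for the MMT, and the function name and parameters are illustrative only.

```python
# Minimal sketch of the layer-removal step, assuming a median-filter pyramid
# as a stand-in for PixInsight's multiscale median transform. Each iteration
# smooths at the next dyadic scale; the difference (the small-scale layer) is
# discarded, and only the large-scale residual is kept.
import numpy as np
from scipy.ndimage import median_filter

def superbias_sketch(master_bias, n_layers=7):
    """Remove n_layers small-scale layers; return the large-scale residual."""
    smooth = np.asarray(master_bias, dtype=np.float64)
    for j in range(n_layers):
        size = 2 * 2**j + 1              # dyadic kernel sizes: 3, 5, 9, 17, ...
        smoother = median_filter(smooth, size=size)
        # layer_j = smooth - smoother    # small-scale layer, discarded here
        smooth = smoother                # keep only the residual
    return smooth                        # structures larger than ~2^n_layers px

# Illustrative usage:
# superbias = superbias_sketch(master_bias, n_layers=6)
```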
Another wrong concept is that we are calculating the arithmetic mean of the pixels in each column of the source master bias. Of course we are not doing that. As you surely know, the mean is a non-robust estimator of location: a single hot pixel in the source master bias, and in general any abnormally bright or dark pixel, can produce a completely wrong mean value for the column it belongs to. To overcome this problem, we compute a robust trimmed mean for each column of the master bias frame. The trimmed mean rejects outlier pixels and yields an accurate and efficient estimate for each superbias column. By default, a 20% trimmed mean is used (the trimming factor is not available from the Superbias tool GUI, but it can be changed with a hidden process parameter by editing the instance source code).
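As a sketch only, and assuming scipy's convention of trimming the given fraction from each tail (the Superbias tool's exact trimming scheme is not documented here), the per-column estimate could look like this:

```python
# Robust per-column estimate: a trimmed mean computed down each column of the
# master bias. Outliers (hot/cold pixels) are cut away before averaging, so
# they cannot skew the column value the way a plain arithmetic mean would.
import numpy as np
from scipy.stats import trim_mean

def column_trimmed_means(master_bias, trim=0.2):
    # proportiontocut=0.2 removes the lowest 20% and highest 20% of each
    # column before averaging (scipy's convention; whether Superbias trims
    # per tail or in total is an assumption here).
    return trim_mean(master_bias, proportiontocut=trim, axis=0)

# Illustrative usage: replicate the column values over the rows of the frame.
# superbias_cols = np.tile(column_trimmed_means(master), (master.shape[0], 1))
```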
As for the usefulness of the superbias method, it largely depends on the data being calibrated and its quality. If you routinely build master bias frames from 500 or more frames, you already work with high-quality bias data, so a superbias will be of little benefit to you. If you just have 20 or 50 bias frames, a superbias will improve the performance of our dark scaling algorithm significantly. Even when the impact of a superbias is small or marginal, it costs virtually nothing (just one click!) to generate a noise-free calibration frame that is going to be applied to a whole data set. Besides image calibration, we are exploring how the superbias algorithm (and variations thereof) can be applied to other image analysis tasks.
[1] Starck, J.-L., Murtagh, F., and Fadili, J. M. (2010), Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press.
[2] Barth, T. J., Chan, T., and Haimes, R. (Eds.) (2002), Multiscale and Multiresolution Methods: Theory and Applications, Springer. Invited paper: Starck, J.-L., Nonlinear Multiscale Transforms, pp. 239-279.
[3] Starck, J.-L., Murtagh, F., and Bertero, M. (2011), Starlet Transform in Astronomical Data Processing, in Handbook of Mathematical Methods in Imaging, ch. 34, Springer, pp. 1489-1531.
I am just wondering why you recommend 7 or 6 layers of multiscale analysis, which seems to remove almost all those structures.
When working with medium-scale structures, one always has to be careful to ensure that the structures are real. If the source master bias is very noisy, some medium-scale structures isolated at scales above 8-16 pixels can be false structures generated by the accidental grouping of noise pixels at smaller scales. These groupings may pervade the multiscale representation at larger scales. This is one of the reasons why we are using the MMT instead of other transforms, such as starlet or the pyramidal wavelet transform, in this case: the MMT is much better at isolating structures within a limited set of layers.
In the example above the master bias has been generated from 40 bias frames, which I would say is a rather typical case for most PixInsight users. The master is still quite noisy, as you can see, and I personally wouldn't trust all of the structures isolated with 5 multiscale layers. The superbias made with 6 layers is the best option in this case, IMO. The tool's default is 7 layers to ensure that reliable results can be obtained with poor-quality masters made from 20 bias frames or even fewer.
One way to assess medium-scale superbias structures is to build several superbias frames from different masters and compare the results. For example, in your case you can make two master biases with 50 and 100 bias frames, and then generate superbias frames from both with the same parameters. If the medium-scale structures are clearly present in both cases, then they are reliable; a numerical version of this check is sketched below.
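As a rough numerical counterpart to the visual comparison (the array names are hypothetical; in practice you would load the two generated superbias frames as 2-D arrays):

```python
# Cross-check sketch: structures that repeat in superbiases built from two
# independent masters are real; structures that do not repeat are noise
# artifacts. superbias_a and superbias_b are hypothetical 2-D arrays holding
# the two generated superbias frames.
import numpy as np

def structure_correlation(superbias_a, superbias_b):
    a = superbias_a - superbias_a.mean()
    b = superbias_b - superbias_b.mean()
    # A correlation coefficient close to 1 means the medium-scale structures
    # agree across the two masters.
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```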
Another way to test a superbias is to subtract it from its source master bias. The residual should consist exclusively of random noise, modulo defective pixels.
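In code, the test could look like the following sketch. Synthetic stand-ins are used so the snippet runs on its own; with real data you would load your own master and superbias frames, or simply use the PixelMath difference described below.

```python
# Residual test sketch: subtract the superbias from the master bias and check
# that nothing but noise remains. A synthetic column pattern plus noise stands
# in for a real master bias frame here.
import numpy as np

rng = np.random.default_rng(0)
pattern = np.tile(rng.normal(0.1, 1e-3, size=(1, 512)), (512, 1))  # fake bias structure
master = pattern + rng.normal(0.0, 2e-3, size=(512, 512))          # structure + noise
superbias = pattern                                                 # ideal superbias

residual = np.abs(master - superbias)   # same as the PixelMath abs() difference

# For a structure-free residual, the per-column means should scatter only as
# much as pure noise allows, roughly sigma / sqrt(number of rows):
print("stdDev of residual      :", residual.std())
print("column-to-column scatter:", residual.mean(axis=0).std())
print("expected for pure noise :", residual.std() / np.sqrt(residual.shape[0]))
```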
[Screenshot: absolute difference between the master bias and its superbias]
Click for full-size image (http://forum-images.pixinsight.com/20140711/SB/sb-residual-1.jpg)
In the above screenshot, I have generated a superbias from a 40-frame master bias using 6 multiscale layers. Then I have used PixelMath to compute the absolute value of the difference between the superbias and the master bias. The result is pure random noise without any significant structure. This latter assertion has to be supported by evidence, though. In the following screenshot:
[Screenshot: Superbias applied to the residual image]
Click for full-size image (http://forum-images.pixinsight.com/20140711/SB/sb-residual-2.jpg)
I have applied the same Superbias process to the residual. As expected, the result is a constant image without any significant structure. This result has the following statistical properties:
superbias1 (channel K)
count (%)    100.0000
count (px)   16777216
mean         1.075061e-05
median       1.073933e-05
stdDev       3.106120e-07
avgDev       2.840942e-07
MAD          2.715307e-07
minimum      8.162066e-06
maximum      1.505406e-05
As you can see, the dispersion of this image is extremely low: the standard deviation is only about 2.6 times the machine epsilon (about 1.19e-07) of the IEEE 754 32-bit floating point format.
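The comparison is easy to reproduce (the stdDev value is taken from the table above):

```python
# Reproduce the epsilon comparison with the stdDev value from the table.
import numpy as np

eps32 = np.finfo(np.float32).eps   # 1.1920929e-07 for IEEE 754 binary32
print(3.106120e-07 / eps32)        # ~2.6 times the single-precision epsilon
```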