Superbias: Some Clarification

Juan Conejero

PixInsight Staff
The recently released Superbias tool has given rise to some discussions in other forums, going far beyond what we would have expected for the implementation of a relatively simple—although clever—algorithm. In some cases we have seen important misunderstandings and wrong concepts about the superbias method (also about PixInsight in general, but that's another topic), a few of them requiring some clarification in our opinion.

The first misconception is that the superbias method consists of averaging bias frame columns, throwing away variations with different orientations and other structures. This is not true, and in fact I don't understand how somebody could even imagine that we would release a new tool to do such a thing. Take a look at the following screenshot.


The screenshot shows three superbias frames. The one to the left has been generated with the default 7 layers of multiscale analysis. The superbias in the middle has been generated with 6 layers, and the one to the right with 5 layers. As you can see, the superbias frames are not just column averages: they preserve medium-scale and large-scale structures in the source master bias frame.

The superbias algorithm performs a multiscale decomposition of a master bias frame into a prescribed number of layers, following a dyadic scaling scheme to isolate structures of 1, 2, 4, ..., 2^n pixels, plus a residual layer with structures larger than 2^n pixels. The small-scale layers (from 1 to 2^n pixels) are removed from the multiscale decomposition, and the working image (where the superbias is being generated) is reconstructed by applying the inverse transform with the residual layer, which isolates structures at characteristic scales larger than 2^n pixels. As can be seen in the screenshot above, by using fewer layers we can preserve smaller structures in the generated superbias frame. For normal image calibration use, we recommend 7 or 6 multiscale layers. The current implementation uses the multiscale median transform (MMT) [1][2] by default, but the starlet transform [1][3] is also available as an option. MMT is used by default because it isolates structures better and allows for ringing-free transformations.
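To make the idea concrete, here is a minimal conceptual sketch in Python (assuming NumPy and SciPy), where a plain dyadic median-filter cascade stands in for the actual MMT/starlet machinery used by the tool; the window sizes and filter choice are assumptions, not the Superbias implementation itself:

Code:
# Conceptual sketch only: a dyadic median-filter cascade approximating the
# "remove small-scale layers, keep the large-scale residual" step.
import numpy as np
from scipy.ndimage import median_filter

def large_scale_residual(master_bias, n_layers=7):
    """Keep only structures at characteristic scales larger than 2^n_layers px."""
    c = np.asarray(master_bias, dtype=np.float64)
    for j in range(n_layers):
        # Dyadic scheme: the filtering scale roughly doubles at each layer,
        # so layer j isolates structures of about 2^j pixels.
        smoothed = median_filter(c, size=2 * 2**j + 1)
        # The detail layer (c - smoothed) holds the small-scale structures
        # that the superbias discards; only the smoothed image is kept.
        c = smoothed
    return c

With fewer layers the loop stops earlier, so smaller structures survive in the result, which is exactly the behavior shown by the three superbias frames above.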

Another wrong concept is that we are calculating the arithmetic mean of the pixels in each column of the source master bias. Of course we are not doing that. As you surely know, the mean is a non-robust estimator of location. A single hot pixel in the source master bias—and, in general, any abnormally bright or dark pixel—can produce a completely wrong mean value for the column to which it belongs. To overcome this problem, we compute a robust trimmed mean for each column of the master bias frame. The trimmed mean rejects outlier pixels and yields an accurate and statistically efficient value for each superbias column. By default, a 20% trimmed mean is used (the trimming factor is not available from the Superbias tool GUI, but can be changed with a hidden process parameter by editing the instance source code).
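As a rough illustration of that per-column estimate (only one ingredient of the full method), here is a hedged NumPy/SciPy sketch; interpreting the 20% as the fraction trimmed from each tail is an assumption, and the function names are illustrative:

Code:
# Illustrative sketch of a robust per-column estimate; not the tool's code.
import numpy as np
from scipy.stats import trim_mean

def column_trimmed_means(master_bias, trim=0.2):
    """Robust column estimate: outliers such as hot pixels are rejected."""
    # trim_mean sorts each column and discards the given fraction from both
    # ends before averaging, unlike the non-robust arithmetic mean.
    cols = trim_mean(master_bias, proportiontocut=trim, axis=0)
    # Broadcast the column estimates back to a full-size frame.
    return np.tile(cols, (master_bias.shape[0], 1))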

As for the usefulness of the superbias method, it largely depends on the data being calibrated and its quality. If you routinely make master bias frames from 500 or more frames, you already work with high-quality bias data, so a superbias will be of little benefit for you. If you just have 20 or 50 bias frames, a superbias will improve the performance of our dark scaling algorithm significantly. Even if the impact of a superbias is small or marginal, it costs virtually nothing (just one click!) to generate a noise-free calibration frame that is going to be applied to a whole data set. Besides image calibration, we are exploring how the superbias algorithm (and variations thereof) can be applied to perform other image analysis tasks.


[1] Starck, J.-L., Murtagh, F. and Fadili, J. M. (2010), Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press.

[2] Starck, J.-L. (2002), Nonlinear Multiscale Transforms, invited paper in Barth, T. J., Chan, T. and Haimes, R. (Eds.), Multiscale and Multiresolution Methods: Theory and Applications, Springer, pp. 239-279.

[3] Starck, J.-L., Murtagh, F. and Bertero, M. (2011), Starlet Transform in Astronomical Data Processing, in Handbook of Mathematical Methods in Imaging, ch. 34, Springer, pp. 1489-1531.
 
Juan,

Clearly there are larger scale structures in the master bias, which are very obvious on your example above with 5 layers of multiscale analysis. Should these structures not be retained in the master superbias so that those structures can be removed from the data during calibration?

I am just wondering why you recommend 7 or 6 layers of multiscale analysis, which seems to remove almost all those structures. Or perhaps I misunderstand what these structures are.

Your explanation goes a long way to explaining why, when I tried the Superbias, I did not see any difference; that's because I use 100 bias subs, which produces a reasonably good quality master bias anyway.

Thank you,

Mike
 
I am just wondering why you recommend 7 or 6 layers of multiscale analysis, which seems to remove almost all those structures.

When working with medium-scale structures, one always has to be careful to ensure that the structures are real. If the source master bias is very noisy, some medium-scale structures isolated at scales above 8-16 pixels can be false structures generated by accidental grouping of noise pixels at smaller scales. These groupings may pervade the multiscale representation at higher scales. This is one of the reasons why we are using the MMT instead of other transforms such as starlet or the pyramidal wavelet transform in this case: the MMT is much better at isolating structures within a limited set of layers.

In the example above the master bias has been generated from 40 bias frames. I would say that this is a rather typical case for most PixInsight users. The master is still quite noisy as you can see, and I personally wouldn't trust all the structures isolated with 5 multiscale layers. The superbias made with 6 layers is the best option in this case IMO. The tool's default is 7 layers to ensure that reliable results can be obtained with poor quality masters made from 20 bias frames or even less.

One way to assess medium-scale superbias structures is to build several superbias frames with different masters and compare the results. For example, in your case you can make two master biases with 50 and 100 bias frames, then you can generate superbias frames from both with the same parameters. If the medium-scale structures are clearly present in both cases, then they are reliable.
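A simple numerical version of this repeatability check could look like the following sketch, assuming the two superbias frames are already available as NumPy arrays; the correlation threshold is an arbitrary assumption:

Code:
# Repeatability check between two superbias frames built from independent
# master biases; 0.9 is an arbitrary threshold, adjust to taste.
import numpy as np

def structures_repeatable(superbias_a, superbias_b, threshold=0.9):
    a = superbias_a - superbias_a.mean()
    b = superbias_b - superbias_b.mean()
    # Pearson correlation of the two frames: real medium-scale structures
    # should correlate strongly, noise-driven ones should not.
    r = float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
    return r >= threshold, r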

Another way to test a superbias is by subtracting the superbias from its source master bias. The residual should be random noise exclusively, modulo defective pixels.
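For reference, this residual test can be reproduced outside PixInsight with a short sketch like the following, assuming the master bias and superbias are already loaded as NumPy arrays (names are illustrative):

Code:
# Sketch of the residual test: subtract the superbias from its source master
# and summarize the remaining signal.
import numpy as np

def residual_report(master_bias, superbias):
    residual = np.abs(master_bias - superbias)   # like the PixelMath abs() step
    med = np.median(residual)
    return {
        "mean":   float(residual.mean()),
        "median": float(med),
        "stdDev": float(residual.std()),
        "MAD":    float(np.median(np.abs(residual - med))),
    }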


In the above screenshot, I have generated a superbias from a 40-frame master bias using 6 multiscale layers. Then I have used PixelMath to compute the absolute value of the difference between the superbias and the master bias. The result is pure random noise without any significant structure. This latter assertion has to be supported by evidence, though. In the following screenshot:


I have applied the same Superbias process to the residual. As expected, the result is a constant image without any significant structure. This result has the following statistical properties:

Code:
superbias1
            K
count (%)   100.0000
count (px)  16777216
mean        1.075061e-05
median      1.073933e-05
stdDev      3.106120e-07
avgDev      2.840942e-07
MAD         2.715307e-07
minimum     8.162066e-06
maximum     1.505406e-05

As you can see, the dispersion of this image is as low as about twice the machine epsilon for the IEEE 754 32-bit floating point format.
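That comparison is easy to verify; here is a quick check (the stdDev value is taken from the statistics listed above):

Code:
# Comparing the residual's dispersion with the IEEE 754 32-bit machine epsilon.
import numpy as np

eps32 = np.finfo(np.float32).eps    # about 1.19e-07
stdDev = 3.106120e-07               # stdDev from the statistics above
print(stdDev / eps32)               # about 2.6, i.e. roughly twice epsilon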
 
Juan

I tried your Superbias tool; it seems to work very well.

Thanks for all the hard work in making my images look better.

Regards

Julian
 
Juan,

Thank you for the detailed explanation. I have repeated the above workflow, but I get a very different result. I am just wondering whether, at the last stage where you display an even grey frame, you did not turn on the 24-bit STF?

My example below shows the superbias made from 30 bias subs, then the residual, which looks like random noise, then the superbias applied to that residual, giving the even grey frame. But if I make a clone of that and apply a 24-bit STF, the result is a medium-scale pattern, which I have determined is noise, as I don't get a repeatable result with superbiases made from different numbers of subs.

The stats for that final image are:

Code:
superbias4_clone
            K
count (%)   100.0000
count (px)  9188256
mean        4.385633e-005
median      4.386843e-005
avgDev      7.332499e-007
MAD         6.168520e-007
minimum     3.940578e-005
maximum     4.848888e-005

Mike
 

Attachment: superbias.jpg
Mike,

You are absolutely right, I got carried away by the numbers and didn't pay attention to the visual representation. Sorry if it has seemed that I've tried to hide something.

The conclusions of my test with the superbias residual are still 100% correct. As you can see in the following video (please watch it in 1080p HD quality if possible):

http://youtu.be/BfGuKqSb1BA

the maximum differences between neighboring pixels of this residual are about 5x10^-7. With 24-bit STF visualization, the residual shows a medium-scale pattern and a vertical pattern that does not match the master bias frame. With a dispersion of 3x10^-7, this image is a negligible residual with no significant structures.
 
Juan,

Juan Conejero said:
You are absolutely right, I got carried away by the numbers and didn't pay attention to the visual representation. Sorry if it has seemed that I've tried to hide something.

I did not think that at all; I am very glad I spotted the oversight, as it did not match my findings. Thank you also for the very high-quality video; I wish I knew how to make such a video.

Thanks,

Mike
 
Hey guys,

Sorry to rekindle this thread after so long but I need to ask a question if you don't mind.

With reference to the attached image in Juan's very first post, which shows three superbias frames at 5, 6 and 7 layers: I'm not going to get into the scientific stuff here, so I'll ask a simple question based on the three example superbias images in that picture.

Which one of the three would you take forward to continue your preprocessing: the one showing the most 'blobs' (5 layers), the one with some blobs (6 layers), or the one showing no blobs (7 layers)?

Generally I take 31 bias frames, integrate them, and apply Superbias to the result. Attached is a picture with the layers set from 5 through 8.

Would I be right to select the 7-layer superbias to take forward? (I'm selecting the one which shows the fewest blobs, but 8 layers seems to be going too far the other way.)

Thanks
Paul


 

Attachment: FFF.jpg
Paul,

What I would do is take more bias frames, say 124 (which is 4 x 31). Make a superbias out of each set of 31 and compare the results. If the 'blobs' appear to be the same in all the results, then you can be fairly sure it's a real bias pattern. If they are different, they are noise, and you should use more layers to make your final superbias.

Taking bias frames is quick so it won't take long to get the optimal settings.

Mike
 
Juan Conejero said:
[Quotes the explanation of the residual test and its statistics from the reply above.]


I've been trying to do some research about the superbias and came across this very interesting thread. I'm quite new to PixInsight, but I'm a quick learner. I've followed the process you explained, and I'm wondering if it's safe to say that I should use the lowest number of multiscale layers that gives me no structure when I apply the same settings to the residual?

I'm confused about the statistics of the image and the machine epsilon part; I'm not that comfortable with that, and mine has quite a different relation between the values.

Attachment: 2020-06-11 15 23 24.png


Here are the results for 5, 4 and 3 multiscale layers. 4 is clean, but if I try with 3 and do the same process, applying 3 again to the residual gives me a few small structures. Should I then just go with 4, no matter the values in the statistics?
 