PixInsight's Flat field correction with data of OSC cameras

bulrichl

There was a thread about "Preparing color-balanced OSC flats in PixInsight" https://www.cloudynights.com/topic/667524-preparing-color-balanced-osc-flats-in-pixinsight/
at the Cloudy Nights forum some time ago. Unfortunately, no consensus was reached and the thread petered out. The original poster has also not answered my recent private messages, so I think this topic is appropriate to discuss here.

The main questions that were raised in the cited thread were:
1) How can we judge a mosaiced MasterFlat? (e.g. comparison of uncalibrated light frames with calibrated ones and with the MasterFlat)
2) Why does flat field correction of OSC data cause a very strong color cast?
3) How can the strong color cast be avoided?
4) Does the strong color cast after normal flat field correction of OSC data degrade the SNR of the images?


Appended is a typical MasterFlat of my ASI294MC Pro (gain 120, offset 30, -10 °C, flat field box undimmed, 40 flat frames, exposure time 0.003 s; the flat frames were calibrated with a MasterFlatDark and integrated) and its histogram.

The Bayer/mosaic pattern of the camera is RGGB; PixInsight's assignment of the CFA channels is:
Code:
0  2    R  G
1  3    G  B


1) Judgement of the Flat field correction / the MasterFlat
The MasterFlat cannot be judged from this representation. The reason is that the bayered representation does not show any dust donuts, due to the large intensity difference between CFA1/CFA2 (green, strong) and CFA0/CFA3 (red and blue, weak). Applying an STF Auto Stretch worsens the situation, the result being dimmer rather than more contrasty.

In the debayered MasterFlat dust donuts can be made out, but the image has a very strong green color cast even if the 'Link RGB channels' option of STF is disabled.

In order to see the dust donuts one has to split the data into CFA channels with SplitCFA. The separated channels can be judged with STF Auto Stretch. With the above MF the Statistics look as follows ('Unclipped' option of Statistics disabled):

Code:
            MF              MF_CFA0         MF_CFA1         MF_CFA2         MF_CFA3
count (%)   100.00000       100.00000       100.00000       100.00000       100.00000
count (px)  11694368        2923592         2923592         2923592         2923592
mean        27879.936       19409.632       36101.452       36089.172       19919.488
median      32367.932       19356.783       36005.927       35992.244       19846.627
variance    68851118.203    641136.407      2022116.605     2014070.630     627450.905
stdDev      8297.657        800.710         1422.011        1419.180        792.118
avgDev      10296.331       843.728         1490.886        1487.439        827.184
MAD         15201.454       937.441         1648.974        1644.057        910.120
minimum     17190.099       17190.099       32367.932       32534.088       17866.335
maximum     39768.546       21401.955       39748.476       39768.546       22114.692
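
(For anyone who wants to reproduce this kind of per-channel inspection outside PixInsight: the following is a minimal sketch in Python/numpy, not PixInsight code; the file name "MF.fits" and the use of astropy to load it are assumptions.)

Code:
# Minimal sketch (not PixInsight code): split an RGGB Bayer mosaic into its four
# CFA channels by 2x2 subsampling and print basic per-channel statistics.
import numpy as np
from astropy.io import fits

mf = fits.getdata("MF.fits").astype(np.float64)   # bayered master flat, 2D array

cfa = {
    "CFA0 (R)": mf[0::2, 0::2],   # even rows, even columns
    "CFA1 (G)": mf[1::2, 0::2],   # odd rows,  even columns
    "CFA2 (G)": mf[0::2, 1::2],   # even rows, odd columns
    "CFA3 (B)": mf[1::2, 1::2],   # odd rows,  odd columns
}

for name, ch in cfa.items():
    print(f"{name}: mean={ch.mean():10.3f}  median={np.median(ch):10.3f}  "
          f"stdDev={ch.std():9.3f}  min={ch.min():10.3f}  max={ch.max():10.3f}")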


2) Color cast caused by Flat field correction
Let us recall how the flat correction is performed with OSC data in PixInsight:

Cal = (LF - MD) / MF * s0

Cal: Calibrated light frame
LF:  light frame
MD:  MasterDark
MF:  MasterFlat
s0:  Master flat scaling factor = mean(MF)

For OSC data, PixInsight uses one and the same s0 in the flat field correction for every channel. For the present MF, the overall factors for each channel are:
CFA0: s0/mean(MF_CFA0) = 1.44
CFA1: s0/mean(MF_CFA1) = 0.77
CFA2: s0/mean(MF_CFA2) = 0.77
CFA3: s0/mean(MF_CFA3) = 1.40

From these factors it is evident that the flat field correction shifts the color balance strongly towards violet when one compares the original light frames with the calibrated ones: whereas the uncalibrated light frames have a strong green color cast, the calibrated ones have a very strong red color cast.
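
(To make the arithmetic explicit, here is a tiny sketch, plain Python with the mean values taken from the statistics above, that computes the effective boost each CFA channel receives from the single s0.)

Code:
# Sketch of the effective per-channel scaling caused by a single s0
# (mean values copied from the MasterFlat statistics above).
means = {"CFA0": 19409.632, "CFA1": 36101.452, "CFA2": 36089.172, "CFA3": 19919.488}
s0 = 27879.936   # mean of the whole bayered MF; PixInsight uses it for all channels

for name, m in means.items():
    print(f"{name}: s0 / mean = {s0 / m:.2f}")
# -> about 1.44, 0.77, 0.77, 1.40: red and blue are boosted, green is attenuated,
#    which produces the color shift described above.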


3) Modified MasterFlat
Ideally, the flat field correction would not use one s0 for all CFA channels, but the mean of the respective channel for each channel separately. One can achieve this by modifying the original MF. Note that multiplying the channels of a MF by a factor is an allowed operation provided that
- no rescale is performed and
- no clipping of data occurs.

So I set up a PixelMath expression that modifies the MF in the desired way, i.e. it multiplies each CFA channel by the mean of the channel with the lowest signal (in this case CFA0) divided by the mean of the respective channel:

RGB/K:  iif(x()%2==0 && y()%2==0, $T, iif(x()%2==0 && y()%2!=0, m0/m1*$T, iif(x()%2!=0 && y()%2==0, m0/m2*$T, m0/m3*$T)))

Symbols: m0=mean(MF_CFA0), m1=mean(MF_CFA1), m2=mean(MF_CFA2), m3=mean(MF_CFA3)

Important: the 'Rescale result' option must be disabled!
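
(The same per-channel multiplication can also be written outside PixInsight; the following is a sketch in Python/numpy only, assuming the means m0...m3 have already been measured on the split channels, with no rescaling and no clipping.)

Code:
# Sketch (not PixInsight code) of the same modification as the PixelMath expression:
# multiply each CFA channel by m0 / m_channel and leave CFA0 unchanged.
# Purely multiplicative - no rescaling, no clipping.
import numpy as np

def normalize_rggb_flat(mf, m0, m1, m2, m3):
    out = mf.astype(np.float64).copy()
    out[1::2, 0::2] *= m0 / m1   # CFA1 (odd rows, even columns)
    out[0::2, 1::2] *= m0 / m2   # CFA2 (even rows, odd columns)
    out[1::2, 1::2] *= m0 / m3   # CFA3 (odd rows, odd columns)
    return out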


The statistics of the MF modified in this way are as follows:

Code:
            MF_mod_CFA0     MF_mod_CFA1     MF_mod_CFA2     MF_mod_CFA3
count (%)   100.00000       100.00000       100.00000       100.00000
count (px)  2923592         2923592         2923592         2923592
mean        19409.632       19409.632       19409.632       19409.632
median      19356.783       19358.273       19357.501       19338.635
variance    641136.407      584509.319      582579.815      595741.680
stdDev      800.710         764.532         763.269         771.843
avgDev      843.728         801.562         799.981         806.011
MAD         937.441         886.555         884.216         886.825
minimum     17190.099       17402.336       17497.622       17409.031
maximum     21401.955       21370.422       21388.488       21548.648

This modified MF (as a monochrome representation of the CFA data) can be used conveniently for judging the flat field correction, because the different illumination of the channels is balanced. The debayered modified MF does not show a strong color cast. There was, however, a slight color gradient visible: left side greenish, right side reddish. The modified MF does not cause a color cast when used for the flat field correction.


4) Influence on SNR
The most important question was whether the SNR of the end result is influenced by applying a normal MF (with different illumination of the channels) to OSC data. In the Cloudy Nights thread, some people assumed that the unmodified MF would degrade the SNR during integration (if I understood correctly). I did not find such an effect when I compared integrations of light frames calibrated with either MF, and I would like to hear the arguments of the experts.


Bernd
 

Attachment: MF.JPG (73.2 KB)
thank you for posting this - i had hoped the CN poster would ask over here and as you say s/he never did, so it is good to have juan's input on this. the SNR problem was also what was concerning to me, but i was too lazy to try to reproduce it.

rob
 
Hi Bernd - With so many experts here on the forum I am a little anxious about commenting. However, here goes.

I too have an OSC, but mine is a Canon DSLR (modified). I use an LED lightbox as the extended source. I had the same problem you have, with the different channels having significantly different signal levels. In my case the Green and Blue channels had significantly higher signal than the Red channel. To balance this out I put a piece of red cellophane (it actually looked pink) between the lightbox and the telescope. This brought the Red channel significantly closer to the G/B channels and has worked well. The advantage of this approach is that (1) it improves the SNR for the Red channel, and (2) I don't have to worry about how to rescale the Red while properly maintaining any offsets at the correct level (a DSLR problem).

Even though you properly rescale the weaker channel(s), the weaker channel(s) will still have a lower SNR. That is the only point I wish to make. The colored cellophane helps overcome this problem.

Thanks for looking,

Steve
 
i think #4 is the most important question. my answer to anyone that asks about creating grey OSC flats is that it doesn't matter - after all, if you are using a mono camera and filters no one ever tries to make sure the flats are balanced against one another, only that they are well exposed. i realize that "well exposed" usually ends up with each flat having a similar histogram, and thus if you made an RGB image out of the flats it would be nearly grey, but that is a side effect, not a goal.

the guy at CN was saying that the SNR of the red channel of the integrated image was lower if he did not first normalize his flat. i can believe that, because of the way PI weights integration input images, there could be something to this. however it looks like bernd could not reproduce that.

regardless, there's only one scaling factor computed for a CFA flat, so it does make sense that a dimmer flat channel will be artificially boosted during the subsequent division into the light. if that has a knock-on effect of creating a suboptimal integration for the weak-flat channel, then that's bad.

if it were not for this, i too would just say "hey just do BN and CC and forget about it, as long as you have enough flat subs such that the weak flat channel has a decent SNR". but if the SNR loss is real then it bears investigation.

rob
 
pfile said:
i think #4 is the most important question.
I agree.

@Steve:
I don't want to attenuate the blue and green channels with a red cellophane. I don't trust that such a plastic sheet is optically homogeneous - and if it is not, such an approach will introduce new artifacts into the calibrated images.

In my view, the SNR of the original MasterFlat should be high enough in all channels (because of the high signal in the MF) not to introduce relevant SNR differences into the channels of the calibrated light frames, regardless of whether the original mean is 19k or 36k in my example.


@Niall:
I understand very well what is causing the color shift in PI's flat field correction of OSC data, and I explained it in my post above in detail.

I think my PixelMath expression elegantly does the same job as a laborious process consisting of the following steps:
1) SplitCFA (of each light frame and the MF),
2) perform the flat field correction per channel and
3) then merge the channels of each calibrated light frame with MergeCFA.
In this process, each light frame has to be split and each calibrated light frame has to be merged (the MergeCFA process cannot be automated as of now!), and the calibration has to be performed on each channel (four times per light frame). So the advantage of my approach is the low effort: only the MF has to be split (in order to obtain the mean values of its CFA channels), and the correction (the PixelMath expression) is applied to the bayered MF - that's it. The rest is the normal calibration workflow using the modified MF, which produces a "color cast free" flat field correction.

-----

Whether the modification of a MF is useful or not in terms of producing a "color cast free" MF (e.g. for a convenient judgment of flat field correction results) is a matter of taste, but in the Cloudy Nights thread a different reason for the modification was expressed, and this was my primary goal for starting this thread: some people claimed that when a correction of the MasterFlat was NOT performed, a deterioration of the SNR of the weak channel in the calibrated light frames would result. It was presumed that there is a catch in subsequent processes, especially in the ImageIntegration process, that would influence the SNR of the final integration result. It is this point that I want to have elucidated. I guess it can only be answered by Juan.

I did not find the claimed effect when visually comparing the final integrations (one calibration performed with the normal MF, the other with the modified MF). However, it is not obvious to me how to accurately measure differences in the SNR of two integration results, and I really would like to know how to do that.

Bernd

 
I was the OP of the CN thread referenced here.

pfile said:
- after all, if you are using a mono camera and filters no one ever tries to make sure the flats are balanced against one another, only that they are well exposed.

Exactly, and that's the way it should be - but not the way it works with the current PI calibration of OSC data.  The idea is that the flat application should not change the median value of the data much, which is what happens with mono data.  But with PI's current calibration process for OSC data, the median values of each of the four CFA channels are changed, by effectively normalizing them to the mean value of the entire image (all four CFA channels lumped together).  This is obvious when you observe that the color balance of the calibrated OSC images shifts, and if the color balance of the flat is strongly non-neutral, the color balance of the calibrated OSC files can be shifted very substantially.

pfile said:
the guy at CN was saying that the SNR of the red channel of the integrated image was lower if he did not first normalize his flat. i can believe that because of the way PI weights integration input images, that there could be something to this. however it looks like bernd could not reproduce that.

I am not sure that SNR measurements of the integrated result capture the extent of the post-processing challenges I see with OSC data calibrated in PI.  What I frequently see is an abundance of blotchy red "noise" in the low-SNR areas of the integrated result that is difficult to reduce substantially.  I know from the comments of other OSC imagers that I am not the only person who experiences this.  MLT reduces this "noise" some, and ACDNR reduces it more, but it is always there in images from subs calibrated in PI normally.  Calibrating the light subs with a balanced master flat helps reduce this noise significantly.

As I said in the CN post, before PI I used Iris to pre-process OSC data. Iris was a very, very capable application, but is now very out of date. The final step in creating the master flat in Iris was to use the GREY_FLAT command, which split the CFA channels out, normalized them, then recombined them into a color-balanced master flat. Though I have never seen this documented as part of anyone's PI workflow, I know that some PI users split all calibration and light frames with SplitCFA and process each of the four channels separately, which accomplishes about the same thing, though it is pretty time-intensive.

Balancing the CFA master flat instead, as I documented, works for me. It is not important which CFA channel is chosen as the reference to normalize the other three channels. It is also effective in my experience to use MergeCFA to create a master flat in which only one of the four CFA channels is used in all four positions of the new master flat. The new master flat produced by either of these methods has much lower noise than the original master flat, as measured by PI's ContrastBackgroundNoiseRatio script, which is entirely predictable if you think about it, so the application of this balanced master flat will inject less noise into the calibrated lights - but the real benefit in my experience is that it does not boost the red channel, as an unmodified twilight sky flat does. Of course the integrated image will still need to be color balanced, but in my experience the blotchy red "noise" remains in the background following BN/CC or PhotometricColorCalibration.
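
(For illustration, the "single channel in all four positions" construction mentioned above amounts to the following; a sketch in Python/numpy, not the actual Iris or PixInsight implementation, assuming even image dimensions.)

Code:
# Sketch: build a "grey" bayered master flat in which the CFA0 data is used in
# all four positions of the 2x2 Bayer matrix (the MergeCFA variant described above).
import numpy as np

def grey_flat_from_cfa0(mf):
    cfa0 = mf[0::2, 0::2].astype(np.float64)
    out = np.empty(mf.shape, dtype=np.float64)
    out[0::2, 0::2] = cfa0
    out[1::2, 0::2] = cfa0
    out[0::2, 1::2] = cfa0
    out[1::2, 1::2] = cfa0
    return out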

When I made that post on CN I did not imagine that it would generate so much comment and controversy - but comment and controversy are good things.  I would like to see PI introduce an option in ImageCalibration for identifying the master flat as a CFA image (as it already has for the master dark), and that the master flat would then be applied to each of the four CFA channels in the light subs independently.  PI would not need to know the CFA pattern, only that the master flat (and by inference the light subs) is a CFA image.  I know this change would not be trivial, but I believe it would be valuable.

Cheers,

Don
 
What I frequently see is an abundance of blotchy red "noise" in the low-SNR areas of the integrated result that is difficult to reduce substantially.  I know from the comments of other OSC imagers that I am not the only person who experiences this.  MLT reduces this "noise" some, and ACDNR reduces it more, but it is always there in images from subs calibrated in PI normally.  Calibrating the light subs with a balanced master flat helps reduce this noise significantly.

Can you upload a data set where this problem can be reproduced? A flat frame is always applied using pixel-to-pixel linear arithmetic operations. Hence I can't figure out how a (valid, correctly acquired) flat frame could cause 'blotches'. From my experience, those blotchy artifacts are just the result of insufficient signal. The only way to remove them is to acquire more data, although this is not the answer most amateurs want to hear. I don't use comments or opinions, but verifiable facts based on actual data and reproducible analyses.

The new master flat produced by either of these methods has much lower noise than the original master flat

As long as you only apply linear operations to an image (hopefully multiplications, exclusively, in the case of a master flat) without truncation caused by numeric overflow/underflow, there is no way you can increase or decrease the noise in the image. The signal and the noise will be multiplied equally, which is a no-op in terms of the signal to noise ratio.
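
(A quick numerical illustration of this point; a sketch with synthetic data in Python/numpy, not PixInsight code: scaling a channel by a constant multiplies signal and noise identically, so their ratio is unchanged.)

Code:
# Sketch: multiplying by a constant leaves the signal-to-noise ratio unchanged.
import numpy as np

rng = np.random.default_rng(0)
channel = 20000.0 + rng.normal(0.0, 800.0, size=1_000_000)   # synthetic flat channel

for k in (1.0, 1.44, 0.77):
    scaled = k * channel
    print(f"k = {k:4.2f}:  mean/stdDev = {scaled.mean() / scaled.std():.3f}")
# The ratio is identical for every k (up to negligible floating point error).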

For accurate noise evaluations that can be compared consistently for different images see this post, where I provide a little script to compute scaled noise estimates.

I would like to see PI introduce an option in ImageCalibration for identifying the master flat as a CFA image (as it already has for the master dark), and that the master flat would then be applied to each of the four CFA channels in the light subs independently.

I will write this feature eventually, if only to stop so many complaints about color casts introduced by flat calibration with mosaiced data. Don't hold your breath, however, because I have a very long list of priorities before investing time in this. In my opinion, unless somebody can show evidence of the contrary, these color casts are no real problem: you can avoid them for visualization of linear data very easily with an unlinked STF, and they can be removed without problems using a color calibration tool correctly, especially since we implemented the PhotometricColorCalibration tool.

The purpose of a master flat frame is to correct for field illumination irregularities and pixel-to-pixel sensitivity variations, not to perform any color balancing of the raw data. If you multiply the flat frame by a constant (again, without truncation) nothing changes in terms of noise and signal. It is a purely cosmetic thing. After the necessary color calibration step, the result will be exactly the same (modulo roundoff and truncation errors, which are immaterial). I always stand to be corrected if wrong, of course, if you can show me a reproducible example of the contrary.
 
The new master flat produced by either of these methods has much lower noise than the original master flat

As long as you only apply linear operations to an image (hopefully multiplications, exclusively, in the case of a master flat) without truncation caused by numeric overflow/underflow, there is no way you can increase or decrease the noise in the image. The signal and the noise will be multiplied equally, which is a no-op in terms of the signal to noise ratio.

For accurate noise evaluations that can be compared consistently for different images see this post, where I provide a little script to compute scaled noise estimates.

Juan, I assume you did not understand my statement about the fact that balancing the color channels in the master flat reduces the noise in the master flat significantly - or perhaps I did not understand your reply.  When you use either of the methods I described to balance the four CFA channels, you are not multiplying all of the image pixels by a constant value, you are multiplying the pixels belonging to each position in a 2x2 pixel matrix by a different constant.  Of course this will change the noise level.  And if by performing this multiplication you are making the values of the four pixels in each 2x2 matrix closer to each other, of course the noise level will be lower in the result.

As measured by your ScaledNoiseEvaluation script, normalizing CFA channels 1, 2 and 3 in my twilight sky master flat with LinearFit using CFA0 as the reference results in a reduction of noise by a factor of 10.  And replacing CFA channels 1, 2 and 3 with CFA0 reduces the noise by a factor of 20. 

(Attached screenshot: mf_noise.jpg)


But we should not assign too much significance to the measured noise level in a CFA image, since adjacent pixels in CFA images represent effectively different signal components, due to the Bayer filter.

A flat frame is always applied using pixel-to-pixel linear arithmetic operations. Hence I can't figure out how a (valid, correctly acquired) flat frame could cause 'blotches'.

I did not state and did not mean to imply that the application of the flat frame causes the red "blotches".  I believe that light pollution plays a major role in that.  But the unnecessary boosting of the red channel's amplitude by the flat application seems to be a factor in making them more difficult to remove in post-processing.  I cannot express a theory that accounts for this, but I can make the statement based on my observations.

I will put together a data set and upload it later today.  Can you point me to a document describing the procedure to upload a large file to PI's server?  Or should I place the file on my own server?
 
Hi Don,

1) As I wrote already in the CN thread, one must not apply LinearFit to the channels of a MF. LinearFit uses multiplicative AND additive operations, and additive operations must not be executed on a MF, or it will be invalidated. If you want to modify the MF, you have to use only multiplicative operations.

2) Noise estimation on a still-bayered MF is not a reasonable judgement. The noise evaluation can be applied to RGB images or monochrome images, not to bayered "images" (OK, you can, but the results are meaningless). I had to learn as well that bayered "images" are not images at all, but only representations of raw data which are sometimes helpful.

Bernd
 
Hi Don,

I understood you. By performing the multiplications that you are describing, you are not changing anything in terms of signal and noise, as long as they don't lead to overflow. Hence, your normalized master flat frame has exactly the same signal and the same noise as the original frame. There is no way this could be otherwise with just multiplications; it's simple arithmetic.

The script that I linked in my previous post cannot be used on CFA frames. This is because the script assumes that the input image is an array of separate components (or a multichannel image, or a tensor). This is not true for a CFA frame, where four image components have been distributed sparsely over a single matrix. Obviously, this alters the spatial distribution of the noise with respect to the underlying RGB image, and hence you are getting meaningless results.

Try with this script instead (call it ScaledNoiseEvaluationBayerCFA):

Code:
/**
 * Estimation of the standard deviation of the noise, assuming a Gaussian
 * noise distribution.
 *
 * - Use MRS noise evaluation when the algorithm converges for 4 >= J >= 2
 *
 * - Use k-sigma noise evaluation when either MRS doesn't converge or the
 *   length of the noise pixels set is below a 1% of the image area.
 *
 * - Automatically iterate to find the highest layer where noise can be
 *   successfully evaluated, in the [1,3] range.
 *
 * Returned noise estimates are scaled by the Sn robust scale estimator of
 * Rousseeuw and Croux.
 */
function ScaledNoiseEvaluation( image )
{
   let scale = image.Sn();
   if ( 1 + scale == 1 )
      throw Error( "Zero or insignificant data." );
   
   let a, n = 4, m = 0.01*image.selectedRect.area;
   for ( ;; )
   {
      a = image.noiseMRS( n );
      if ( a[1] >= m )
         break;
      if ( --n == 1 )
      {
         console.writeln( "<end><cbr>** Warning: No convergence in MRS noise evaluation routine - using k-sigma noise estimate." );
         a = image.noiseKSigma();
         break;
      }
   }
   this.sigma = a[0]/scale; // estimated scaled stddev of Gaussian noise
   this.count = a[1]; // number of pixels in the noisy pixels set
   this.layers = n;   // number of layers used for noise evaluation
}

/*
 * Returns a Bayer CFA frame converted to a 4-channel image. Individual CFA
 * components are written to output channels 0, ..., 3 as follows:
 *
 * 0 | 1
 * --+--
 * 2 | 3
 *
 * Where CFA element #0 corresponds to the top left corner of the input frame.
 * The output image will have half the dimensions of the input frame.
 *
 * If specified, the k0, ..., k3 scalars will multiply their respective output
 * channels 0, ..., 3.
 */
function BayerCFAToFourChannel( image, k0, k1, k2, k3 )
{
   if ( k0 == undefined )
      k0 = 1;
   if ( k1 == undefined )
      k1 = 1;
   if ( k2 == undefined )
      k2 = 1;
   if ( k3 == undefined )
      k3 = 1;
   
   let w = image.width;
   let h = image.height;
   let w2 = w >> 1;
   let h2 = h >> 1;
   let rgb = new Image( w2, h2, 4 );
   for ( let y = 0, j = 0; y < h; y += 2, ++j )
      for ( let x = 0, i = 0; x < w; x += 2, ++i )
      {
         rgb.setSample( k0*image.sample( x,   y   ), i, j, 0 );
         rgb.setSample( k1*image.sample( x+1, y   ), i, j, 1 );
         rgb.setSample( k2*image.sample( x,   y+1 ), i, j, 2 );
         rgb.setSample( k3*image.sample( x+1, y+1 ), i, j, 3 );
      }
   return rgb;
}

#define K0 1.0
#define K1 1.0
#define K2 1.0
#define K3 1.0

function main()
{
   let window = ImageWindow.activeWindow;
   if ( window.isNull )
      throw new Error( "No active image" );

   console.show();
   console.writeln( "<end><cbr><b>" + window.currentView.fullId + "</b>" );
   console.writeln( "Scaled Noise Evaluation Script - Bayer CFA Version." );
   console.writeln( "Calculating scaled noise standard deviation..." );
   console.flush();

   console.abortEnabled = true;

   let image = BayerCFAToFourChannel( window.currentView.image, K0, K1, K2, K3 );
   console.writeln( "<end><cbr>Ch |   noise   |  count(%) | layers |" );
   console.writeln(               "---+-----------+-----------+--------+" );
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      console.flush();
      image.selectedChannel = c;
      let E = new ScaledNoiseEvaluation( image );
      console.writeln( format( "%2d | <b>%.3e</b> |  %6.2f   |    %d   |", c, E.sigma, 100*E.count/image.selectedRect.area, E.layers ) );
      console.flush();
   }
   console.writeln(               "---+-----------+-----------+--------+" );
}

main();

This script can only be used on monochrome Bayer CFA frames (it cannot be used on XTrans CFA frames). To facilitate your tests, I have included the possibility to apply four different multiplying factors, namely K0, K1, K2 and K3, to their respective Bayer CFA components. You can vary these multiplying factors as you want (their default values are 1.0); no matter how you vary them, the resulting noise estimates will be identical, as expected.
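
(The same invariance can be cross-checked quickly outside PixInsight. The following sketch uses synthetic data and a crude MAD-based estimate in Python/numpy instead of the MRS estimator: the per-channel estimates, taken relative to each channel's signal level, do not change when the CFA channels are multiplied by different constants, whereas an estimate computed on the interleaved mosaic as a whole changes drastically, because it mostly measures the level differences between the channels.)

Code:
# Sketch (synthetic data; a crude MAD-based stand-in for the MRS estimator).
import numpy as np

def mad(a):
    return 1.4826 * np.median(np.abs(a - np.median(a)))

rng = np.random.default_rng(1)
h = w = 1000
mosaic = np.empty((h, w))
mosaic[0::2, 0::2] = rng.normal(19400, 800,  (h // 2, w // 2))   # CFA0 (R)
mosaic[1::2, 0::2] = rng.normal(36100, 1400, (h // 2, w // 2))   # CFA1 (G)
mosaic[0::2, 1::2] = rng.normal(36090, 1400, (h // 2, w // 2))   # CFA2 (G)
mosaic[1::2, 1::2] = rng.normal(19900, 800,  (h // 2, w // 2))   # CFA3 (B)

for label, (k1, k2, k3) in (("unmodified", (1.0, 1.0, 1.0)),
                            ("balanced",   (19400/36100, 19400/36090, 19400/19900))):
    m = mosaic.copy()
    m[1::2, 0::2] *= k1
    m[0::2, 1::2] *= k2
    m[1::2, 1::2] *= k3
    channels = (m[0::2, 0::2], m[1::2, 0::2], m[0::2, 1::2], m[1::2, 1::2])
    rel = ["%.4f" % (mad(c) / np.median(c)) for c in channels]
    print(f"{label:10s}  per-channel MAD/median: {rel}   whole-mosaic MAD: {mad(m):.1f}")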

I did not state and did not mean to imply that the application of the flat frame causes the red "blotches".  I believe that light pollution plays a major role in that.  But the unnecessary boosting of the red channel's amplitude by the flat application seems to be a factor in making them more difficult to remove in post-processing.

After calibration, debayering, registration, integration and drizzle integration (drizzle is the *only* way to achieve an optimal result with CFA raw data IMO), you'll have to apply color calibration to the linear image, as a necessary first processing step (IMO, color calibration should actually be considered as part of the preprocessing stage). Color calibration will multiply each color channel by a constant scalar. Background neutralization will subtract a constant scalar from each channel. Both operations consist purely of linear arithmetic operations. Irrespective of how you multiplied the master flat frame before calibration, the starting image for processing, after color calibration and background neutralization, will be the same. Hence I don't understand the added difficulty that you are describing.

Since many years ago, I never read public astronomy forums other than PixInsight Forum, let alone participating in them. So I cannot know, and don't want to know, what is being said about PixInsight on the CN forum thread you are referring to. Speaking of technical image processing topics, I am only interested in verifiable facts, not in opinions or impressions.
 