Author Topic: Flat Fields in processing dslr images  (Read 11455 times)

Offline dancolesworthy

  • Newcomer
  • Posts: 12
Flat Fields in processing dslr images
« on: 2012 February 12 17:27:07 »
I think I understand that during calibration of "lights", the final step is flat fielding, which, roughly stated, is division of the net light image (light - dark - bias) by the calibrated master flat.  As I understand it, before the division occurs each pixel in the master flat is calculated as:

PFlat[i,j] = (PFlat[i,j] - min(PFlat)) / (max(PFlat) - min(PFlat))

Berry and Burnell in "The Handbook of Astronomical Image Processing" suggest that for cfa images, this should occur separately for each color channel in the cfa array. 

The process described in Master Calibration Frames: Acquisition and Processing does not cover this option, nor does it appear to happen during light calibration.  A few "hot" pixels leaked through the pixel rejection algorithms during flat frame calibration and integration, and they adversely affect the overall color balance of the calibrated, integrated lights.  Specifically, a few "hot" pixels confined to the red channel of the master flat skew the subsequent flat frame scaling.

So there are really two questions here:  1)  How can per-color-channel flat fielding be performed in PixInsight, and 2)  is there a "simple" way to fix the hot pixels in the master flat frame?

Thanks in advance.

Dan Colesworthy

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: Flat Fields in processing dslr images
« Reply #1 on: 2012 February 12 18:03:07 »
when you say "this should occur separately for each color channel..." do you mean the computation of the normalization should happen 3 times? just trying to understand what you've written.

hot pixels in the flat subs should be taken care of during calibration of the flat, no? if necessary calibration of the flat subs can be done separately with matching dark flats.

Offline dancolesworthy

  • Newcomer
  • Posts: 12
Re: Flat Fields in processing dslr images
« Reply #2 on: 2012 February 13 12:50:32 »
Yes, the computation should occur 3 times, once for each color channel.  The min(PFlat) and max(PFlat) should be computed separately for each color channel in the color filter array.  In the case of my DSLR (5D MkII), the pattern is RGGBRGGB, which translates into
Row 0:  RGRG...
Row 1:  GBGB...
Row 2:  RGRG...
Row 3:  GBGB...

I have attached a histogram of the master flat with individual color channels extracted.  There is roughly a full stop difference between the sensitivity of the green channels and the red channel, with the blue channel in between.
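For illustration (this is an editor's sketch with synthetic data, not PixInsight code), the per-channel rescaling described above could look like this in NumPy, splitting an RGGB mosaic into its four CFA sub-channels and normalizing each one independently:

```python
import numpy as np

# Synthetic stand-in for a monochrome CFA master flat (assumption:
# real data would come from the calibrated flat, not random numbers).
rng = np.random.default_rng(0)
cfa = rng.uniform(0.2, 0.9, size=(8, 8))

channels = {
    "R":  cfa[0::2, 0::2],   # rows 0, 2, ...: R G R G ...
    "G1": cfa[0::2, 1::2],
    "G2": cfa[1::2, 0::2],   # rows 1, 3, ...: G B G B ...
    "B":  cfa[1::2, 1::2],
}

normalized = {}
for name, ch in channels.items():
    lo, hi = ch.min(), ch.max()
    # per-channel rescale to [0,1], using that channel's own min/max
    normalized[name] = (ch - lo) / (hi - lo)
```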

Dan

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Re: Flat Fields in processing dslr images
« Reply #3 on: 2012 February 13 13:23:41 »
DeepSkyStacker calibrates like that. Nebulosity depends on a 2x2 filter to essentially average the colors. There is no particular reason to flatten each channel separately from a color sensitivity perspective. As you said, each color gets normalized, so the absolute flat values are immaterial. What matters, of course, is the SNR, which is why you want your flats to be white: each color with comparable signal levels and therefore comparable SNR.

The only reason you would process each color separately while flattening is if your optical train has color dependent elements in it. A dust mote could, for example, be more transparent in blue than in red. I am not convinced this is a practical concern. I think that a single identical master flat frame for all 3 channels is sufficient. If you have an example that shows where channel dependent flat frames are a benefit I'd be interested to see it, of course. I've read HAIP :)

As for the 'full stop difference' statement I think you may want to forget about terrestrial photography terms. They aren't used a whole lot for astro photography. A certain channel simply has x % higher ADUs. Much more exact. I'm also not convinced this says anything about relative sensitivity unless you are certain the source was true white light. In any event, DSLR sensors are generally optimized for green sensitivity. In addition you get twice the pixels so green is very well represented. Not very useful for astro photography but nice for terrestrial work.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PIxInsight, DeepSkyStacker, PHD, Nebulosity

Offline dancolesworthy

  • Newcomer
  • Posts: 12
Re: Flat Fields in processing dslr images
« Reply #4 on: 2012 February 13 16:59:42 »
The original question had to do with eliminating a few "hot pixels" from the master flat.  Here is a 5x5 array extracted from the master flat and converted back to ADU values, followed by CFA colors:

[[  337,   795,   381,   759,   340],              R,G,R,G,R
 [  767,   593,   747,   555,   768],              G,B,G,B,G
 [  337,   721, 12138,   779,   346],              R,G,R,G,R
 [  754,   576,   737,   557,   713],              G,B,G,B,G
 [  391,   747,   384,   741,   364]]              R,G,R,G,R

Hopefully, the hot pixel is obvious.  It is a red pixel, and no, the same pixel isn't "hot" in the darks and is only "warm" (2084) in the bias frames.

Quote
There is no particular reason to flatten each channel separately from color sensitivity perspective. As you said, each color gets normalized so the absolute flat values are immaterial. What matters is of course the SNR which is why you want your flats to be white, each color with comparable signal levels and therefore SNR.

The only reason you would process each color separately while flattening is if your optical train has color dependent elements in it. A dust mote could, for example be more transparent in blue than red. I am not convinced this is a practical concern. I think that a single identical master flat frame for all 3 channels is sufficient. If you have an example that shows where channel dependent flat frames are a benefit I'd be interested to see it of course. I've read HAIP

Assuming a minimum value of 0 elsewhere in the array, if you divide the above values by 12138 you get "flat field normalization factors" for each pixel.  Proceed to divide the original values by those normalization factors and the result will be an array with all values set to 12138.  Yes, it is flat, but the original relationship between the color channel luminosity values is lost.  Even if we "fix" the hot pixel, flattening will distort the color channel luminosity relationship.
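Using the 5x5 patch quoted earlier, the effect is easy to reproduce numerically (an illustrative editor's sketch, not PixInsight code):

```python
import numpy as np

# The 5x5 master flat patch in ADU. Dividing by the global max (the
# 12138 hot pixel) yields per-pixel "normalization factors"; dividing
# the original values by those factors then drives every pixel to
# 12138, erasing the relative color channel levels.
patch = np.array([
    [337,  795,   381, 759, 340],
    [767,  593,   747, 555, 768],
    [337,  721, 12138, 779, 346],
    [754,  576,   737, 557, 713],
    [391,  747,   384, 741, 364],
], dtype=float)

factors = patch / patch.max()    # "flat field normalization factors"
flattened = patch / factors      # every element becomes 12138
```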

Quote
As for the 'full stop difference' statement I think you may want to forget about terrestrial photography terms. They aren't used a whole lot for astro photography. A certain channel simply has x % higher ADUs. Much more exact. I'm also not convinced this says anything about relative sensitivity unless you are certain the source was true white light. In any event, DSLR sensors are generally optimized for green sensitivity. In addition you get twice the pixels so green is very well represented. Not very useful for astro photography but nice to terrestrial work.

You are correct about the stop statement.  However, if you will look at the data above you will see that the green channel values are in fact about 2x the red values, and near 1.5x the blue values.  This corresponds pretty closely to the "daylight" color balance factors in the original Canon CR2 file exif data, and also corresponds with quantum efficiency measurements of the 5DMkII's sensor published by Christian Buil at http://www.astrosurf.com/buil/50d/test.htm

In any event, back to the original question.  Is there a way to find and fix hot pixels such as the one illustrated above in PixInsight?  I am only a couple of weeks into the learning curve at this point, and although I have searched/read a lot of threads on the forum and checked at least some of the tutorials, I haven't found an answer yet.

Thanks in advance,

Dan

Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
Re: Flat Fields in processing dslr images
« Reply #5 on: 2012 February 13 19:53:37 »
http://pixinsight.com/forum/index.php?topic=1828.0

do you think these hot pixels are the result of cosmic ray hits or something? it seems strange that they are not in the dark or bias.

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: Flat Fields in processing dslr images
« Reply #6 on: 2012 February 14 04:51:26 »
Flat field frames do not need to be "normalized" like that. All you need to do is subtract the bias and/or dark. Usually, to preserve the total flux of the lights, the flat is normalized by dividing it by its mean or median value. This way, a normalized flat has values both above and below 1.0.
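As an illustrative sketch with synthetic stand-in data (an actual master flat would already be bias/dark-subtracted at this point), median normalization keeps the flat's average level near 1.0 so the flat-field division preserves the light's overall flux:

```python
import numpy as np

# Synthetic stand-ins for a master flat and a calibrated light
# (assumption: real frames would replace these random arrays).
rng = np.random.default_rng(1)
flat = rng.uniform(0.4, 0.6, size=(16, 16))

flat_norm = flat / np.median(flat)    # values scatter around 1.0

light = rng.uniform(0.1, 0.3, size=(16, 16))
light_flattened = light / flat_norm   # average level is preserved
```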

I don't see the purpose of subtracting the minimum value, since that destroys the multiplicative relationship between the lights and the flats.

For CFA cameras, you don't need to work on separate channels. Just work with the gray Bayered data. In fact, this procedure should guarantee the highest correlation between your lights and all the calibration frames.
Hot pixels may be removed with DefectMap or with CosmeticCorrection. But since hot pixels are pretty consistent between frames, I would not remove them from the calibration frames, only from the calibrated lights (since they'll have the same hot pixels).
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Flat Fields in processing dslr images
« Reply #7 on: 2012 February 16 03:17:51 »
Quote
PFlat[i,j] = (PFlat[i,j] - min(PFlat)) / (max(PFlat) - min(PFlat).  before the division occurs.

This describes a rescaling operation to the [0,1] range. That is incorrect if applied to a flat frame. If this is done, the original illumination profile in the flat frame is lost, which renders it useless.

The standard light frame calibration process applied in our ImageCalibration tool can be described as the following equation (ignoring overscan correction for simplification):

Ical = (I - B - OptimizeDark( D, I, B )) * Mean( F ) / Max( F, tiny )

where Ical is the output calibrated light frame, I is the input raw light frame, B is the master bias frame, D is the master dark frame, and F is the master flat frame. OptimizeDark() is a function applied to the master dark frame to minimize noise induced in the calibrated image. Dark optimization (aka dark scaling) depends on the master dark, master bias and raw light frame. In the current version of our ImageCalibration tool, OptimizeDark() is a linear function, but this will change very soon.

Mean() is the arithmetic mean of an image. Multiplication by the flat's mean makes the flat fielding division independent of the flat's levels. Only the illumination profile is important in the flat fielding process. Instead of the mean, one can use a more robust statistic to represent the true average level of the master flat frame. In our implementation we compute the mean exclusively for master flat pixels within the [0.00002, 0.99998] range. This excludes dead and hot pixels, which makes the process marginally more robust. Although we tend to favor robust statistical methods in all our implementations, using the mean for flat normalization is standard practice, and we have adhered to it in this case. Maybe we'll revise this in future versions.

Finally, Max( F, tiny ) prevents divisions by zero, which may occur due to dead pixels and similar artifacts in the master flat frame. tiny is a very small but significant number. All computations are performed in double precision arithmetic for all supported data types.
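A minimal sketch of this equation follows (an editor's illustration, not the actual ImageCalibration implementation; OptimizeDark() is simplified to an identity, i.e. a dark scaling factor of 1.0, which is an assumption):

```python
import numpy as np

TINY = 1e-15  # small floor guarding the flat-field division

def calibrate(I, B, D, F):
    # Net light: I - B - OptimizeDark(D, I, B). Dark optimization is
    # replaced here by the unscaled master dark (simplifying assumption).
    net = I - B - D
    # Multiply by Mean(F) so the result is independent of the flat's
    # level; Max(F, tiny) prevents division by zero at dead pixels.
    return net * F.mean() / np.maximum(F, TINY)
```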

For color or multichannel data, this process is applied independently to each channel. Image channels are treated as independent images and calibrated separately. If the master flat frame is a single-channel image and one or more light frames are RGB images, the single flat channel is applied to all RGB channels. However, this is, in general, incorrect: each channel must be calibrated separately. For DSLR CFA data, raw frames can be in two different formats:

- Color CFA format. In this format each component of the CFA is stored as an independent channel of an RGB image: the red and blue channels are 75% black pixels, and the green channel is 50% black pixels. This format has the advantage that dark frame optimization can be computed more accurately. Its drawback is that it requires three times the storage space of a monochrome CFA.

- Monochrome CFA format. The CFA image is just the original raw frame acquired by the camera: a single-channel image with the CFA pattern superimposed.

Both formats are equivalent and available in the DSLR_RAW Preferences dialog box. For OSC CCD images, only the monochrome CFA format is available, AFAIK.
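The relationship between the two formats can be sketched as follows for an RGGB pattern (illustrative editor's code, not PixInsight's internal representation):

```python
import numpy as np

def mono_cfa_to_color_cfa(cfa):
    """Expand a monochrome RGGB mosaic into the 3-channel color CFA
    format: each channel keeps only its own CFA samples; the rest
    of its pixels stay black (zero)."""
    h, w = cfa.shape
    rgb = np.zeros((3, h, w), dtype=cfa.dtype)
    rgb[0, 0::2, 0::2] = cfa[0::2, 0::2]   # R: 75% of pixels black
    rgb[1, 0::2, 1::2] = cfa[0::2, 1::2]   # G, even rows
    rgb[1, 1::2, 0::2] = cfa[1::2, 0::2]   # G, odd rows: 50% black total
    rgb[2, 1::2, 1::2] = cfa[1::2, 1::2]   # B: 75% of pixels black
    return rgb
```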

Quote
a 2x2 filter to essentially average the colors.

Again, this is incorrect. Flat fielding must be applied pixel-by-pixel. The master flat frame (if correctly acquired and generated) contains illumination variations at the pixel level that not only depend on the optical train, but on the sensor. If a low-pass filter (such as an averaging filter) is applied to a flat frame, all pixel-to-pixel variations will be destroyed.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Flat Fields in processing dslr images
« Reply #8 on: 2012 February 16 04:14:22 »
Quote
The original question had to do with eliminating a few "hot pixels" from the master flat.  Here is a 5x5 array extracted from the master flat and converted back to ADU values, followed by CFA colors:

[[  337,   795,   381,   759,   340],              R,G,R,G,R
 [  767,   593,   747,   555,   768],              G,B,G,B,G
 [  337,   721, 12138,   779,   346],              R,G,R,G,R
 [  754,   576,   737,   557,   713],              G,B,G,B,G
 [  391,   747,   384,   741,   364]]              R,G,R,G,R

Hopefully, the hot pixel is obvious.  It is a red pixel, and no, the same pixel isn't "hot" in the darks and is only "warm" (2084) in the bias frames.

Hot pixel undercorrection is a drawback of our current dark frame optimization algorithm. Dark frame optimization minimizes the noise induced by master dark frame subtraction in the (bias-subtracted) light frame. However, due to the nonlinearity of hot pixels, the computed dark scaling factor is always incorrect for them, which leads to undercorrection. Sometimes a dark scaling factor greater than one leads to overcorrection, which seems to be what has happened in your example. We think the benefits of dark frame optimization are well worth leaving some bad pixels unfixed; they should be rejected during the integration process anyway.

We have a new dark frame optimization algorithm tested and ready for implementation, which we call multipoint dark frame optimization. It will compute a nonlinear scaling curve and hence these problems will be gone in most cases.

The workaround, however, is rather simple (most of the time). On one hand, all hot and cold pixels should be rejected during integration, provided you dither your exposures sufficiently. As Carlos says, to fix any remaining wild pixels in the final integrated image you can use the DefectMap or CosmeticCorrection tools.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: Flat Fields in processing dslr images
« Reply #9 on: 2012 February 18 03:13:17 »
I'd rather correct the bad pixels before registration and integration. This should prevent artefacts like lines in the images (from the drift of hot pixels that were not rejected).
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com