Reading ADU from raw files.

V0LDY

Hi, I'm trying to write a script that extracts pixel values from an image to detect in-camera noise reduction algorithms in DSLRs.
I need to read the raw ADU values for each channel, so what I've done is import the RAW file as a "raw Bayer CFA image" in order to get an RGB image where each pixel's channels are either 0 or the single color value.
I then read the values with the image.sample function targeting the window of the imported image.

So far so good: I get the pixel values, but I have doubts about the data I'm getting.
First of all: am I really seeing the sensor ADUs? I'm pretty sure not, because I tried with different cameras and I always get the classic 0.#### values in the viewer, even with 8-bit JPEG files.
Second, when I print the pixel values with console.write("ADU: ", image.sample(x, y, 0)), the values aren't 0.#### but a 0 followed by about 16 digits (which I guess are either random or leftover memory contents appended as-is) that would need to be truncated to the 4th decimal place to be meaningful.

So... what's the best way to read actual ADUs from the sensor if possible?
 
The Image object of our JavaScript runtime is an abstraction of an actual image living in the core PixInsight application. When you read or write pixel samples to an Image object you always work in the normalized [0,1] range using 64-bit floating point numbers, irrespective of the actual format that the image uses internally to store its pixel data.

To obtain pixel values as real 16-bit numbers, or ADU, simply multiply what you get from an Image object by 65535. For example:

console.writeln( format( "ADU: %.0f", image.sample( x, y, 0 )*65535 ) );

It's just that simple. Let me know if this is what you want.
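
For a slightly fuller example (the use of the active window and the coordinates here are just arbitrary assumptions on my part), something along these lines should print ADU values for a small region:

// Hedged sketch: print ADU values for a small region of the active image.
// The window, the channel and the coordinates are arbitrary examples.
var window = ImageWindow.activeWindow;
if ( !window.isNull )
{
   var image = window.mainView.image;
   for ( var y = 100; y < 104; ++y )
      for ( var x = 100; x < 104; ++x )
         console.writeln( format( "(%4d,%4d) ADU: %5.0f", x, y, image.sample( x, y, 0 )*65535 ) );
}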
 

I suspected something like that was going on, but I can't get my head around the 65535 multiplication factor.
Wouldn't that be correct only for a 16-bit image? Should I change the value for each camera I want to test in order to get a correct ADU reading?
 
It depends on what you want to do with the data.

The data in a proprietary raw format of a regular digital camera contain intensity values at the bit depth of the analog-to-digital converter (ADC), e.g. for a 14-bit ADC in the range 0 to 2^14 - 1 = 16383. In this case, the true intensity value is always obtained by multiplying the normalized value by the factor 65535.
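
For a quick plausibility check (just a sketch; it assumes the raw file has been loaded and its Image object is available as image), the maximum sample of genuine 14-bit raw data should not exceed 16383 after the multiplication:

// Hedged sketch: verify that raw-format data stay within the ADC range.
// 'image' is assumed to be the CFA image loaded from the proprietary raw file.
var maxDN = Math.round( image.maximum()*65535 );
console.writeln( format( "Maximum DN: %d (a 14-bit ADC cannot exceed 16383)", maxDN ) );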

However, in FITS files the data usually are scaled to 16 bit. For a camera with a 14-bit ADC this means the data are multiplied by a factor of 4, so the range becomes 0 to 2^16 - 1 = 65535. There are very few exceptions to the rule that data in FITS files are scaled to [0,65535]; some cameras made by Moravian Instruments don't scale the data. You can check whether the data are scaled by inspecting the histogram in HistogramTransformation with a high horizontal zoom (400) and '16-bit (64K)' levels: scaled data exhibit gaps in the histogram.
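
That histogram check can also be scripted; the following is only a sketch of the idea (the 64x64 patch and the greatest-common-divisor heuristic are my own assumptions): a constant step of 4 between the distinct DN values indicates x4 scaling, a step of 1 indicates unscaled data.

// Hedged sketch: find the common step between distinct DN values in a patch.
// Data scaled by 4 yield a step of 4; unscaled data yield a step of 1.
function gcd( a, b ) { return (b == 0) ? a : gcd( b, a % b ); }
var seen = {};
for ( var y = 0; y < 64; ++y )
   for ( var x = 0; x < 64; ++x )
      seen[Math.round( image.sample( x, y, 0 )*65535 )] = true;
var dns = [];
for ( var k in seen )
   dns.push( parseInt( k ) );
dns.sort( function( a, b ) { return a - b; } );
var step = 0;
for ( var i = 1; i < dns.length; ++i )
   step = gcd( step, dns[i] - dns[i-1] );
console.writeln( "Common step between DN values: ", step );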

If you want to calculate, e.g., the real conversion gain (in electrons per DN) of your camera that outputs FITS files, you will have to take into account the scaling applied by the camera driver. For FITS files coming from a camera with a 14-bit ADC, this normally means that multiplication by the factor 2^14 - 1 = 16383 will yield the true intensity values.
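
To make the arithmetic explicit (my own illustration, with image, x and y assumed as in the snippets above): if the driver scales 14-bit data by 4, a normalized sample s corresponds to a stored value of s*65535 and a true value of s*65535/4 = s*16383.75, which is the factor of roughly 16383 mentioned above.

// Hedged sketch: undo an assumed x4 driver scaling of 14-bit data.
// stored DN = true DN * 4, so dividing the 16-bit value by 4 recovers
// the true intensity (equivalently, multiply the normalized sample by
// 65535/4 = 16383.75, rounded to 16383 in the text above).
var storedDN = image.sample( x, y, 0 )*65535;
var trueDN = storedDN/4;
console.writeln( format( "stored DN: %.0f, true DN: %.0f", storedDN, trueDN ) );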

As long as you want to process the data to obtain an image, the only crucial point is that the SAME factor is used for light frames and all calibration frames. The end result will be scaled as well, but for the appearance of the image this scaling is meaningless.

Bernd
 

I wrote a script that plots graphs to help visualize whether a DSLR has baked-in noise reduction algorithms. It actually works the same even if I use the uncorrected PixInsight values; it's just that having the true ADUs would make the graphs more readable. But I guess I'd need to know the exact bit depth of each camera's output to get the correct scale.
 
As I wrote above: if you evaluate files in a proprietary raw format (e.g. Canon: CR2 or CR3, Nikon: NEF, Sony: ARW, Fujifilm: RAF, Pentax: PEF or Adobe's DNG format) of a regular digital camera, use the multiplication factor of 65535. This will provide the values that are stored in the file -- there is no need for a further correction.

Whether the data in the file are indeed "raw" (i.e. unprocessed) is a different question.

I used Canon cameras (EOS 20D and EOS 600D) and evaluated files from several Canon digital cameras. As far as I can tell, Canon does not seem to apply algorithms that distort the raw data.

However, Nikon and Sony (and possibly more manufacturers) definitely apply algorithms that

- suppress hot pixels,
- apply digital spatial filtering,
- apply white balance, or
- apply lossy (irreversible) compression.

The latter case can be detected by inspecting the histogram (see post #5 of this thread). Gaps of varying width in the histogram indicate this kind of in-camera preprocessing.
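
In the same spirit as the step check sketched earlier in this thread, one could print the individual gap widths instead of their common step; this is only a heuristic of mine, but irregular widths would point to lossy compression rather than plain scaling:

// Hedged sketch: print the widths of the gaps between distinct DN values.
// A constant width suggests scaling; varying widths suggest lossy compression.
var seen = {};
for ( var y = 0; y < 64; ++y )
   for ( var x = 0; x < 64; ++x )
      seen[Math.round( image.sample( x, y, 0 )*65535 )] = true;
var dns = [];
for ( var k in seen )
   dns.push( parseInt( k ) );
dns.sort( function( a, b ) { return a - b; } );
var widths = [];
for ( var i = 1; i < dns.length; ++i )
   widths.push( dns[i] - dns[i-1] );
console.writeln( "Gap widths: ", widths.join( " " ) );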

Note that these algorithms are applied in the camera and not on the sensor. Of course the data resulting from the application of such algorithms are not "raw" any more. In my view, affected camera models are not suited for astrophotography, because a correct image calibration is made impossible.

Bernd
 