Topics - bulrichl

1
Gallery / Soul Nebula (IC 1848)
« on: Today at 05:26 »
This is the first image that I want to show in this forum: emission nebula IC 1848 in Cassiopeia, also known as the Soul Nebula. Captured with a Takahashi FSQ 106 and ZWO ASI071, gain 90, offset 65, T = -10 °C, 154 x 5 min = 12 h 50 min, from my terrace in Tijarafe, La Palma, on 19, 26 and 29 November 2019.

https://www.dropbox.com/s/tycm3i0mqwh2wgk/IC1848.jpg

Complete processing with PixInsight, THE excellent software for the processing of astro images. At this point I want to express my gratitude to Juan Conejero for creating this wonderful program.

Bernd

2
Some time ago there was a thread, "Preparing color-balanced OSC flats in PixInsight" (https://www.cloudynights.com/topic/667524-preparing-color-balanced-osc-flats-in-pixinsight/), at the Cloudy Nights forum. Unfortunately, no consensus was reached and the thread came to nothing. The original poster also has not answered my recent private messages. So I think this topic is appropriate to be discussed here.

The main questions that were raised in the cited thread were:
1) How can we judge a mosaiced MasterFlat? (e.g. comparison of uncalibrated light frames with calibrated ones and with the MasterFlat)
2) Why does flat field correction of OSC data cause a very strong color cast?
3) How can the strong color cast be avoided?
4) Does the strong color cast after normal flat field correction of OSC data degrade the SNR of the images?


Attached are a typical MasterFlat of my ASI294MC Pro (gain 120, offset 30, -10 °C, flat field box undimmed, 40 flat frames, exposure time 0.003 s; the flat frames were calibrated with a MasterFlatDark and integrated) and its histogram.

The Bayer/mosaic pattern of the camera is RGGB. PixInsight's assignment of the CFA channels is:
Code:
0  2    R  G
1  3    G  B
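As an illustration, the channel extraction that SplitCFA performs (see below) can be sketched with NumPy array slicing. This is a minimal sketch, not PixInsight's implementation; the function name split_cfa is made up for this example.

```python
import numpy as np

def split_cfa(mosaic):
    """Split a Bayer mosaic into the four CFA channels using the
    index assignment shown above: CFA0 = top-left, CFA1 = bottom-left,
    CFA2 = top-right, CFA3 = bottom-right of each 2x2 cell."""
    cfa0 = mosaic[0::2, 0::2]  # R (for an RGGB pattern)
    cfa1 = mosaic[1::2, 0::2]  # G
    cfa2 = mosaic[0::2, 1::2]  # G
    cfa3 = mosaic[1::2, 1::2]  # B
    return cfa0, cfa1, cfa2, cfa3

# A single 2x2 RGGB cell as a toy example
m = np.array([[10.0, 20.0],
              [30.0, 40.0]])
c0, c1, c2, c3 = split_cfa(m)
print(c0[0, 0], c1[0, 0], c2[0, 0], c3[0, 0])  # 10.0 30.0 20.0 40.0
```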


1) Judgement of the Flat field correction / the MasterFlat
The MasterFlat cannot be judged from this representation: the Bayered (mosaiced) representation does not show any dust donuts because of the large intensity difference between CFA1/CFA2 (green, strong) and CFA0/CFA3 (red and blue, weak). Applying an STF Auto Stretch worsens the situation, the result being dimmer and not more contrasty.

In the debayered MasterFlat dust donuts can be made out, but the image has a very strong green color cast even if the 'Link RGB channels' option of STF is disabled.

In order to see the dust donuts one has to split the data into CFA channels with SplitCFA. The separated channels can be judged with STF Auto Stretch. With the above MF the Statistics look as follows ('Unclipped' option of Statistics disabled):

Code:
            MF            MF_CFA0      MF_CFA1      MF_CFA2      MF_CFA3
count (%)   100.00000     100.00000    100.00000    100.00000    100.00000
count (px)  11694368      2923592      2923592      2923592      2923592
mean        27879.936     19409.632    36101.452    36089.172    19919.488
median      32367.932     19356.783    36005.927    35992.244    19846.627
variance    68851118.203  641136.407   2022116.605  2014070.630  627450.905
stdDev      8297.657      800.710      1422.011     1419.180     792.118
avgDev      10296.331     843.728      1490.886     1487.439     827.184
MAD         15201.454     937.441      1648.974     1644.057     910.120
minimum     17190.099     17190.099    32367.932    32534.088    17866.335
maximum     39768.546     21401.955    39748.476    39768.546    22114.692


2) Color cast caused by Flat field correction
Let us recall how the flat correction is performed with OSC data in PixInsight:

Cal = (LF - MD) / MF * s0

Cal: Calibrated light frame
LF:  light frame
MD:  MasterDark
MF:  MasterFlat
s0:  Master flat scaling factor = mean(MF)

For OSC data, PixInsight uses one and the same s0 in the flat field correction for all channels. For the present MF, the overall factors for each channel are:
CFA0: s0/mean(MF_CFA0) = 1.44
CFA1: s0/mean(MF_CFA1) = 0.77
CFA2: s0/mean(MF_CFA2) = 0.77
CFA3: s0/mean(MF_CFA3) = 1.40
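These factors can be reproduced directly from the Statistics table above (a small sketch; the numbers are simply copied from the table):

```python
# Channel means and overall mean taken from the Statistics table above
means = {'CFA0': 19409.632, 'CFA1': 36101.452,
         'CFA2': 36089.172, 'CFA3': 19919.488}
s0 = 27879.936  # mean of the whole (mosaiced) MasterFlat

factors = {ch: round(s0 / m, 2) for ch, m in means.items()}
print(factors)  # {'CFA0': 1.44, 'CFA1': 0.77, 'CFA2': 0.77, 'CFA3': 1.4}
```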

From these data it is evident that the flat field correction causes a very strong color cast when one compares the original light frames with the calibrated ones: whereas the uncalibrated light frames have a strong green color cast, the calibrated ones have a very strong magenta (red-violet) color cast.


3) Modified MasterFlat
Actually, the flat field correction should not use one s0 for all CFA channels, but rather the mean of the particular channel for each channel separately. One can achieve this by modifying the original MF. Note that multiplying channels of an MF by a factor is an allowed operation provided that
- no rescale is performed and
- no clipping of data occurs.

So I set up a PixelMath expression that modifies the MF in the desired way, i.e. it multiplies each CFA channel by the mean of the channel with the lowest signal (in this case CFA0) divided by the mean of the particular channel:

RGB/K:   iif(x()%2==0 && y()%2==0, $T, iif(x()%2==0 && y()%2!=0, m0/m1*$T, iif(x()%2!=0 && y()%2==0, m0/m2*$T, m0/m3*$T)))

Symbols: m0=mean(MF_CFA0), m1=mean(MF_CFA1), m2=mean(MF_CFA2), m3=mean(MF_CFA3)

Important: the 'Rescale result' option must be disabled!
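For readers outside PixInsight, the effect of this PixelMath expression can be sketched in NumPy. This is an illustration only; modify_master_flat is a made-up name, and a tiny synthetic flat stands in for the real MF.

```python
import numpy as np

def modify_master_flat(mf, m0, m1, m2, m3):
    """Multiply each CFA channel by m0 / (mean of that channel), so that
    all four channels end up with the same mean m0. No rescaling and,
    since m0 is the smallest mean, no clipping can occur."""
    out = mf.astype(float).copy()
    out[1::2, 0::2] *= m0 / m1  # CFA1 (green)
    out[0::2, 1::2] *= m0 / m2  # CFA2 (green)
    out[1::2, 1::2] *= m0 / m3  # CFA3 (blue)
    return out

# Tiny synthetic RGGB master flat with the approximate channel levels
mf = np.tile([[19400.0, 36100.0],
              [36090.0, 19920.0]], (4, 4))
mod = modify_master_flat(mf, 19400.0, 36090.0, 36100.0, 19920.0)
print(round(mod[1::2, 0::2].mean(), 3), round(mod[0::2, 1::2].mean(), 3))
```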


The statistics of the MF modified in this way are as follows:

Code:
    MF_mod_CFA0     MF_mod_CFA1     MF_mod_CFA2     MF_mod_CFA3
count (%)   100.00000       100.00000       100.00000       100.00000
count (px)  2923592         2923592         2923592         2923592
mean        19409.632       19409.632       19409.632       19409.632
median      19356.783       19358.273       19357.501       19338.635
variance    641136.407      584509.319      582579.815      595741.680
stdDev      800.710         764.532         763.269         771.843
avgDev      843.728         801.562         799.981         806.011
MAD         937.441         886.555         884.216         886.825
minimum     17190.099       17402.336       17497.622       17409.031
maximum     21401.955       21370.422       21388.488       21548.648

This modified MF (as a monochrome representation of the CFA data) can be used conveniently for judging the flat field correction, because the different illumination of the channels is balanced. The debayered modified MF does not show a strong color cast. There was, however, a slight color gradient visible: left side greenish, right side reddish. The modified MF does not cause a color cast when used for the flat field correction.


4) Influence on SNR
The most important question was whether the SNR of the end result is affected by applying a normal MF (with different illumination of the channels) to OSC data. In the Cloudy Nights thread, some people assumed that the unmodified MF would degrade the SNR during integration (if I understood them correctly). I did not find such an effect when I compared integrations of light frames calibrated with either MF, and would like to hear the arguments of the experts.


Bernd

3
General / Processing Images in an Image Container
« on: 2019 July 14 01:26:47 »
I wanted to process a bunch of images (sample format: 16-bit unsigned Integer) with PixelMath and force the output images to the sample format 32-bit IEEE 754 Floating point. To achieve such a sample format conversion in PixelMath with a single image, I would have to set Destination/Create new image, Sample format: 32-bit IEEE 754 Floating point. However, when I do this and process an Image Container, the images that I want are created and displayed, but the unchanged images are saved. On the other hand, ImageContainer does not have output hints where I can set the output sample format.

So how can I achieve what I wanted to?

Bernd

4
PixInsight 1.08.06.1457
Subframe Selector 1.4.3.27 (bugfix on 29th January 2019)
Windows 7 SP1

The new Subframe Selector process does not show the Measurement Window but only a black rectangle.

The Subframe Selector Script is working fine.

Bernd

5
Gallery / NGC 2023 and neighborhood
« on: 2018 February 07 08:08:15 »
NGC 2023 and neighborhood (IC 431, IC 432, NGC 2024, IC 435, IC 434 and B 33) is presumably one of the most frequently captured regions of the sky - and for me one of the most beautiful ones as well.

Takahashi FSQ 106N
Canon EOS 600D (Baader Corrector Filter BCF-1)
ISO 800
118 x 5 min = 9.83 h
14, 18 and 21 December 2017
Tijarafe, La Palma

at 85 % (4344 x 2858 px)

https://www.dropbox.com/s/oqyct1jlnp5dagi/NGC2023_D.jpg?dl=0


6
General / For beginners: Guide to PI's ImageCalibration
« on: 2017 December 27 11:03:48 »
Guide to PixInsight's ImageCalibration (revised on 26th February 2018, complemented on 10th April 2019 and 6th July 2019)

I observe that many PixInsight newcomers struggle with ImageCalibration and don't make progress. Similar questions are asked in the forum again and again. Although the questioner may finally have gotten the desired solution, unfortunately there is often no feedback given. The thread then remains open-ended and comes to nothing. Such threads are not helpful for beginners at all.

So this guide was created with the goal of providing general help with the usage of PixInsight's ImageCalibration for novices. Please don't reply to this post. If you feel that an important point is missing or that something is wrong, please send me a private message. If reasonable, I will supplement or correct my description.

Please keep in mind: PI's ImageCalibration is very powerful and flexible, but it does not perform many checks of whether your settings are reasonable. Some settings will even yield wrong results. The old wisdom applies here: "garbage in, garbage out". You personally are responsible for the right settings. There is no reason to file a bug report about ImageCalibration when your calibration result looks strange - just take a look at your settings.

The goal of well calibrated light frames can be achieved in different ways. That is the reason why you may read different recommendations for the preparation of master calibration frames and the calibration procedure. Since there is a wealth of different cameras, some of these recommendations may work well for one configuration and fail for another. It was my goal to describe a procedure that (hopefully) will work generally. Therefore, my recommendations may differ from approaches recommended elsewhere.

In my description I intentionally do not mention Overscan calibration because this is a specialty of some dedicated astro cameras. I guess that people who use such cameras know what they are doing.

--------------------------------------------------

1 General Settings
1.1 Digital Single Lens Reflex (DSLR) cameras
If you use a DSLR camera that is able to save the data in a proprietary raw format (e.g. Canon's CR2, Nikon's NEF or Sony's ARW [and other formats]), set your camera to use the raw format. In this case, set RAW format preferences (Format Explorer, double click on 'RAW') to 'Pure raw'. For DSLR data, the Color Filter Array (CFA) pattern (e.g. RGGB or GBRG) is written to the file header and is available for the Debayer process.

1.2 One Shot Color (OSC) cameras
The camera driver provides the data in FITS format. You should check once that the image displayed in PI has the correct orientation (is not mirrored). If not, change the setting 'Coordinate origin' in PI's Format Explorer|FITS. The default is 'Upper left corner (Up-bottom)'.
However, watch out: if you change this setting, all master calibration files will have to be newly generated with the very same setting as well, in order that the calibration yields correct results.
The CFA pattern is not necessarily saved in FITS files. Some acquisition software writes the FITS keyword 'BAYERPAT', which is used by PixInsight as well. If this keyword is not written to the FITS header, you have to specify the CFA pattern explicitly when debayering (see camera handbook).

1.3 DSLR and OSC cameras: use of raw CFA data for calibration
If you use a DSLR or an OSC camera, you should perform the entire calibration process with raw CFA data. Only after the calibration process (including CosmeticCorrection) is completed are the calibrated light frames debayered (to yield RGB images), then registered and finally integrated.

1.4 Monochrome cameras
If you use a monochrome camera, separate flat frames (and MasterFlats) have to be generated for each filter.

1.5 Acquisition software and file format
Use the same software for the acquisition of light frames and calibration frames, and the same file format for all frames.


2 Conditions for the Acquisition of Calibration Frames
2.1 Temperature
For cameras without temperature control: try to take the dark frames at the same ambient temperature as the light frames. For cameras with temperature control: use the same set value for all frames.

2.2 Camera settings
Use the same camera settings for the acquisition of dark frames that were used for the light frames. Relevant settings for DSLR cameras are ISO and exposure time; for astro cameras, gain, offset (if applicable) and exposure time.
Likewise, identical settings should be used for the flat frames and the frames used for the calibration of the flat frames.

2.3 Unchanged light path for the acquisition of flat frames
For flat frame acquisition it is all-important to have an unaltered light path, i.e. the same flattener or reducer, the same camera orientation (rotation angle) and the same focus position as when taking the light frames. It is best not to change anything and to take the flat frames directly before or after the light frames. With refractors it is usually possible to use one MasterFlat for some longer time.


3 Generation of the Master Calibration Frames
It is advisable to prepare (according to [1]) and then check the master calibration frames before the light frame calibration is executed.

3.1 MasterDark and MasterBias
It is assumed that the MasterDark and MasterBias are prepared according to [1].

3.2 MasterFlat
It is assumed that the MasterFlat is prepared by calibrating the flat frames and integrating the calibrated flat frames according to [1].
Depending on the equipment and the method of flat frame acquisition, some people consider it favorable not to use the 'Dark frame optimization' option in the calibration of the flat frames. Instead of the MasterDark with dark frame optimization they will use either
- a MasterFlat-Dark without dark frame optimization or
- a MasterBias.
In case of long exposure times of the flat frames (e.g. with narrow band filters), or with a camera with high noise or with "amplifier glow", a MasterFlat-Dark may be favorable. In case of short exposure times of the flat frames and a camera with low noise and without "amplifier glow", a MasterBias will do well. Just try which approach is best in your case.


4 Potential pitfalls in calibration
4.1 Correct subtraction of the bias level
In [3], Juan Conejero states:

Quote
Most [dark frame optimization] problems [(such as 'no correlation')] happen because the bias level is not correctly subtracted from one or more calibration frames; usually from the master dark frame.

I agree, but would not confine this statement to the usage of dark frame optimization - it holds generally. In the same thread, Juan specifies the three possibilities of subtracting bias correctly:

Quote
(1) You have simply integrated the individual dark frames to generate the master dark frame. In this case the master dark frame does have a bias pedestal. This is the most usual procedure, and also the one shown in Vicent's tutorial.

(2) After (1), you have calibrated the master dark frame with ImageCalibration to subtract the master bias frame. In this case the master dark frame does not have a bias pedestal.

(3) You have calibrated the individual dark frames by subtracting the master bias frame from each of them, before integrating them to generate the master dark frame. In this case the master dark frame does not have a bias pedestal, as in (2). This procedure is atypical.


Possibility (3), which Juan denotes as 'atypical', is the cumbersome version of (2): being mathematically equivalent to (2), it will produce identical results. There is just some (unnecessary) arithmetic involved.

With both (2) and (3), the subtraction of bias is executed in a preliminary step. I will denote this procedure as 'Pre-calibration'. This approach suffers from a serious drawback with modern DSLR and CMOS cameras: in the calibrated MasterDark severe clipping (truncating of negative values) occurs. The cause of this is explained in the following sections.

4.2 "Dark Current Suppression" and its impact for calibration
Modern DSLR or CMOS cameras have a dark current suppression mechanism (in the hardware) that subtracts dark current. Thus dark frames and bias frames have similar average intensities, almost independent of the exposure time of the dark frames! Subtraction of the MasterBias from the dark frames (or from the MasterDark) therefore results in negative values for a good portion of the pixels. If this subtraction is carried out in a preliminary step, all negative values will be truncated (clipped), and these data are lost. I experienced this situation both with a calibrated MasterDark of my Canon EOS 600D (= Rebel T3i) and recently with a calibrated MasterDark of my brand-new ZWO ASI294MC Pro. Image 1 shows a screenshot of the histograms: on the left side the not pre-calibrated MasterDark, on the right side the pre-calibrated MasterDark (here: of a Canon EOS 600D = Rebel T3i), in which about half of the peak is truncated (set to zero). If such a pre-calibrated MasterDark is used for light frame calibration, only about half of the pixels in the light frame are calibrated correctly. The rest of the pixels (in this example: the other half) are not corrected for dark current at all; only the bias is subtracted there! The consequence of using such a pre-calibrated MasterDark is higher noise in the calibrated lights.
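The clipping effect can be illustrated with synthetic numbers. This is an assumed toy model, not real camera data: Gaussian master dark and master bias with nearly equal mean levels, as produced by dark current suppression.

```python
import numpy as np

rng = np.random.default_rng(42)
# With dark current suppression, master dark and master bias have
# almost the same mean level (toy values in ADU).
master_dark = rng.normal(2048.0, 8.0, 1_000_000)
master_bias = rng.normal(2048.0, 8.0, 1_000_000)

diff = master_dark - master_bias          # intermediate result
precalibrated = np.clip(diff, 0.0, None)  # what pre-calibration stores

frac_truncated = np.mean(diff < 0.0)
print(f"{frac_truncated:.1%} of the pixels are truncated to zero")
```

Since the two distributions share the same mean, roughly half of the pixels of the pre-calibrated MasterDark end up clipped to zero, exactly the situation shown in the right histogram of Image 1.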

4.3 Truncation of negative values during a calibration process
Why are negative values simply truncated?
In [4], reply #2, Juan Conejero explained in detail the necessity of truncation of negative values in the calibration process. In short: for the calibration process a coherent data set must be used. However, the truncation of negative values is NOT applied to intermediate results of a calibration step, but rather
Quote
Truncation to the [0,1] range is carried out as the very last step of the calibration task for each frame, i.e. after overscan, bias, dark, flat and pedestal correction.

In [4], reply #2, Juan Conejero suggested the addition of a pedestal in order to avoid the issue of data loss by truncation of negative values during calibration. However, this is unnecessary. As you will see in section 5.1, there is no need to use pedestals when you
1) do not pre-calibrate either the dark frames or the MasterDark, and
2) calibrate the MasterDark (if necessary) only during light frame calibration.

4.4 Pre-calibration of the MasterDark?
To summarize: when the MasterDark is pre-calibrated, severe clipping occurs in the calibrated MasterDark. However, when using PI's option of calibrating the MasterDark only during light frame calibration, negative values in the intermediate result (the calibrated MasterDark) will not be clipped.


The bottom line from sections 4.1 to 4.4 therefore is:
NEVER pre-calibrate your dark frames, neither dark subframes nor the MasterDark. If necessary, use instead PixInsight's option of calibrating the MasterDark during calibration of the light frames.

--------------------------------------------------
There are tutorials that suggest pre-calibrating the dark frames or the MasterDark and that do not even mention the truncation of negative values (even the otherwise excellent tutorial [5] does this). This is bad advice. For modern DSLR or CMOS cameras which apply dark current suppression, such a calibration approach will result in a lower SNR of the calibrated light frames. Fortunately, better instructions can be found, e.g. [6] and [7].
--------------------------------------------------


5 Light Frame Calibration with PI's ImageCalibration
Select the ImageCalibration process and load the light frames by 'Add Files'.

5.1 Dark frame optimization
Once you have decided not to pre-calibrate the MasterDark, things become quite simple. In fact, for light frame calibration there are only two settings left that lead to a correct subtraction of the bias level. These are the two cases that have to be differentiated:

Case 1: Calibration WITH dark frame optimization
The general form of the applied calculation is represented in equation {1}:

Code:
Cal1 = ((LightFrame - MasterBias) - k0 * (MasterDark - MasterBias)) / MasterFlat * s0   (WITH dark frame optimization) {1}
where k0: dark scaling factor,
      s0: master flat scaling factor (= Mean of the MasterFlat)

Conclusion: The only problematic term (concerning negative values) in equation {1} is: (MasterDark - MasterBias). When the calibration of the MasterDark is executed only during light frame calibration, the term (MasterDark - MasterBias) is an intermediate result, accordingly negative values will not be truncated. For the calibration of the light frames with dark frame optimization, use the settings in Image 2, right side.

Settings:
a) A MasterBias is needed, so check section 'Master Bias' and select the MasterBias.
b) Check section 'Master Dark' and select the MasterDark. Check both options 'Calibrate' and 'Optimize'.

Note that in dark frame optimization PI uses neither temperature nor exposure time to evaluate k0. PI optimizes k0 for lowest noise in the resulting calibrated light frame [2].
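PI's actual optimization algorithm is described in [2]; the idea of choosing k0 for lowest noise can be illustrated with a toy brute-force search. All numbers below, including the 0.8 temperature-scaling factor, are invented for this sketch and do not come from PixInsight.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
bias = 100.0
dark_signal = rng.normal(50.0, 5.0, n)   # fixed-pattern dark current
sky = 20.0

master_bias = bias + rng.normal(0.0, 1.0, n)
master_dark = bias + 0.8 * dark_signal   # dark taken at a lower temperature
light = bias + dark_signal + sky + rng.normal(0.0, 2.0, n)

# Brute-force version of the idea: pick the k0 that minimizes the noise
# (standard deviation) of the calibrated result of equation {1}.
ks = np.linspace(0.5, 2.0, 151)
noise = [np.std((light - master_bias) - k * (master_dark - master_bias))
         for k in ks]
k0 = ks[int(np.argmin(noise))]
print(round(k0, 2))  # close to 1/0.8 = 1.25
```

The minimum-noise k0 lands near the factor that undoes the mismatch between dark and light frame conditions, which is the point of the optimization.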


Case 2: Calibration WITHOUT dark frame optimization
Without dark frame optimization, k0 equals 1.0, therefore MasterBias is canceled out from equation {1}. Thus equation {1} is simplified to equation {2}:

Code:
Cal2 = (LightFrame - MasterDark) / MasterFlat * s0                      (NO dark frame optimization) {2}
Conclusion: The term that could be problematic concerning negative values, (MasterDark - MasterBias), does not appear in equation {2}; there is no MasterBias at all. For a light frame calibration without dark frame optimization, a MasterBias is not needed, because the MasterDark is not calibrated at all. Accordingly, truncation of negative values cannot occur. For the calibration of the light frames, use the settings in Image 2, left side.

Settings:
a) Neither bias frames nor a MasterBias is needed, so uncheck section 'Master Bias'.
b) Check section 'Master Dark' and select the MasterDark. Uncheck both options 'Calibrate' and 'Optimize'.


Whether it is favorable to use dark frame optimization or not depends on the camera. Using dark frame optimization for a camera without temperature control may greatly improve the calibration result, because temperature deviations between dark and light frame acquisition are unavoidable in this case. For a camera with temperature control the benefit will be much lower, possibly not even detectable.

However, if the camera shows "amplifier glow", this might not be calibrated out completely with dark frame optimization enabled. For a camera without temperature control that shows "amplifier glow", it is worthwhile to compare the results of both settings accurately.

5.2 Section 'Master flat'
Settings:
Check the 'Master flat' section and select your MasterFlat. Since the flat frames are already (according to [1]) calibrated and then integrated to a MasterFlat, uncheck 'Calibrate' in the 'Master flat' section.

5.3 ImageCalibration's output to Process Console
After having adjusted the necessary settings for light frame calibration, the process can be executed. During or after the execution, observe the output to the process console. The output may look like the following example (extract):

--------------------------------------------------
Applying bias correction: master dark frame ...

Dark frame optimization thresholds:
Td0 = 0.00179381 (733282 px = 4.068%)
Computing master flat scaling factors ...
s0 = 0.093233                           (1) <==
...
Writing output file: F:/Astro/171217/Calibrated/Light_360SecISO800_092718_c.xisf
Dark scaling factors:
k0 = 0.985                              (2) <==
Gaussian noise estimates:
s0 = 1.687e-03, n0 = 0.911 (MRS)
...

--------------------------------------------------

(1):
The master flat scaling factor s0 is the mean of the MasterFlat.

(2) (Only when dark frame optimization is enabled):
k0 is the dark scaling factor, optimized for lowest noise in the resulting calibrated light frame.

5.4 Approved settings for light frame calibration (examples from personal experience)
Canon EOS 600D = Rebel T3i (no temperature control, no "amplifier glow"):
Using this DSLR without temperature control, the result of the light frame calibration was greatly improved by using dark frame optimization. Because the 600D doesn't show "amplifier glow" in the dark frames, the MasterBias was used for flat frame calibration.

- Take dark frames, bias frames and flat frames,
- integrate the dark frames to a not pre-calibrated MasterDark,
- integrate the bias frames to a MasterBias (and, if applicable, make superbias),
- calibrate the flat frames with the MasterBias (or superbias) and integrate to a MasterFlat,
- calibrate the light frames using the settings shown above in Image 2, right side.


ZWO ASI294MC Pro (with temperature control, with "amplifier glow"):
Using this CMOS camera with temperature control, I did not notice an improvement of the light frame calibration result by using dark frame optimization. The "amplifier glow" tended not to calibrate out completely with dark frame optimization enabled. So I decided not to use it. The "amplifier glow" as well was the reason for using a MasterFlat-Dark for flat frame calibration.

- Take dark frames, flat-dark frames and flat frames,
- integrate the dark frames to a not pre-calibrated MasterDark,
- integrate the flat-dark frames to a not pre-calibrated MasterFlat-Dark,
- calibrate the flat frames with the MasterFlat-Dark and integrate to a MasterFlat,
- calibrate the light frames using the settings shown above in Image 2, left side.


6 After Light Frame Calibration
6.1 CosmeticCorrection
The calibration will leave some hot pixels in the calibrated light frames. This is normal, because hot pixels don't behave ideally. These remaining hot pixels must be corrected with the CosmeticCorrection process directly after the calibration process is completed.

6.2 Debayer
In the case of a DSLR or an OSC camera, the calibrated light frames have to be debayered now, thus yielding RGB images. For OSC cameras, the CFA pattern might have to be specified explicitly (see 1.2).

6.3 Registration
The images are then registered by the StarAlignment process.

6.4 Integration
Finally, the registered images are integrated by the ImageIntegration process.

--------------------------------------------------
Note for DSLR or OSC cameras (added on 10th April 2019):

The debayering is an interpolation algorithm which may introduce certain kinds of artifacts that may show up even in the integration result. If you capture a sufficiently large number of light frames and dither enough between light frames, it is worth trying to apply a method called "CFA drizzle". This is the approach recommended by Juan Conejero [10]. It has two important advantages over debayer interpolation: the resolution of the integration result will be higher and interpolation artifacts are avoided.

When using CFA drizzle, the workflow after calibration of the light frames must be modified slightly:
Steps 6.1 Cosmetic Correction and 6.2 Debayer remain the same. In steps 6.3 Registration and 6.4 Integration, the option 'Generate drizzle data' has to be checked. Subsequently an additional step, 6.5 Drizzle Integration, is necessary. The option 'Enable CFA drizzle' has to be checked here. The path to the calibrated light frames has to be input under 'Format hints/Input directory', if they reside in a directory different from that where the Drizzle data files are.
--------------------------------------------------

With this, data reduction is complete.

==================================================

I have since searched the PixInsight forum and found the following interesting threads ([8] and [9]) that I was not previously aware of. So I discovered that my recommendations are anything but new:

The forum user Ignacio used and suggested the approach with dark frame optimization (Case 1, see [8] and [9]), and the forum user astropixel used and suggested the approach without dark frame optimization, referred to as 'bias_in_the_dark (BITD) method' (Case 2, see [8]). This all was written already in 2014!

==================================================


--------------------------------------------------

Sources:

[1] Tutorial by Vicent Peris: "Master Calibration Frames: Acquisition and Processing"
https://www.pixinsight.com/tutorials/master-frames/

[2] Juan Conejero: "Dark Frame Optimization Algorithm"
https://pixinsight.com/forum/index.php?topic=8839.0

[3] Juan Conejero in: "Warning: No correlation between the master dark and target frames", Replies #2 and #4
https://pixinsight.com/forum/index.php?topic=3286.0

[4] "Image Calibration and negative values (KAF 8300)", https://pixinsight.com/forum/index.php?topic=6822.0

[5] Kayron Mercieca, "2. Generating a Master Superbias and a Master Dark" and "4. Calibrating Lights and Correcting Hot and Cold Pixels"
http://www.lightvortexastronomy.com/tutorial-pre-processing-calibrating-and-stacking-images-in-pixinsight.html#Section2

[6] David Ault, "PixInsight Manual Image Calibration, Registration and Stacking"
http://trappedphotons.com/blog/?p=693

[7] Warren A. Keller, "Inside PixInsight", Springer (2016)

[8] "Preprocessing Canon DSLR frames - a different approach", https://pixinsight.com/forum/index.php?topic=7006.0

[9] "Bias frames - pixel rejection - temperature regulated DSLR", https://pixinsight.com/forum/index.php?topic=6980.0

[10] "Bayer drizzle instead of de-Bayering with OSC", https://pixinsight.com/forum/index.php?topic=13504.msg81540#msg81540 and
"Additional Interpolation Methods in Debayer Module", https://pixinsight.com/forum/index.php?topic=13549.msg81718#msg81718

--------------------------------------------------

7
General / Dark Frame Optimization: Optimization Threshold
« on: 2017 December 14 04:28:59 »
I searched in this forum for the term "Optimization Threshold" and found the thread

https://pixinsight.com/forum/index.php?topic=10123.0

The author of this thread got the message "Warning: No correlation between the master dark and target frames (channel 0)" and wanted to know what this means and how to prevent it. In Reply #3, Vicent Peris suggested to do a PixelMath operation on the MasterDark and then report the statistics to him. Sadly the author was not responsive, and the thread came to nothing.

I am unsure as well about the correct setting of this parameter in ImageCalibration, section 'Master Dark'. It is also unclear to me whether the default setting (= 3, in units of sigma) is suitable for my configuration. So I performed the suggested PixelMath operation
Code:
iif($T > (med($T) + adev($T) * k), $T, 0)

on one of my typical, not pre-calibrated MasterDarks and want to present the result here, hoping that an explanation will be given. The MasterDark is integrated from 40 dark frames of a DSLR (Canon EOS 600D), ISO 800, exposure 360 s, camera temperature 24 °C, ambient temperature 11.0 -> 9.4 °C in the course of the dark frame acquisition.
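For reference, the operation can be reproduced outside PixInsight with NumPy. This is a sketch under two assumptions: PixelMath's adev() is taken here as the average absolute deviation from the median, and the MasterDark values are synthetic, not my real data.

```python
import numpy as np

def threshold_pixels(master_dark, k):
    """Keep pixels brighter than median + k * average deviation,
    set all others to zero (mirrors the PixelMath expression above)."""
    med = np.median(master_dark)
    adev = np.mean(np.abs(master_dark - med))
    return np.where(master_dark > med + k * adev, master_dark, 0.0)

rng = np.random.default_rng(7)
md = rng.normal(0.002, 0.0005, 100_000)  # synthetic MasterDark in [0,1] scale
kept = threshold_pixels(md, 3.0)
frac = np.mean(kept > 0.0)
print(f"{frac:.2%} of the pixels exceed the threshold")
```

For roughly Gaussian data, a threshold of median + 3 average deviations keeps only about the brightest percent of pixels, which matches the intent of isolating hot pixels.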

Vicent, if you read this, please answer my following questions:

1) For the data I supplied, what would be the correct setting of the optimization threshold?
2) Up to now I always left the optimization threshold at the default value of 3. Is it expected to get better results when I change this value?
3) Is the default setting of the size of the optimization window (1024 x 1024 px) suitable for my camera (w=5202 h=3465)?
4) How can I determine whether the quality of my MasterDark is sufficient (number of dark frames, constancy of temperature, ...)?

Bernd

8
Image Processing Challenges / Annoying background brightening
« on: 2017 July 05 08:35:34 »
Hello,

I have an issue with the processing of images of the Helix nebula (NGC 7293). These have been shot with a Takahashi FSQ 106 N and a Canon 600D (modified). The camera is connected via a CA-35 (TS 102) adapter and a Wide T-Mount adapter (with Canon bayonet joint) to the refractor.

At first I guessed that a bad flatfield correction (with sky flats) was the cause, but when the flatfield correction with an LED flatfield box produced the same result, I ruled that out. This is the integration of 104 light frames (360 s each) after calibration (MasterBias, MasterDark, MasterFlat; the flat frames were calibrated with a flat-dark before integration), debayering, aligning and cropping (because of dithering). It is the linear image, not processed further, auto-stretched by PixInsight:

http://ge.tt/4CeMwfl2
(Image 1: NGC 7293)

In the outer region of the image you see a brightening of the background, mainly in the red channel (roughly oval ring-shaped) but also in the green channel (cushion-shaped, at top and bottom). The blue channel is flat.

I suspected that a reflection was the cause of the brightening, so I shone a small LED lamp into the refractor:

http://ge.tt/4CeMwfl2
(Image 2: Reflex)

Clearly visible is a ring-shaped reflection which is caused by the Wide T-Mount adapter.

http://ge.tt/4CeMwfl2
(Image 3: Wide T-Mount adapter)

I can imagine that the relatively bright object NGC 7293 near the image center also produces a reflection that is in turn reflected by one of the lenses towards the sensor, but maybe there are other possibilities.

My questions:
1) What is the cause for the brightening in the red and green channel?
2) How can I eliminate this cause?
3) How can I correct the present images?

For the moment this result has thoroughly put me off astrophotography. I'd really appreciate your ideas.

Regards, Bernd

9
In my first private JavaScript "project" I wanted to retrieve some specific metadata from the raw camera file (CR2), metadata that cannot be found in the XISF file. So I wrote a script that reads the whole CR2 file, searches for the Image File Directories (IFDs), gets the Exif tags and retrieves some data, e.g. the camera temperature. Apparently you can read such a file only into a ByteArray, so that is what I did. Then, in order to get the data from the ByteArray, I programmed the data fetching for each data type that a CR2 file may contain: byte, word, long and rational, each signed/unsigned; single and double float; character and string; and the conversion of APEX values (Shutter Speed and Aperture).

OK, the script works, but I am convinced that if anybody ever had a look at that code, he would judge that this is not JavaScript. Part of the code (especially the data fetching) looks rather cumbersome, and I wonder whether the same result could be achieved more elegantly and with much better performance. (While performance was not an important criterion in this case, I have other projects in mind where it is absolutely crucial.) Maybe JavaScript is not the best choice of programming language for such a 'close to the bits and bytes' project, or maybe I am simply missing the obvious.

Here the first part of my questions:

1) In the Object Explorer some Core JavaScript Objects are marked with a ball, others with a pyramid or tetrahedron - what do the different symbols mean?
2) Is it possible to read directly from a file into a typed array instead into a ByteArray (reason: performance of subsequent data processing)?
3) Array, ByteArray, typed arrays: is it possible to convert one into another, and how? Or is there actually no need for such conversions because JavaScript offers different ways to achieve the same result?
4) Is it possible to retrieve data from a ByteArray in a more elegant way than I did or is it really necessary to program a function for each data type?
5) The ByteArray method "buffer.insert()" throws an error when the ByteArray is created with "var buffer = new ByteArray();". However, it works without error with "var buffer = new ByteArray(1);" - but then one has to remove the last byte from the ByteArray in order to get the correct ByteArray.length. So why is it not possible to insert into an empty ByteArray? Is this a peculiarity of JavaScript?
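Regarding question 4: for comparison, in standard JavaScript (Node or browser, not PJSR) a DataView over an ArrayBuffer covers all of the fixed-size typed reads without hand-written per-type functions; whether PJSR offers an equivalent is part of the question. The byte values below are a made-up minimal TIFF/CR2 header, and getURational is an illustrative helper:

```javascript
// Standard-JavaScript sketch (not PJSR): reading typed values from raw bytes
// with a DataView. A TIFF/CR2 header starts with "II" when little-endian.
const bytes = new Uint8Array([0x49, 0x49, 0x2a, 0x00, 0x10, 0x00, 0x00, 0x00]);
const view = new DataView(bytes.buffer);

const littleEndian = view.getUint16(0, true) === 0x4949; // "II" byte-order marker
const magic = view.getUint16(2, littleEndian);           // 42 for TIFF/CR2
const ifdOffset = view.getUint32(4, littleEndian);       // offset of the first IFD

// An unsigned rational (Exif type 5) is two consecutive uint32 values:
function getURational(dv, offset, le) {
  return dv.getUint32(offset, le) / dv.getUint32(offset + 4, le);
}
```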

Please consider that I am completely inexperienced with object-oriented programming, so some questions may seem rather dumb. I would also appreciate replies like "the answer can be found here: (link)".

Bernd

10
When I enable "Import FITS Header Keywords as Properties" in the Format Explorer/XISF preferences and try to open an XISF file, I get the error messages "Invalid Image property identifier 'FITS:DATE-OBS': ...." and "Unable to open file: ....". PixInsight evidently rejects '-' as a disallowed character in property identifiers. When I replace '-' with '_' in the file, everything works as expected.

Bernd

11
PCL and PJSR Development / Access to metadata from PixInsight
« on: 2017 April 04 07:08:54 »
In an attempt to learn more about JavaScript programming in PixInsight I had a look at the Script "BatchFormatConversion". I did not understand the following code in function FileData:

Code: [Select]
if ( outputFormat.canStoreMetadata && instance.format.canStoreMetadata )
   this.metadata = instance.metadata;
else
   this.metadata = undefined;

"instance" is the FileFormatInstance that is returned by function "this.readImage".

Currently the properties "FileFormat.canStoreMetadata" and "FileFormatInstance.metadata" do not exist according to the documentation in Object Explorer. I confirmed this by inserting the following lines:

Code: [Select]
console.writeln(instance.format); // output: instance.format = XISF
console.writeln("instance.format.canStoreMetadata exists?\t", ("canStoreMetadata" in instance.format)); // output: instance.format.canStoreMetadata exists? false
console.writeln("instance.metadata exists?\t", ("metadata" in instance)); // output: instance.metadata exists? false

My question is: how does PixInsight offer a way to access the metadata?

The View Explorer shows e. g. the following 8 properties:

metadata
properties
   XISF:BlockAlignmentSize
   XISF:CreationTime
   XISF:CreatorApplication
   XISF:CreatorModule
   XISF:CreatorOS
   XISF:LoadTime
   XISF:MaxInlineBlockSize
   XISF:ResourceURL

LoadTime and ResourceURL are not stored in the XISF header but are generated during FileFormatInstance.open. The remaining 6 properties are fetched from the XISF header of the file.

Because the View Explorer shows these properties, I guess PixInsight offers a way to access them. The property identifiers of all 8 properties are accessible via FileFormatInstance.properties. However, I only got the property identifiers; I was not able to access the property types and values this way. I tried FileFormatInstance.readProperty(String id), but this resulted in an access violation.

How can I manage to get type and value of the properties?

Bernd

12
Is there a reason that the 'less than' character ("<") is not output by "console.writeln"? This contrasts with the behavior of "console.log" in the JavaScript environment of Firefox.

What are further differences between "console.writeln" and "console.log"?

Bernd

13
Evaluation and visual representation of three-dimensional images

With growing interest I read a paper on a rather dry subject, the XISF Version 1.0 Specification (DRAFT 9.3), http://pixinsight.com/doc/docs/XISF-1.0-spec/XISF-1.0-spec.html. The topic of three-dimensional images especially aroused my interest:

In section 5, "Overview of XISF 1.0 Features", the dimensionality and the number of channels of an image are defined as distinct concepts. In section 8.5.1, "Structure and Properties", the concept is explained in more detail, and Figure 1, "Structure of a two-dimensional image", illustrates the case of a two-dimensional image. The second paragraph of section 8.5.2 clearly defines three-dimensional images, and section 11.5.1, among other things, deals with the image attribute 'geometry'.

According to this specification PixInsight is (or will be?) ready to represent three-dimensional images internally, but as far as I am aware, PixInsight does not yet make use of this. I would like to have processes in PixInsight for opening, saving, evaluating and visually representing the data of three-dimensional images. Such functionality is commonly applied in image processing in fields such as medicine or biology. However, in astrophotography too one can view a stack of dark frames or a stack of light frames as a three-dimensional image, and I think it is worthwhile giving this some thought:

As an example, in the examination of a camera sensor I needed functionality that would let me examine a specific hot pixel (x, y) and output (as a chart and as numerical data) the intensity as a function of depth (the z coordinate) in a stack of dark frames.

Second example: imagine a stack of images as a cuboid. Normally, as in the Blink process, we look at it such that we see the (x, y) image at a specific z coordinate. The x axis is the direction of rows and the y axis is the direction of columns of the image. Even if Blink treats the inspected images as separate, together they could be considered one three-dimensional image; when we scroll through them, we virtually change the z coordinate. Now I would like to be able to turn the cuboid in space (graphically) and view it from a different direction. One could then visually represent two-dimensional (x, z) or (y, z) images and scroll through the y or x coordinate, respectively. In this way one could, e.g., rapidly and unambiguously identify cosmic ray hits.
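Both examples can be sketched in plain JavaScript under an assumed data layout: the stack is an array of row-major frames, and the names zProfile and xzSlice are made up for illustration.

```javascript
// Sketch (plain JavaScript, hypothetical layout): a stack of `depth` frames,
// each a row-major Float32Array of width * height pixels, viewed as a 3-D image.
const width = 4, height = 3, depth = 5;
const stack = Array.from({ length: depth }, (_, z) =>
  Float32Array.from({ length: width * height }, (_, i) => z * 100 + i));

// First example: intensity as a function of depth at a fixed pixel (x, y).
function zProfile(stack, x, y, w) {
  return stack.map(frame => frame[y * w + x]);
}

// Second example: an (x, z) slice at fixed y - rows are z, columns are x.
function xzSlice(stack, y, w) {
  return stack.map(frame => Array.from(frame.subarray(y * w, y * w + w)));
}
```

A cosmic ray hit would appear in such a z-profile as a single outlier at one z coordinate.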

Juan Conejero wrote in topic 4529 (August 2012): "PixInsight is an image processing and analysis platform specialized in astronomy and other technical imaging fields." Advanced functionality for three-dimensional images would facilitate specific evaluations and visual representations in astronomy, and would also make PixInsight more interesting to a broader public (potential users in other scientific fields).

I guess the needed processes are already at hand internally in PixInsight. Unfortunately my programming skills are not sufficient; otherwise I would now have an interesting JavaScript programming project. In any case, I am curious about the further development of PixInsight, specifically about the support of three-dimensional images. Perhaps the developers can take the time to comment on their point of view?

Bernd

14
General / Dark frame optimization and sensor defects
« on: 2017 February 21 09:18:42 »
In order to get the highest SNR from the data, I tried different image integration parameters and compared the results. (My general set-up: non-cooled Canon 600D; light frames ISO 800, 360 s at 14 °C; 40 dark frames ISO 800, 360 s at 16 °C; MasterBias from 40 bias frames ISO 800, 1/4000 s at 20 °C; DSLR_RAW: Pure Raw, Raw CFA; no flat frames.) Best results were obtained with the following calibration settings:

1) Integration of dark frames, NO subtraction of MasterBias (a subtraction of MasterBias prior to the calibration of light frames resulted in severe clipping of data),
2) Calibration with MasterDark (Calibrate: checked, Optimize: checked) and MasterBias (Calibrate: not checked).

Every sensor contains pixels that cannot provide useful information, resulting in "hot pixels". My question concerns such defective pixels and the calculation of the scaling factor k0:

Are the data from defective pixels used in the calibration process when calculating k0?

If this is the case, it ought to be changed, because the defective pixels deviate the most from linearity. My proposal:

The normal workflow places cosmetic correction as the next step after calibration. I propose to apply a defect list (or defect map) already at the calibration stage, that is: set the difference (MasterDark - MasterBias) that is computed during calibration to ZERO for defective pixels. This would prevent the calculation of the scaling factor k0 from being skewed by data from defective (non-linear) sensor pixels.
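A toy sketch of the proposal in plain JavaScript (function name, data layout and defect representation are all hypothetical):

```javascript
// Zero the thermal signal (MasterDark - MasterBias) at defect-listed pixels,
// so they cannot skew the subsequent dark-scaling optimization.
// `masterDark` and `masterBias` are flat arrays of equal length;
// `defects` is a Set of flat pixel indices.
function thermalSignal(masterDark, masterBias, defects) {
  const out = new Float64Array(masterDark.length);
  for (let i = 0; i < out.length; ++i)
    out[i] = defects.has(i) ? 0 : masterDark[i] - masterBias[i];
  return out;
}
```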

Bernd

15
General / PixelMath: advice needed
« on: 2017 January 27 07:07:55 »
For an investigation of bias frames and dark frames I need advice on how to achieve the following:

1) Transfer an image with width = w and height = h to an image with width = w and height = 1, where the pixel values of the new image are the mean (or median, or stdDev, ...) of the corresponding column in the initial image.

2) Transfer an image with width = w and height = h to an image with width = 1 and height = h, where the pixel values of the new image are the mean (or median, or stdDev, ...) of the corresponding row in the initial image.

Is this already feasible with PixelMath or is it a candidate for the Wish List?
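For illustration, the two reductions can be sketched in plain JavaScript for a row-major pixel array (function names are made up):

```javascript
// 1) Collapse a w x h image (row-major flat array) to column means (w x 1).
function columnMeans(pixels, w, h) {
  const sums = new Float64Array(w);
  for (let y = 0; y < h; ++y)
    for (let x = 0; x < w; ++x) sums[x] += pixels[y * w + x];
  return sums.map(s => s / h);
}

// 2) Collapse the same image to row means (1 x h).
function rowMeans(pixels, w, h) {
  const sums = new Float64Array(h);
  for (let y = 0; y < h; ++y)
    for (let x = 0; x < w; ++x) sums[y] += pixels[y * w + x];
  return sums.map(s => s / w);
}
```

The median and stdDev variants would follow the same pattern, collecting each column (or row) into an array first.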

Bernd

Pages: [1] 2