PixInsight 1.6.1 - New DefectMap Tool

Juan Conejero

PixInsight Staff
PixInsight 1.6.1 introduces a new image calibration tool: DefectMap. This tool allows you to fix bad pixels (e.g. hot and cold pixels) by replacing them with appropriate values computed from neighboring (valid) pixels. DefectMap has been authored by PTeam member Carlos Milovic, and has been released as an open-source tool belonging to the standard PixInsight ImageCalibration module, under the GPL v3 license. The full source code of the ImageCalibration module is included in all standard PCL distributions.

[Attached screenshot: DefectMap01.jpg]

DefectMap provides three working parameters:

Defect map

This is the identifier of a view that will be used as the defect map of the process. In a defect map, black (zero) pixels correspond to bad or defective pixels. Bad pixels will be replaced with new values computed from neighbor pixels. When using convolution replacement operations (Gaussian, mean), non-zero pixel values represent the pixel weights that will be used for convolution. Morphological replacement operations (minimum, maximum, median) always use binarized (0/1) map values.

Operation

Bad pixel replacement operation. This parameter specifies the convolution or morphological operation used to replace bad pixels with new pixels computed from neighbor values. Mean and Gaussian will be applied by convolution; minimum, maximum and median will be applied as morphological operators.

Structure

This parameter specifies the shape of a structural element used to apply a convolution or morphological transformation for bad pixel replacement. The structural element defines which neighbor pixels will be used to compute replacement values.
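
For readers who prefer to see the idea in code, here is a minimal sketch of how these three parameters interact. This is plain JavaScript, not the actual ImageCalibration source code; the function name, the row-major Float32Array image layout and the 0/1 structure matrix are illustrative assumptions, and only the mean and median operations are shown.

Code:
// Minimal sketch (not the actual module code). Single-channel image and map
// are row-major Float32Arrays; map == 0 marks a defective pixel. For the
// "mean" operation the non-zero map values act as convolution weights; for
// "median" the map is used only as a 0/1 mask. 'structure' is a square
// matrix of 0s and 1s playing the role of the structuring element.
function replaceDefects( img, map, width, height, structure, operation )
{
   let out = Float32Array.from( img );
   let r = (structure.length - 1) >> 1; // structure radius
   for ( let y = 0; y < height; ++y )
      for ( let x = 0; x < width; ++x )
      {
         if ( map[y*width + x] != 0 ) // good pixel: keep it
            continue;
         let values = [], weights = [];
         for ( let dy = -r; dy <= r; ++dy )
            for ( let dx = -r; dx <= r; ++dx )
            {
               let xx = x + dx, yy = y + dy;
               if ( xx < 0 || xx >= width || yy < 0 || yy >= height )
                  continue; // outside the image
               if ( structure[dy + r][dx + r] == 0 )
                  continue; // outside the structuring element
               let w = map[yy*width + xx];
               if ( w == 0 )
                  continue; // this neighbour is also defective
               values.push( img[yy*width + xx] );
               weights.push( w );
            }
         if ( values.length == 0 )
            continue; // no valid neighbours: leave the pixel unchanged
         if ( operation == "median" )
         {
            values.sort( (a, b) => a - b );
            out[y*width + x] = values[(values.length - 1) >> 1];
         }
         else // weighted mean replacement
         {
            let s = 0, sw = 0;
            for ( let i = 0; i < values.length; ++i )
            {
               s += weights[i]*values[i];
               sw += weights[i];
            }
            out[y*width + x] = s/sw;
         }
      }
   return out;
}

With a purely binary map and an all-ones 3x3 structure, this reduces to a plain mean or median of the good pixels in the 8-neighbourhood.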
 
Hi Nikolay,

No, I cannot see how a Defect Map - in a general case - could also be made to cater for an image that has not yet been de-Bayered.

Consider a single-pixel defect in an OSC array, take a Red pixel, for example, which is behaving like a 'Dead' pixel. If you were to 'synthesise' the 'Rd' ADU value for this pixel from 'nearest neighbours', but tried to do this PRIOR to deBayering, the nearest 'Rd' neighbours would not be found 'one pixel away' (because these would ALL be either Gn or Bu pixels), but would be AT LEAST 'two pixels away' (almost 3 pixels in the case of the 'corner neighbours').

I would have thought that this would lead to a very 'poor' interpolation for the missing data. That said, synthesis of the Gn or Bu data (in the case of this example pixel location) would not be 'too bad' - after all, even if the pixel was NOT DEAD, there still would not be any Gn or Bu data at that site anyway - because it is under a 'Rd' filter in the first place - so it should (theoretically) behave as a 'dead' pixel for Gn and Bu anyway.

However, once you have deBayered the monochrome image, then EVERY pixel location DOES now have a one-pixel-distant 'nearest neighbour'. This means that a 'pixel defect' can now find data that is as close as one pixel away - even if that data itself has ALREADY been synthesised from one-pixel-distant 'nearest neighbours' during the deBayer stage.
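
Just to put numbers on what I mean, here is a tiny plain-JavaScript illustration (nothing to do with PixInsight's internals, the little helper is purely hypothetical) of the same-colour neighbour distances in an RGGB mosaic:

Code:
// Colour of the CFA cell at (x,y), assuming an RGGB pattern:
// row 0: R G R G ...   row 1: G B G B ...
function cfaColor( x, y )
{
   if ( (y & 1) == 0 )
      return (x & 1) == 0 ? "R" : "G";
   return (x & 1) == 0 ? "G" : "B";
}

// List every red neighbour of the red pixel at (2,2) within a 5x5 window.
let x0 = 2, y0 = 2;
for ( let dy = -2; dy <= 2; ++dy )
   for ( let dx = -2; dx <= 2; ++dx )
      if ( (dx != 0 || dy != 0) && cfaColor( x0 + dx, y0 + dy ) == "R" )
         console.log( "offset (" + dx + "," + dy + ") distance " +
                      Math.hypot( dx, dy ).toFixed( 2 ) );
// The nearest red neighbours turn out to be 2.00 pixels away along the axes
// and 2.83 pixels away at the corners - never 1 pixel, as in the mono case.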

At the end of the day, working with OSC images will ALWAYS be a compromise. There are arguments for and against deBayering at the 'start' of the calibration process, versus the 'end' of the process.

Personally, I feel that standard 'Calibration' (Darks, Flats, Biases, etc.) should be applied to the Bayered data first. Then the images should be deBayered, then they should be Aligned, then they should be Integrated. Only then would I consider applying a DefectMap correction.

However, I have also now been thinking - ever since the new power of StarAlignment, ImageCalibration and ImageIntegration became available in PI, and considering that all of the data can be processed at the 32-bit level (minimum) - what would then be 'wrong' with carrying out all of these pre-processing steps FIRST, arriving at a 'MasterLight' image that STILL contains a reference to the CFA location?

If all of the images are eventually aligned to an image that was itself 'an original image' - then the aligned images must, surely, have retained their 'CFA location' data - haven't they? Or am I missing something?

If it can be said that they HAVE retained their 'knowledge' of the CFA, this would mean that the MasterLight must ALSO have retained the precise physical relationship to the CFA. And this means that the deBayer process now needs to be applied ONCE, and ONCE ONLY to the entire data set - i.e. we would no longer need recourse to a 'BATCH DEBAYER' routine.

To me, this MUST be an advantage - after all, EVERY application of the deBayer algorithm is introducing 'some noise' - where 'noise' is the ACTUAL RESULT of the 'guessing game' that takes place for every pixel that requires interpolation. Interpolation is 'guesswork' - no matter how sophisticated that 'guesswork' might be.

Now, the counter-argument against the 'single-pass' debayer application is that it is 'more difficult' to align the non-debayered images in the first place - because the CFA 'grid' causes all sorts of 'nasties' during the alignment process. In which case I see two options:

Either,
The images are deBayered and StarAligned first - and the 'results' of the StarAlignment process are 'memorised' for subsequent re-application to the non-deBayered images

or,
The StarAlignment process can call a deBayer algorithm internally, in order to be able to process each image in turn, before finally applying the transformation to the RAW data

In both of these cases, I believe, PixInsight would need to be capable of storing a known deBayer matrix for the user to apply.

And this then begs the question, "OK, so I can (relatively) easily define the physical relationship of the CFA to the incoming image, but what about the scaling parameters needed in order to be able to try and define some sort of suitable colour balance?"

Well, based on experience with the far more troublesome CMYG ColourFilterArray present in the likes of the DSI-IIC imagers (and others), it just isn't really that necessary - not when dealing with astronomical images in PixInsight - where we also have DynamicBackgroundExtraction (DBE), BackgroundNeutralisation (BN) and ColourCorrection (CC) to help us.

What are other people's thoughts?

Cheers,

 
Hi, Niall.
Niall Saunders said:
I cannot see how a Defect Map - in a general case - could also be made to cater for an image that has not yet been de-Bayered.
Why not? Why can't you generate a hot-pixel defect map from a CFA master dark? And why not create a dead-pixel defect map from a flat?

Also, split the CFA into four B/W images (RGGB) and forget about RGB. Think of them as four separate B/W CCDs.
That way we can process Canon images as B/W CCD data.
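
As a rough sketch of this idea (plain JavaScript, assuming an RGGB mosaic stored row-major in a Float32Array; this is not an actual PixInsight routine):

Code:
// Extract the four Bayer sub-planes of an RGGB mosaic as independent
// half-size B/W images, so each can be defect-mapped or calibrated on its
// own without mixing channels.
function splitCFA( cfa, width, height )
{
   let w2 = width >> 1, h2 = height >> 1;
   let planes = { R:  new Float32Array( w2*h2 ), G1: new Float32Array( w2*h2 ),
                  G2: new Float32Array( w2*h2 ), B:  new Float32Array( w2*h2 ) };
   for ( let y = 0; y < h2; ++y )
      for ( let x = 0; x < w2; ++x )
      {
         let i = y*w2 + x;
         planes.R [i] = cfa[(2*y    )*width + 2*x    ]; // top-left of each 2x2 cell
         planes.G1[i] = cfa[(2*y    )*width + 2*x + 1]; // top-right
         planes.G2[i] = cfa[(2*y + 1)*width + 2*x    ]; // bottom-left
         planes.B [i] = cfa[(2*y + 1)*width + 2*x + 1]; // bottom-right
      }
   return planes;
}

Each of the four planes can then be inspected, defect-mapped or calibrated on its own, with no channel mixing at all.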

Personally, I feel that the DefectMap tool should understand the CFA pattern, to avoid the problem of channel mixing at problematic pixels.
 
Hi Niall

I would say that you must apply DefectMap before registration, for two reasons:
1. Bad pixels are always located in the same places, so it makes no sense to identify them on the result of an integration. Of course, a lot of them will be rejected and gone... but the task is easier the other way around.
2. If you deBayer a CFA image, bad pixels will "grow", like a pyramid. So, unless you correct them early, the defect will spread to more pixels (see the toy sketch below).
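
As a toy illustration of point 2 (plain JavaScript, assuming an RGGB mosaic and a deliberately crude bilinear-style interpolation of the red channel, not any particular deBayer implementation):

Code:
// One dead red CFA sample contaminates a whole 3x3 block of interpolated
// red pixels when each output red value is taken as the average of the red
// CFA samples inside a 3x3 neighbourhood.
const W = 8, H = 8;
let cfa = new Float32Array( W*H ).fill( 0.5 ); // flat mosaic for clarity
cfa[4*W + 4] = 0;                              // one dead red sample at (4,4)

function isRed( x, y ) { return (x & 1) == 0 && (y & 1) == 0; } // RGGB

let affected = 0;
for ( let y = 1; y < H-1; ++y )
   for ( let x = 1; x < W-1; ++x )
   {
      let s = 0, n = 0;
      for ( let dy = -1; dy <= 1; ++dy )
         for ( let dx = -1; dx <= 1; ++dx )
            if ( isRed( x+dx, y+dy ) )
            {
               s += cfa[(y+dy)*W + x+dx];
               ++n;
            }
      if ( s/n < 0.5 ) // red estimate pulled down by the defect
         ++affected;
   }
console.log( affected + " interpolated red pixels affected by 1 dead sample" );
// Prints 9: the single defect has spread over a 3x3 block after interpolation.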


Now, about deBayering and registration... I think that a completely separate process should be made for registration/integration of CFA images, taking advantage of our knowledge of their physical properties. I think that DSS does something like that.
BTW, I'm reading a paper where the authors treat the images as continuous data, with the pixel values being just the coefficients of B-splines. This approach seems very interesting, and may be useful with this CFA data...
 
Personally, I feel that the DefectMap tool should understand the CFA pattern, to avoid the problem of channel mixing at problematic pixels.

Personally, I feel that the author of DefectMap should understand the CFA pattern to avoid problems :D
 
Talking seriously, I have this running in the background... Going to the next corresponding pixels (i.e. those that store the same channel data) is not hard. But since the sample must grow a lot to compensate, I'm not so sure that the resulting value will be a good one. Of course, it will be better than a dead pixel... but I want it to be as close as possible to the "reality".
I guess that maybe we should give the naive approach a try. After all, in the algorithm I try to keep the sample as small as I can (if you have not noticed, the size is calculated internally for each pixel, by counting how many good pixels are in the neighbourhood).
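
To make that a bit more concrete, here is a rough sketch of the adaptive sampling idea (my description in plain JavaScript, an assumption about the general behaviour rather than the module source; step would be 1 for mono or deBayered data and 2 to stay on the same CFA channel):

Code:
// For each bad pixel the window radius grows until a minimum number of
// good, same-channel samples is available; their median is the replacement.
function adaptiveReplace( img, map, width, height, step, minSamples )
{
   let out = Float32Array.from( img );
   for ( let y = 0; y < height; ++y )
      for ( let x = 0; x < width; ++x )
      {
         if ( map[y*width + x] != 0 )
            continue; // good pixel, nothing to do
         for ( let r = step; r <= 4*step; r += step ) // grow up to a fixed limit
         {
            let values = [];
            for ( let dy = -r; dy <= r; dy += step )
               for ( let dx = -r; dx <= r; dx += step )
               {
                  let xx = x + dx, yy = y + dy;
                  if ( xx < 0 || xx >= width || yy < 0 || yy >= height )
                     continue;
                  if ( map[yy*width + xx] != 0 ) // collect only good samples
                     values.push( img[yy*width + xx] );
               }
            if ( values.length >= minSamples ) // enough data: take the median
            {
               values.sort( (a, b) => a - b );
               out[y*width + x] = values[(values.length - 1) >> 1];
               break;
            }
         }
      }
   return out;
}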
 
I feel that the author of DefectMap should understand the CFA pattern to avoid problems

Again, is this a reason for requiring PixInsight to hold an internal definition of the CFA for all user images? (Easily changeable if/when the user then decides to use data from a different source, or if they change/upgrade their imager, for example)

And, if so, will PixInsight be able to cater for the CMYG arrays still used in many imagers?

Cheers,
 
Hi.
DefectMapExtractor v1.0
A simple script that generates a defect map according to the shadow and highlight clipping sliders. :D

Of course, if you don't like to use scripts, you can do it manually with PixelMath:
Code:
iif( (MasterDark > HotPixelThreshold) || (MasterDark < DeadPixelThreshold), 0, 1 )
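
The same idea as a plain JavaScript sketch over a row-major Float32Array (thresholds in the normalized [0,1] range; the function name is just illustrative):

Code:
// Build a defect map from a master dark: hot pixels are above the hot
// threshold, dead pixels are below the dead threshold.
function defectMapFromDark( masterDark, hotThreshold, deadThreshold )
{
   let map = new Float32Array( masterDark.length );
   for ( let i = 0; i < masterDark.length; ++i )
      map[i] = (masterDark[i] > hotThreshold || masterDark[i] < deadThreshold) ? 0 : 1;
   return map; // 0 = defective pixel, 1 = good pixel, as DefectMap expects
}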
 

Attachments

  • DefectMapExtractor_1.0.js
  • Capture.GIF
Again, excellent work Nikolay.

Some of these little scripts do actually deserve to be 'ported over' to PCL - just to make them even easier to use. Although, that said, I haven't tried looking at how PJSR scripts can now be made to 'live alongside' your other Favourites, in the Process Explorer window.

Maybe, for scripts like these, there really is no need for further sophistication.

Thanks again.

Cheers,
 