Fixing Bad Pixel Columns

I would like to bring this thread back to life. I too suffer from bad columns in my 16803 CCD. The cosmetic correction function does not work well, and in general I don't want any pixel-value replacement based on nearby pixels. The reason is that I hope the final stacked image can be scientifically accurate: only good data enter the final stack, and no pixel values are invented during the process.

My initial thought was to find a way for pixel rejection to simply reject those pixels. Without any cosmetic correction, this doesn't work: the values in the bad columns aren't different enough from the neighboring good pixels, so they don't get rejected.

My next attempt was to use PixelMath to replace the bad columns with 1.0, thinking I could then use the clip-range parameters in ImageIntegration to reject those 1.0 pixels. This doesn't work either: after registration, the 1.0 pixels get mixed with neighboring pixels by the interpolation, and the resulting values can be anywhere between 0 and 1. So this attempt fails as well.
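A minimal NumPy sketch of why the 1.0 sentinel fails (illustrative only; PixInsight's default Lanczos kernels would spread the sentinel even wider than the simple linear interpolation used here):

```python
import numpy as np

# A 1-D slice through a flagged column: good pixels around a 1.0 sentinel.
row = np.array([0.10, 0.12, 1.00, 0.11, 0.09])

# Registration with a subpixel shift interpolates between neighbors.
# Simple linear interpolation at a 0.4-pixel shift:
shift = 0.4
shifted = (1 - shift) * row[:-1] + shift * row[1:]

# The sentinel is diluted into intermediate values (here ~0.47 and ~0.64),
# so a clip-range threshold set at 1.0 no longer catches it.
print(shifted)
```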

Any suggestions?

Hi Wei-Hao,

You may be interested in the following article:


The LinearDefectDetection and LinearPatternSubtraction scripts are available on the standard PixInsight distribution. They can also be used from the WBPP script. The article documents these scripts and describes the implemented algorithms.
 
Hi Juan,

Thank you. That appears better than cosmetic correction. I will give it a try.

Cheers,
Wei-Hao
 
Hi Juan,

I gave it a try, but it doesn't seem to work. I tried it on calibrated images (top two windows in the attached images) and raw images (bottom windows). The left windows are before LPS and the right ones are after.

If my understanding of this function is correct, it tries to subtract something from the column. That doesn't really fix the column: it is completely dead, and the CCD only generated useless pixel values along it, so subtracting something from it still leaves only useless pixel values.

What I am looking for is a way for image integration to completely bypass those pixels and let the dithered images fill in the column. I don't know how to achieve this in PI.

[Attachment: Screen Shot 2022-06-07 at 12.13.10 PM.jpg]
 
You will need to use Nearest Neighbor for your registration. This will help keep the columns intact in the subframes: they might be shifted by the registration, but the values will not be interpolated, so the rejection will be cleaner. You will need to use an appropriate rejection method. Normalization will also be important, since sky values and Poisson noise will surely be an issue. Finally, the combination of a low rejection threshold (just reject everything below a certain value) and large-scale rejection (to "grow" your rejected pixels) can probably do what you want.
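A toy NumPy sketch of the suggestion above, assuming a dead column that reads near zero and purely integer dither offsets (so a nearest-neighbor "registration" is an exact shift, modeled here with `np.roll`):

```python
import numpy as np

# Three dithered frames of one image row; sensor column 2 is dead
# (reads 0.0). Dither offsets are 0, 1 and 2 pixels.
sensor_rows = [np.array([0.10, 0.11, 0.00, 0.12, 0.10]) for _ in range(3)]

# Nearest-neighbor registration: an integer shift, no interpolation,
# so the dead pixel stays exactly 0.0, merely relocated.
registered = [np.roll(r, -k) for k, r in enumerate(sensor_rows)]

stack = np.stack(registered)
rejected = stack < 0.05                      # low-threshold rejection mask
result = np.where(rejected, np.nan, stack)
integrated = np.nanmean(result, axis=0)      # dithered good pixels fill in
```

Because the dead pixel lands at a different output column in each frame, every column keeps at least two good samples and the integration has no holes.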
 

The problem with this, especially for wide-field images, is that nearest neighbor lacks subpixel accuracy. If the columns are completely dead the only good solution would require writing a script to replace the rotated/translated columns with white pixels. Otherwise our LinearDefectDetection/LinearPatternSubtraction scripts should work with the correct parameters.
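A rough sketch of the kind of script mentioned above, in NumPy. Everything here is hypothetical: the helper name, the `transform` callback, and the 2x2 flagging neighborhood (a real script would use the support of the actual interpolation kernel):

```python
import numpy as np

def flag_registered_columns(img, dead_cols, transform, value=1.0):
    """Paint the images of dead sensor columns, as mapped through the
    registration transform, with a sentinel value (hypothetical helper).

    img        -- registered frame (2-D float array)
    dead_cols  -- x-coordinates of dead columns on the sensor
    transform  -- function mapping sensor (x, y) -> registered (x, y)
    """
    h, w = img.shape
    out = img.copy()
    for x in dead_cols:
        for y in range(h):
            tx, ty = transform(x, y)
            # Flag every output pixel the interpolation could have touched;
            # here, the 2x2 neighborhood around the mapped position.
            for xi in (int(np.floor(tx)), int(np.ceil(tx))):
                for yi in (int(np.floor(ty)), int(np.ceil(ty))):
                    if 0 <= xi < w and 0 <= yi < h:
                        out[yi, xi] = value
    return out
```

Flagging after registration, in output coordinates, is what avoids the dilution problem: the sentinel is painted over the already-interpolated pixels instead of being fed through the interpolation.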
 

I agree, but with enough frames things tend to "average out" during integration (via rejection). The funny thing is that many interpolation methods (though not all to the same degree) blur a bit of spatial information during registration. So although NN doesn't have the accuracy, sometimes it is still equivalent to some interpolated results as measured in the integrated image. Is this fair?

-adam
 
Hi Juan,

I also suffer from defective columns with my sensors, and it is really a headache to handle them properly.

I regularly use LinearDefectDetection and LinearPatternSubtraction scripts and sometimes I have to make adjustments in parameters to obtain the best result.

In any case, I follow the indications of Vicent in the referenced article and first generate a reference image from the calibrated but unaligned frames to produce my defect list.

In the present implementation of LinearPatternSubtraction in WBPP it is not possible to load such a list.

Could it be possible, please, to add this possibility?

Jordi

PS Thank you very much Roberto for the great and continuous improvements in the script ;).
 
Thanks for the discussion. I also do not think nearest neighbor interpolation will work, unless the image is many times oversampled.

I am not sure if I fully understand what Juan described in reply #25, but in some sense it sounds like what I do with my professional data. There, we often employ bad-pixel masks that indicate which pixels are dead. Before stacking, I assign the IEEE floating-point value NaN to those bad pixels. NaN is very destructive: unlike 1.0, which can be diluted by the interpolation during registration, NaN persists and propagates through all interpolations. (Any new pixel that is an interpolation among NaN and normal pixels will be NaN.)

Then, during stacking, those NaNs can easily be rejected without setting a numerical threshold that could also reject good data. This ensures that the bad pixels are always rejected and that their effects do not propagate beyond their immediate neighbors. Those neighboring pixels will also be rejected, so some good data are lost, unfortunately. But the real advantages are that no bad data remain and no fake data need to be created: in the final stacked image, all data are real and good. I wonder if such a workflow could be implemented.
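The NaN workflow described above can be sketched in a few lines of NumPy (illustrative only; the integer `np.roll` dither and the frame sizes are assumptions, and PixInsight itself would need native support for NaN rejection):

```python
import numpy as np

# Three frames; sensor column 2 is dead.
frames = [np.full((4, 4), 0.1) for _ in range(3)]
bad_mask = np.zeros((4, 4), dtype=bool)
bad_mask[:, 2] = True

for f in frames:
    f[bad_mask] = np.nan          # flag bad pixels before registration

# NaN survives interpolation: any weighted sum involving NaN is NaN, so
# the flag cannot be diluted the way a 1.0 sentinel is. (Registration is
# modeled here as an integer dither shift just to vary the coverage.)
registered = [np.roll(f, -k, axis=1) for k, f in enumerate(frames)]

stack = np.stack(registered)
integrated = np.nanmean(stack, axis=0)   # NaNs are simply ignored
```

No numerical threshold is involved, so good pixels are never rejected by mistake; the cost, as noted above, is that every pixel whose interpolation kernel touches a NaN is lost too.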
 