fredvanner
The supported algorithms all have issues for astronomical use:
bilinear: interpolation smooths raw data, suppressing local detail.
VNG: fine for photography because it preserves discontinuous boundaries, but not good for astronomy - stars are surrounded by a discontinuity, so VNG smooths across the interior of stars, flattening their profile (and thus modifying their PSF - e.g. for convolution).
superpixel: discards the higher resolution available in the green channel data and, worse, averages the green channel, losing the local luminosity maxima that contribute to detail (I think max(G1, G2) would be better for astronomy).
All three methods end up smoothing the valuable, higher resolution green channel luminosity data.
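To make the max(G1, G2) suggestion concrete, here is a minimal NumPy sketch of standard 2x2 superpixel binning with the green handling as a switch. The function name and the `green` parameter are my own illustrative choices, not PixInsight's API:

```python
import numpy as np

def superpixel(raw, green="mean"):
    """2x2 superpixel debayer of an RGGB mosaic (half-size output).
    green="mean": average G1 and G2 (the usual behaviour criticised above).
    green="max":  keep max(G1, G2), preserving local green maxima.
    """
    r  = raw[0::2, 0::2]   # top-left of each 2x2 cell
    g1 = raw[0::2, 1::2]   # top-right green
    g2 = raw[1::2, 0::2]   # bottom-left green
    b  = raw[1::2, 1::2]   # bottom-right of each cell
    if green == "max":
        g = np.maximum(g1, g2)
    else:
        g = (g1.astype(np.float64) + g2) / 2.0
    return np.dstack([r, g, b])
```

With `green="max"` a bright green sample is carried through unchanged instead of being diluted by its dimmer partner.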
I have tried an alternative artefact-free method. It is basically a sliding superpixel window, but selecting only one of the two possible green values. Thus each pixel is constructed from exactly one R, one G and one B from the original RGGB image. Each R and B is used in four separate RGB pixels; each G is used in two RGB pixels. This process has the advantage that every RGB value corresponds to an original image value (no averaging / smoothing / interpolation, and all RGB values, including maxima, are preserved), and that the higher resolution of the green channel is not lost in the RGB image.
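This isn't the author's PixelMath, but the per-pixel selection described above can be sketched in NumPy. This is my reading of the scheme: R and B are replicated to all four pixels of their 2x2 cell (each used in four RGB pixels), and each output pixel takes the green sample from its own row of the cell (each G used in two pixels). The function name and the row-based green choice are assumptions:

```python
import numpy as np

def debayer_one_sample(raw):
    """One-R, one-G, one-B per pixel debayer of an RGGB mosaic:
        R  G1
        G2 B
    Every output value is an original sensor value - no averaging
    or interpolation - and the output keeps full resolution.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    rgb = np.empty((h, w, 3), dtype=raw.dtype)
    # Replicate each cell's R (top-left) and B (bottom-right) to the
    # four pixels of that cell: each R and B used in four RGB pixels.
    rgb[..., 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, axis=0), 2, axis=1)
    # Green: even output rows take G1, odd rows take G2, so each green
    # sample serves exactly two output pixels in its own row.
    rgb[0::2, :, 1] = np.repeat(raw[0::2, 1::2], 2, axis=1)
    rgb[1::2, :, 1] = np.repeat(raw[1::2, 0::2], 2, axis=1)
    return rgb
```

A different but equally valid selection rule (e.g. column-based green choice) would satisfy the same usage counts; the point is only that no value is ever synthesised.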
I would attach an example, but the files are too large (and compressed versions, such as jpg, would defeat the object); however I attach a jpg of a sample debayered image (M51 240s ASI183MC Pro, dark subtracted).
Currently, I have implemented this with PixelMath, but as a complete PixInsight beginner, I can't work out how to put it together as a process that I can apply to lists of images. Basic ProcessContainer approaches stall because I can't see how to create a new blank RGB view for the RGB merge; more flexible scripting approaches stall because I can find no documentation for the js API. Any help / pointers to documentation would be much appreciated.