well i have not read it in a long time, but i believe superpixel is not interpolating pixels at all - it just picks the red pixel out of each 2x2 cell of the bayer matrix and assigns it to the red channel, averages the two greens together for the green channel, and takes the blue pixel for blue.
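in numpy terms it amounts to something like this - a rough sketch of the idea assuming an RGGB layout and even image dimensions, not pixinsight's actual implementation:

```python
import numpy as np

def superpixel_debayer(cfa: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB cell into one RGB pixel - no interpolation."""
    r  = cfa[0::2, 0::2]          # red sites
    g1 = cfa[0::2, 1::2]          # greens on the red rows
    g2 = cfa[1::2, 0::2]          # greens on the blue rows
    b  = cfa[1::2, 1::2]          # blue sites
    return np.dstack([r, (g1 + g2) / 2.0, b])   # half-size RGB
```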
that's why the image comes out smaller (half the resolution) than with the other debayering algorithms - all the rest of them try to infer what the missing red, green and blue values would look like at each pixel. bilinear is really naive: it just averages the neighboring pixels of the same color to come up with the missing values. i think most of the rest of the debayering algorithms are variations on this theme - smarter weightings of the neighboring pixels based on features of the scene.
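a rough sketch of the bilinear idea, again assuming RGGB - real implementations handle edges and rounding more carefully, but the neighbor-averaging is the whole trick:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_debayer(cfa: np.ndarray) -> np.ndarray:
    """Full-resolution RGB by averaging same-color neighbors (RGGB assumed)."""
    h, w = cfa.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = [
        (ys % 2 == 0) & (xs % 2 == 0),   # red sites
        (ys % 2) != (xs % 2),            # green sites
        (ys % 2 == 1) & (xs % 2 == 1),   # blue sites
    ]
    # these weights reduce to the usual 2- or 4-neighbor averages
    # once they're normalized by the mask below
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    planes = []
    for m in masks:
        known = np.where(m, cfa, 0.0)
        num = convolve(known, kernel, mode="mirror")
        den = convolve(m.astype(float), kernel, mode="mirror")
        # keep the measured samples as-is, interpolate only the missing ones
        planes.append(np.where(m, cfa, num / den))
    return np.dstack(planes)
```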
the ones you'd have to watch out for are the ones that try to extract luminance from the bayer matrix and then use that as a guide for reconstructing the color channels. in a normal terrestrial image the green pixels carry the bulk of the luminance signal, but in an Ha image the green channel is going to be almost completely noise, so clearly that will lead to bad things happening to the reconstructed red channel.
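to put a rough number on it, the standard Rec.601 weights put green at nearly 60% of a luminance estimate (the exact weighting depends on the algorithm, but the proportions are similar):

```python
# Rec.601 luma weights, just to show how much of a luminance estimate
# rides on green; actual debayer algorithms may weight things differently
def luma(r: float, g: float, b: float) -> float:
    return 0.299 * r + 0.587 * g + 0.114 * b

# on a narrowband Ha frame the real signal lives in r, so roughly 70% of
# this "luminance" (the g and b terms) is just noise
```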
i think you can also use Nikolay's SplitCFA to get your red channel data. i have used a PixelMath version of his process to grab the two individual green channels from the bayer matrix. i was curious whether integrating them separately would lead to better results, but as far as i can tell it was not worth the trouble.
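in numpy terms that pixel math boils down to something like this (a sketch of the idea, not Nikolay's actual SplitCFA code, and again assuming RGGB):

```python
import numpy as np

def split_greens(cfa: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Pull the two green CFA planes out separately (RGGB assumed)."""
    g1 = cfa[0::2, 1::2]   # greens sharing rows with red
    g2 = cfa[1::2, 0::2]   # greens sharing rows with blue
    return g1, g2          # integrate these separately if you're curious
```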
i'd have to go read the VNG papers or code again to really confirm that the channel data is kept separate, but i remember looking at it years ago and coming to that conclusion.
rob