Hi John,
Nice improvement to the original script. I assume that by "Vicent's method of narrowband combination" you refer to
this article. In that article, I think Vicent made a remarkable contribution with the concept of the continuum map. Vicent's approach is based on synthesizing a broadband image from the narrowband data. This is an open line of development that we have not yet exploited; we have so many projects and, unfortunately, so little time and so few human resources...
Ha <- (Ha*RGB_bandwidth - RGB*Ha_bandwidth )/(RGB_bandwidth - Ha_bandwidth)
I'm not sure what the intent of this expression is. Essentially, the continuum map is the quotient between the broadband and narrowband images. This is basically a flat-fielding operation, where the narrowband image acts as a model of the attenuation applied by the narrowband filter. Put more simply, the continuum map allows us to isolate just the continuum emission in the image. So the continuum map in this case would be something simpler, such as:
CM = NR( k*Red / Ha )
where k acts as a compensation factor in case it is necessary (k=1 by default), and NR() is a noise reduction operator. Once we have the CM, we can synthesize a "clean" broadband channel using Vicent's iterative method, as described in the article; see Figure 8 for an example. The crux of this method is avoiding the mixture of narrowband and broadband data, which leads to unnecessary noise transfers.
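To make the CM = NR( k*Red / Ha ) expression concrete, here is a minimal numpy sketch. It assumes linear, registered, gradient-free images; the function name and the crude 3x3 mean filter standing in for NR() are illustrative only, not PixInsight's actual implementation.

```python
import numpy as np

def continuum_map(red, ha, k=1.0):
    """Continuum map as the quotient of broadband (red) and narrowband (Ha) data.

    This is essentially a flat-fielding operation: the Ha image acts as a
    model of the attenuation applied by the narrowband filter.
    """
    eps = 1e-10  # guard against division by zero in empty background pixels
    cm = k * red / np.maximum(ha, eps)
    # Crude 3x3 mean filter as a stand-in for the NR() noise reduction operator.
    h, w = cm.shape
    padded = np.pad(cm, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
```

For a pixel where the red flux is exactly twice the Ha flux, the map evaluates to 2, i.e. the factor by which the narrowband filter attenuates the continuum.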
Now, the problem with this method is that it is sound for Ha, since the red channel can serve as Ha's broadband counterpart on a physical basis, but what happens with OIII, SII, etc.? It remains unclear how to build a continuum map for these filters in practice; as I said above, this is an open line of development.
R <- R+(Ha-med(Ha))*HaMultiplier
This simply mixes the Ha and red data. Subtracting the median removes Ha's mean background, and multiplying by a scaling factor controls the amount of Ha that enters the mix. I don't know if anything else was intended, nor why the median is being subtracted this way.
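The mixing expression above is straightforward to write down in numpy; this is a direct transcription of the R <- R+(Ha-med(Ha))*HaMultiplier formula, with illustrative names, assuming linear registered images.

```python
import numpy as np

def mix_ha_into_red(R, Ha, multiplier):
    """Mix Ha data into the red channel.

    Subtracting the median removes Ha's mean background; the multiplier
    controls the amount of Ha that enters the mix.
    """
    return R + (Ha - np.median(Ha)) * multiplier
```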
Ha <- LinearFit(Ha with R)
This is equalizing the Ha and red images using a linear model, taking red as the reference. The linear model assumes that every pixel in both images can be expressed as:
v = B + k*S
where v is the pixel value, B is the mean background, k is a scaling factor and S is the pixel's signal. For correctly calibrated and gradient-free linear images, this is a good model.
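The v = B + k*S model can be fitted by ordinary least squares over all pixels. The sketch below is a simplified illustration of this kind of linear equalization, not PixInsight's actual LinearFit implementation, which uses a more robust fitting procedure.

```python
import numpy as np

def linear_fit(image, reference):
    """Equalize `image` to `reference` with the model v = B + k*S.

    Solves for the background offset B and scaling factor k by least
    squares, then applies them to `image`. Assumes both images are
    linear, registered, and gradient-free.
    """
    # Design matrix: one column of ones (for B), one column of pixel values (for k).
    A = np.stack([np.ones(image.size), image.ravel()], axis=1)
    (B, k), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return B + k * image
```

When the two images really are related by a single offset and scale, the fitted image matches the reference exactly up to noise.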
Despite these complexities, your version of Silvercup's original script is great and can be very useful. I encourage you to further develop it. Once we test it with 1.8 RC7, it would be a nice addition to the final 1.8.0 release. Just a few suggestions regarding the interface:
- Simplify/organize input controls. Instead of the current stack of "Source NB to ..." items, consider defining three GroupBox controls for the R, G, B narrowband images.
- I would use the label "Scale" instead of "Multiplication factor".
- When a narrowband component isn't selected, the corresponding numerical items should be disabled.
Also, I would rethink the suitability of ColorCalibration: for a narrowband combination, the broadband data should already be calibrated.