So, creating master mosaics of each channel and then combining them as an LRGB and processing from there.
That can work. However, there are some caveats. Field curvature and other field distortions can be problematic, especially with wide-field images. StarAlignment has limited capabilities to correct for local distortions because it uses triangle similarity to find star pair matches between the images being aligned. If you create separate mosaics for the individual RGB channels, there can be problems co-registering the three mosaics due to slightly different distortions, especially near the overlapping areas. If this happens, you'll need to align the three mosaics with DynamicAlignment.
To minimize those risks and save two mosaic building steps, I'd implement the following workflow:
- Register the L, R, G and B images for each mosaic panel, using StarAlignment in normal "Register/Match" mode. Now we have four grayscale images for each panel.
- Compose an RGB image for each mosaic panel (using ChannelCombination). Now we have two images for each panel: one is RGB and the other is L (grayscale).
- Remove any residual gradients on RGB and L for each mosaic panel. This step is of crucial importance to achieve a seamless mosaic. The better you do here, the better the final result.
- Build two mosaics with StarAlignment: one for RGB and a second one for L. Be sure to enable the frame adaptation feature in both cases. Now we have two "big" images: RGB and L. Note that both are linear images.
- Now we must register RGB and L, again with StarAlignment. Here we may have problems if there are differing local distortions, especially for wide-field images. However, by building only two separate mosaics instead of four, we have minimized the risk and, in turn, simplified the whole process. If RGB and L cannot be registered accurately at some locations of the mosaic (especially near mosaic seams), then DynamicAlignment must be used instead of StarAlignment. To detect possible registration errors, simply compute the absolute difference between the two registered images with PixelMath (the expression is: A -- B) and carefully inspect the result; see the sketch after this group of steps. If you have to use DynamicAlignment, don't be scared by the fact that it's a semi-manual process; it is very easy to use and provides extremely accurate results in a few minutes.
- If you want to process the linear images (for example, to apply deconvolution to the L image), do it now or forever hold your peace.
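
For convenience, here is a rough PJSR (JavaScript) sketch of that registration check. The identifiers "RGB", "L" and "reg_check" are just assumptions; substitute the actual identifiers of your registered mosaics, and verify the parameter names against the source you get by dragging a PixelMath icon to the workspace:

// Registration check between the registered RGB and L mosaics.
// Assumption: the two images are open with identifiers "RGB" and "L".
// The -- PixelMath operator is the absolute difference of its operands.
var P = new PixelMath;
P.expression = "RGB -- L";   // bright structures reveal registration residuals
P.useSingleExpression = true;
P.createNewImage = true;     // write the result to a new image,
P.newImageId = "reg_check";  // so the originals are left untouched
P.showNewImage = true;
P.executeOn( ImageWindow.windowById( "RGB" ).mainView );

Inspect reg_check with a hard screen stretch; anything other than noise near the seams or around stars points to locations where DynamicAlignment is needed.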

- Now we have two registered (and possibly processed) linear images: RGB and L. Time to perform the LRGB combination. LRGB requires nonlinear (stretched) images, so you must apply the initial nonlinear histogram transformations to RGB and L at this point. Adjust the L image first to the desired brightness and contrast. Then try to match the overall illumination of L when you transform RGB. Do it roughly by eye using the CIE L* display mode (Shift+Ctrl+L, or Shift+Cmd+L on the Mac). Don't try to do particularly accurate work here; we'll do much better in the next steps.
- Extract the CIE L* component of RGB with the ChannelExtraction tool (select the CIE L*a*b* space, uncheck a* and b*, and apply to RGB).
- Open the LinearFit tool (ColorCalibration category) and select the L image as the reference image. Apply to the L* component of RGB that you have extracted in the previous step.
- Reinsert the fitted L* into the RGB image with the ChannelCombination tool (again using the CIE L*a*b* space).
- Now your RGB and L images have been matched very accurately. Use the LRGBCombination tool with them. You shouldn't change the luminance transfer function or the channel weights, as LinearFit has already done the matching job much better than anything you could do manually. (A scripted sketch of this matching sequence follows this list.)
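
If you prefer to script the last matching steps, here is a minimal PJSR sketch. Again, the identifiers "RGB", "L" and "RGB_L" are assumptions, and the parameter names follow the form of the scripts generated when you drag each process icon to the workspace, so check them against your own generated source:

// 1. Extract the CIE L* component of RGB.
var CE = new ChannelExtraction;
CE.colorSpace = ChannelExtraction.prototype.CIELab;
CE.channels = [ // enabled, image id
   [ true,  "RGB_L" ],   // L*
   [ false, "" ],        // a*
   [ false, "" ]         // b*
];
CE.executeOn( ImageWindow.windowById( "RGB" ).mainView );

// 2. Fit the extracted L* to the L mosaic.
var LF = new LinearFit;
LF.referenceViewId = "L";
LF.executeOn( ImageWindow.windowById( "RGB_L" ).mainView );

// 3. Reinsert the fitted L*, replacing only the L* channel of RGB.
var CC = new ChannelCombination;
CC.colorSpace = ChannelCombination.prototype.CIELab;
CC.channels = [ // enabled, source image id
   [ true,  "RGB_L" ],
   [ false, "" ],
   [ false, "" ]
];
CC.executeOn( ImageWindow.windowById( "RGB" ).mainView );

// 4. Finally, apply LRGBCombination to RGB with L as the luminance source,
//    leaving the transfer function and channel weights at their defaults.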
At this point, you have an optimally matched LRGB image, and the fun starts. I have borrowed the last steps to match RGB and L from another thread, where this method has been reported to give excellent results.
Let us know how it goes.