What I've found for my images, and this is personal taste, is that a masked stretch with a background reference image and a target background of 0.09 usually works well. If the result looks too different from what I'd like, I experiment between that and 0.14. Either way, I usually follow up with a curves stretch to increase contrast, which is also what I do after using the Histogram Stretch tool.

For my luminance, after calibration, alignment, stacking, crop, and DBE as needed (almost always), I'll do a masked stretch with a sampled background preview void of stars and the object, then HDRW, maybe an unsharp mask, a masked local histogram stretch, and a final curves adjustment. If I have enough data I usually don't need any noise reduction (in my opinion), but even so I would combine with my RGB image first and see how the result looks. What I have found in PI is that to get an appealing L+RGB image (again, personal taste) I usually need a darker luminance background than I would normally go for. If and when we get layering in PI this would be easier, since you could just adjust the opacity of each layer. After the final image is made I'll look at possible noise reduction. I try to stay away from that step because I usually see it blurring the sharper details, but that may just be my use of noise reduction and I'm doing it wrong.
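To give a feel for what that 0.09 vs. 0.14 target background means: MaskedStretch-style tools drive the median of your background sample toward the target using the midtones transfer function. The MTF formula below is the standard one; the random "image" and helper names are just made up for illustration, not PixInsight's actual implementation.

```python
import numpy as np

def mtf(x, m):
    # standard midtones transfer function: maps x through midtones balance m
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def midtones_for_target(bg, target):
    # solve mtf(bg, m) == target for m, so the background median
    # lands on the chosen target (e.g. 0.09)
    return bg * (target - 1.0) / (2.0 * bg * target - bg - target)

# hypothetical linear luminance data in [0, 1] with a faint background
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.02, size=(64, 64))
bg_median = float(np.median(img))

m = midtones_for_target(bg_median, 0.09)
stretched = mtf(img, m)
# the stretched background median sits near the 0.09 target
print(round(float(np.median(stretched)), 3))
```

A higher target like 0.14 gives a brighter, lower-contrast background, which is why a curves stretch afterward to pull contrast back is a natural follow-up.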
The only real difference when creating the RGB image is that I add background neutralization and color calibration after DBE. Almost everything else follows the same path, except that I skip HDRW on the RGB data unless there is no luminance for that image. Then I do a dynamic alignment (I bin my RGB 2x2 and my Lum 1x1) and use LRGBCombination to create the L+RGB image, followed by noise reduction if needed.
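The L+RGB step boils down to swapping the processed luminance into the color data. PixInsight's LRGBCombination actually works in CIE L*a*b*; the ratio method below is a deliberately simplified stand-in, and the nearest-neighbour upsample is just a toy substitute for the real alignment/resampling of the 2x2-binned RGB, with all data invented for the sketch.

```python
import numpy as np

def lrgb_combine(L, rgb):
    # naive luminance substitution: rescale each RGB pixel so its
    # brightness (channel mean) matches the processed luminance L
    lum = rgb.mean(axis=-1, keepdims=True)
    ratio = np.divide(L[..., None], lum, out=np.ones_like(lum), where=lum > 0)
    return np.clip(rgb * ratio, 0.0, 1.0)

rng = np.random.default_rng(1)
# hypothetical 1x1-binned luminance at 128x128
L = rng.uniform(0.0, 1.0, size=(128, 128))
# hypothetical 2x2-binned RGB at 64x64, upsampled to match L
rgb_small = rng.uniform(0.0, 0.5, size=(64, 64, 3))
rgb = rgb_small.repeat(2, axis=0).repeat(2, axis=1)

out = lrgb_combine(L, rgb)
```

This also shows why a darker luminance background helps: wherever L is low, the color data gets scaled down with it, so a bright L background washes out the chrominance.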
But then again, there are far better PI processors here than me; this is just my basic routine.
-Steve