Hi Jules,
Georg has nailed it. The basic idea is that we cannot do miracles (well, we actually do, but don't tell anyone): if there are no background pixels in your image, we cannot model the background.
However in the image you've posted you have many 'relatively free' background areas to generate a
reasonable model of your background IMO. With 'relatively free' I mean that these areas can be considered as background to the depth you've achieved in your image, or in other terms, there are no significant structures recorded on these areas. To use DBE with this image, you shouldn't place more that 10 - 20 samples in total. Then we have the upper left corner where there are definitely no free background regions, but DBE should be able to extrapolate appropriate values. As noted, the generated background model won't be perfect but it can be reasonably good. If you have gradients to fix it will be better than nothing, anyway.
N.B.: I have to fix that broken 'Auto Clip Setup' button on HistogramTransformation in Mac OS X PI versions!
And what happens if you apply DBE on separate channels, which may have regions without nebulosity?
You can try, and sometimes that helps, but DBE already works on a per-channel basis: it builds three separate models, one for each color channel. So in general you'll get quite similar results.
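The per-channel behavior can be sketched like this. Again a toy illustration, not DBE itself: the "model" here is just a flat pedestal per channel (the median of the sampled pixels), which is enough to show that each channel is corrected independently.

```python
import numpy as np

def correct_per_channel(rgb, samples):
    """Toy per-channel correction (assumption, not PixInsight code):
    each channel gets its own independent background model."""
    corrected = rgb.astype(float).copy()
    for c in range(rgb.shape[2]):               # one model per channel
        vals = [rgb[y, x, c] for x, y in samples]
        corrected[..., c] -= np.median(vals)    # flat pedestal model
    return corrected

# Synthetic RGB frame where each channel has a different background level.
rgb = np.zeros((8, 8, 3))
rgb[..., 0] += 0.2   # red background is strongest
rgb[..., 1] += 0.1
rgb[..., 2] += 0.05
samples = [(1, 1), (6, 6), (1, 6), (6, 1)]
out = correct_per_channel(rgb, samples)
```

Since the channels are modeled independently here, splitting the image into R, G and B first and running the correction on each would give the same result, which is why separating channels before DBE rarely changes much.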