I'll try to answer your questions.
> Can PixInsight stack and blend these files? Or do you have to use DeepSkyStacker or Images Plus etc?
PixInsight can align and stack your images. The required tools are, respectively, StarAlignment and ImageIntegration. Honestly, I think both tools are state-of-the-art implementations of the best algorithms and techniques currently available to carry out these tasks.
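As an aside, the core of what an integration tool does is easy to picture: average the registered frames pixel by pixel while rejecting outliers such as cosmic-ray hits and satellite trails. Here is a tiny NumPy sketch of a sigma-clipped mean stack, purely to illustrate the idea, not ImageIntegration's actual implementation:

```python
import numpy as np

def sigma_clipped_stack(frames, kappa=3.0):
    """Average a list of registered frames pixel by pixel,
    rejecting values more than `kappa` sigmas from the median
    (a simple stand-in for ImageIntegration's rejection step)."""
    cube = np.stack(frames).astype(float)        # shape: (n, height, width)
    med = np.median(cube, axis=0)
    sigma = np.std(cube, axis=0)
    mask = np.abs(cube - med) <= kappa * sigma   # keep inliers only
    # Average the surviving pixels; guard against empty stacks.
    return np.sum(cube * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)
```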
The only thing PixInsight can't do, for now, is calibrate your images by applying flat frames, bias frames and dark frames. Actually, PixInsight can be used for calibration, but only through manual procedures that require much more work than automated systems (although they can be used to achieve more accurate results). As you probably know, we are working on a calibration tool for PixInsight. Hopefully a first version will be available very soon.
In the meantime I recommend the excellent DeepSkyStacker application for calibration of DSLR images. Of course, DSS can also align and stack your images if you prefer to use it for those tasks instead of PixInsight.
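For reference, the calibration arithmetic itself is simple. Here is a minimal NumPy sketch of standard light-frame calibration, assuming you have already built master bias, dark and flat frames; the names are my own, not an API of PixInsight or DSS:

```python
import numpy as np

def calibrate_light(light, master_bias, master_dark, master_flat):
    """Standard CCD/DSLR calibration: subtract bias and dark,
    then divide by the normalized flat to remove vignetting and
    dust shadows. All frames are float arrays of the same shape."""
    # Remove the fixed-pattern offsets. The master dark here is
    # assumed to be bias-subtracted and scaled to the light's exposure.
    corrected = light - master_bias - master_dark
    # Normalize the flat so its mean is 1, preserving the overall scale.
    flat = (master_flat - master_bias).astype(float)
    flat /= np.mean(flat)
    # Avoid division by zero in dead flat pixels.
    flat = np.where(flat > 0, flat, 1.0)
    return corrected / flat
```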
> Do you start with DBE first or levels?
You can do it either way, but my advice is to apply DBE while the image is linear, that is, before any nonlinear stretch with HistogramTransformation (levels? huh? what's that? ![Grin ;D](http://pixinsight.com/forum/Smileys/default/grin.gif))
> When will the histogram function come into play and how does the screen transfer function work to apply it to the image?
HistogramTransformation (HT hereafter) is the best tool to apply a nonlinear stretch. A nonlinear stretch is necessary because the raw linear data cannot be represented directly on display media. That's mainly because our vision system is nonlinear (and display devices are designed to mimic its response), and also due to the typical distribution of values in a linear deep-sky image, where nearly all the data are concentrated in a narrow range of the shadows: the image is inherently underexposed.
You apply a nonlinear stretch by dragging the midtones balance triangle control to the left on HT. In this way HT remaps all pixel values in the image by applying a nonlinear curve (you can see the curve drawn on HT's graph). The curve enhances the shadows and the midtones but does not saturate the highlights (if properly used, of course). After applying a nonlinear stretch in this way, the data are representable on display media, so you can see the image.
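If you're curious about the math behind that triangle, the curve HT applies is the midtones transfer function (MTF). Here is a small NumPy sketch of it, as an illustration of the shape of the stretch rather than of PixInsight's exact internal implementation:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: a rational curve that maps
    x = m to 0.5 while fixing 0 -> 0 and 1 -> 1. Dragging the
    midtones triangle to the left means choosing a small m,
    which brightens shadows and midtones without clipping."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Example: a faint pixel at 0.01 with midtones balance m = 0.05
# is lifted to ~0.16, while a highlight at 0.9 only moves to ~0.994.
print(mtf(0.05, np.array([0.0, 0.01, 0.5, 0.9, 1.0])))
```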
Now the problem is that there are some procedures and tools that work much better when applied to linear data. Deconvolution, for example, doesn't make any sense unless it is applied to linear data. Color calibration is also impossible with nonlinear data. Background modeling with DBE also works better with linear images, as happens with many wavelet-based processing techniques.
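To see why deconvolution needs linear data: convolution by the PSF is a linear operation on pixel values, and a nonlinear stretch destroys exactly that linearity. A quick NumPy check (my own illustration, not a PixInsight tool) shows that stretching and blurring do not commute, so a deconvolution model fitted after stretching is solving the wrong equation:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random(100)               # a linear "signal"
psf = np.array([0.25, 0.5, 0.25])     # a simple blur kernel

stretch = lambda x: np.sqrt(x)        # stand-in for a nonlinear stretch

blur_then_stretch = stretch(np.convolve(image, psf, mode="same"))
stretch_then_blur = np.convolve(stretch(image), psf, mode="same")

# The two results differ: the stretch does not commute with the
# (linear) PSF convolution, so deconvolving stretched data no
# longer inverts the blur that actually happened.
print(np.max(np.abs(blur_then_stretch - stretch_then_blur)))  # clearly nonzero
```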
But to apply those procedures, we need to see the image and what happens when we apply them. Here is where STF (Screen Transfer Function) comes into play. STF applies a nonlinear stretch (a complete histogram transformation, actually) to the screen rendition of the image, but it does not modify the actual image data in any way. For that reason, STF is extremely useful for working with linear images: with STF you can anticipate what will happen when you actually apply a nonlinear transformation, while you work on your linear image applying color calibration, wavelets, deconvolution, etc. STF is just a visualization aid, not a true processing tool.
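Conceptually, an STF is the same kind of midtones stretch applied to a throwaway display copy. A minimal sketch of the idea, with my own hypothetical names rather than PixInsight's API:

```python
import numpy as np

def mtf(m, x):
    # Same midtones transfer function as in the sketch above.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def screen_render(linear_data, m=0.05):
    """Build an 8-bit preview for the screen without modifying
    the linear data: stretch a copy, quantize it, and return it.
    The caller keeps processing the original linear array."""
    preview = mtf(m, np.asarray(linear_data, dtype=float))
    return np.clip(preview * 255.0, 0, 255).astype(np.uint8)
```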
Hope this clarifies things a little.