It would be nice if PI could warn you when a process applied to a preview (one that does not cover the entire image) will not be representative of what will happen to the entire image.
Currently you can expect this kind of problem with just a few processes in PixInsight (mainly multiscale algorithms):
- The HDRWaveletTransform algorithm requires the whole existing range of brightness values to be well represented on the preview. By well represented I mean that the entire range must be represented at all the scales involved in the HDRW transform (depending on the number of wavelet layers selected; see the rough scale sketch at the end of this post). In practice, this means that unless you have a relatively small galaxy surrounded by free sky background, your preview must be complete. This is by far the worst case of a non-previewable algorithm.
- Regularized deconvolution can cause similar problems, although they are usually much easier to avoid or control, mainly because wavelet regularization acts at small scales.
- The wavelet-based noise reduction algorithm implemented in ATrousWaveletTransform can also be slightly inaccurate for the same reason, but usually the differences are negligible.
- UnsharpMask and other convolution-based algorithms, when applied with very large filters (for example, from 100 to 250 pixels), can also give some problems with relatively small previews. In this case, the reason is that fixing boundary artifacts with very large convolution filters may lead to inaccurate results. In PixInsight/PCL, boundary artifacts are fixed by padding with mirrored pixels from the borders of the image. When the padded regions are comparatively large, they may be very different from what actually exists around the preview in the whole image (a small sketch of this padding scheme follows this list).
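To make the boundary issue more concrete, here is a minimal, self-contained sketch of mirror padding in one dimension. It is purely illustrative (the function and variable names are mine, not PCL's): out-of-range samples needed by a large convolution filter are taken from reflected copies of the preview's own border pixels, which in general look nothing like the real image data that surrounds the preview on the whole image.

```cpp
// Illustrative 1-D sketch of mirror ("reflect") boundary padding.
// Not PCL code; names and values are made up for the example.
#include <cstdio>
#include <vector>

// Return the sample at a possibly out-of-range index i, reflecting
// out-of-range indices back into [0, n):  ... 3 2 1 | 0 1 2 3 ...
static float MirroredSample( const std::vector<float>& v, int i )
{
   int n = int( v.size() );
   if ( i < 0 )
      i = -i;              // reflect across the left border
   if ( i >= n )
      i = 2*(n - 1) - i;   // reflect across the right border
   return v[i];
}

int main()
{
   // A tiny "preview": 8 pixels cut out of a larger image.
   std::vector<float> preview = { 0.10f, 0.12f, 0.15f, 0.20f,
                                  0.30f, 0.50f, 0.80f, 0.95f };

   // With a convolution filter of radius 6, roughly half of the samples
   // needed near the left border come from the mirrored padding, not
   // from real image data:
   const int radius = 6;
   for ( int i = -radius; i <= radius; ++i )
      std::printf( "x=%2d  value=%.2f%s\n", i, MirroredSample( preview, i ),
                   (i < 0 || i >= int( preview.size() )) ? "  (padded)" : "" );
   return 0;
}
```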
Other than these, there should be no problems with partial previews.
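As a rough, purely illustrative back-of-the-envelope note on the HDRWaveletTransform case above: with a dyadic scaling sequence, the characteristic scale of wavelet layer j is on the order of 2^(j-1) pixels, so the largest layers describe structures that are simply too big to fit, together with their surroundings, inside a small preview. The layer count below is a made-up example, not a recommended setting.

```cpp
// Rough sketch of how dyadic wavelet layer scales grow.
// The layer count is hypothetical; this is not HDRWT code.
#include <cstdio>

int main()
{
   const int numberOfLayers = 8; // made-up example value
   for ( int j = 1; j <= numberOfLayers; ++j )
      std::printf( "layer %d -> characteristic scale of about %d pixels\n",
                   j, 1 << (j - 1) );
   // With 8 layers the transform works with structures at roughly the
   // 128-pixel scale; a preview not much larger than that cannot represent
   // the full range of brightness values at those scales, so its result
   // will differ from the whole-image result.
   return 0;
}
```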