Hello Alejandro,
Thanks for your comment.
Yes, maybe I'm not applying BN/CC correctly. See the attached example. The upper right picture shows the result after traditional BN/CC followed by MaskedStretch; the lower right shows my described method of transferring the STF to HT (keeping the median), then MaskedStretch.
The BN/CC result looks "wrong" to me, with a green/brownish color cast, and it is overall very sensitive to where I place the black and white reference previews.
The STF/HT method, in contrast, doesn't need any user guidance -- why should I "hint" to PixInsight what I assume should be black when the STF in full auto mode does a better job than I do?
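Just to make the STF-to-HT idea concrete, here is a minimal sketch (plain Python/NumPy rather than PJSR, and not PixInsight's actual implementation): the midtones balance is chosen so that the midtones transfer function maps the image median onto a fixed target background. The 0.25 target and the omitted shadows clipping are my assumptions for illustration.

```python
import numpy as np

def mtf(m, x):
    # PixInsight-style midtones transfer function; MTF(m, m) = 0.5.
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def stf_like_stretch(img, target_background=0.25):
    """Stretch a linear image so its median lands on target_background.

    The midtones balance m with MTF(m, median) = target is simply
    MTF(target, median); shadows/highlights clipping is left at 0/1
    here (an assumption -- the real auto-STF also clips shadows).
    """
    img = np.clip(img, 0.0, 1.0)
    m = mtf(target_background, np.median(img))
    return mtf(m, img)
```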
My wish for linear data processing is that ideally there should be no user interaction at all... let's look at what I normally do in the linear stage:
- background extraction: none is necessary if the data were taken under good skies and calibrated with proper flats; otherwise ABE with a reduced polynomial degree (2) -- see the first sketch after this list. I almost never use DBE with manually placed samples -- needing it is usually an indicator that the flats are not correct or that stray light entered the optical system
- background neutralization and color calibration -- discussed in this thread
- deconvolution: this could be highly automated. Why do I have to pick 30 or more stars manually, generate masks with standard procedures, etc.? I would rather enter a high-level command like "reduce the FWHM from 2.5 to 2 pixels" or so (second sketch below)... :)
- denoising: for me this is often an endless parameter-tweaking, trial-and-error procedure in PixInsight. I would prefer, e.g., an Anscombe transformation followed by wavelet-based denoising driven by only one parameter (sigma) -- see the last sketch below.
etc. etc.
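A toy illustration of what a reduced-degree (2) background model does (this is not ABE itself, just an assumed least-squares sketch): sample the image on a coarse grid, take a low percentile per cell as a crude background estimate, fit a degree-2 polynomial surface, and subtract it.

```python
import numpy as np

def fit_background_deg2(img, grid=32):
    """Illustrative degree-2 polynomial background model (not ABE itself).

    Fits z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by least squares to
    per-cell low-percentile samples and returns the background surface.
    """
    h, w = img.shape
    xs, ys, zs = [], [], []
    for y0 in range(0, h, grid):
        for x0 in range(0, w, grid):
            cell = img[y0:y0 + grid, x0:x0 + grid]
            zs.append(np.percentile(cell, 20))   # assume faint pixels ~ background
            xs.append(x0 + cell.shape[1] / 2)
            ys.append(y0 + cell.shape[0] / 2)
    xs, ys, zs = map(np.asarray, (xs, ys, zs))
    A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    yy, xx = np.mgrid[0:h, 0:w]
    B = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel(),
                         xx.ravel()**2, (xx * yy).ravel(), yy.ravel()**2])
    return (B @ coef).reshape(h, w)

# usage sketch: corrected = img - fit_background_deg2(img) + np.median(img)
```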
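And a sketch of what a command like "reduce the FWHM from 2.5 to 2 pixels" could do internally, assuming Gaussian PSFs (whose widths add in quadrature) and plain Richardson-Lucy iterations -- not PixInsight's Deconvolution process:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(fwhm, size=15):
    # Normalized Gaussian kernel; sigma = FWHM / (2*sqrt(2*ln 2)).
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def reduce_fwhm(img, fwhm_in=2.5, fwhm_out=2.0, iters=20):
    """Richardson-Lucy sketch of a 'reduce FWHM from 2.5 to 2 px' command.

    The kernel to remove has FWHM = sqrt(fwhm_in^2 - fwhm_out^2), because
    Gaussian widths add in quadrature under convolution (an assumption).
    """
    psf = gaussian_psf(np.sqrt(fwhm_in**2 - fwhm_out**2))
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(img, img.mean())
    for _ in range(iters):
        conv = fftconvolve(est, psf, mode='same')
        est *= fftconvolve(img / np.maximum(conv, 1e-12), psf_mirror, mode='same')
    return est
```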
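Finally, the kind of single-parameter denoising I mean, sketched with PyWavelets: an Anscombe transform to stabilize the (Poisson-dominated) noise, soft thresholding of the wavelet detail coefficients at a multiple of sigma, then the inverse transform. The 'db2' wavelet, the 3*sigma threshold and the simple algebraic inverse are my assumptions, not a PixInsight process.

```python
import numpy as np
import pywt

def anscombe(x):
    # Variance-stabilizing transform for Poisson-like data.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse (an unbiased inverse would differ slightly).
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise(img, sigma, wavelet='db2', levels=4):
    """One-parameter wavelet denoising after an Anscombe transform.

    sigma is the noise standard deviation in the stabilized domain
    (about 1 for pure Poisson data); detail coefficients on every
    level are soft-thresholded at 3*sigma.
    """
    t = anscombe(img)
    coeffs = pywt.wavedec2(t, wavelet, level=levels)
    thresholded = [coeffs[0]] + [
        tuple(pywt.threshold(c, 3.0 * sigma, mode='soft') for c in detail)
        for detail in coeffs[1:]
    ]
    out = pywt.waverec2(thresholded, wavelet)
    return inverse_anscombe(out[:img.shape[0], :img.shape[1]])
```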
Creativity can start once the data is stretched!
Rüdiger