Hi Sander,
For those of us who do have the luxury of using cameras with 'no dark current', there is an argument to be made for not needing Darks at all (either FlatDarks or LightDarks). There is then also the argument that Biases are all that are needed to 'DarkSubtract' the Lights and Flats.
With only Biases needed, it could also perhaps be argued that there is no need for sophisticated pixel rejection algorithms in the ImageIntegration process (an area where, I feel, PixInsight is 'better' than DeepSkyStacker).
The same argument could therefore also be made for your Flats - if you take the time to capture enough of them, and to capture them with care, then very simple 'Averaging' may be all that is needed to generate a 'statistically clean' MasterFlat.
And DSS will do that perfectly adequately, as well as correctly applying the MasterFlat to the Bias-corrected series of Lights.
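Just to put some arithmetic behind that argument, here is a minimal sketch (in Python/numpy, using made-up stand-in data rather than real FITS frames) of what a bias-only calibration boils down to: average the Biases into a MasterBias, average the bias-subtracted Flats into a MasterFlat, then divide each bias-subtracted Light by the normalised MasterFlat. It illustrates the principle only - it is not how either package actually implements it.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data only - in practice each stack would be loaded from FITS files.
    # Shapes are (n_frames, height, width); values are arbitrary ADU-like counts.
    bias_frames  = 1000 + rng.normal(0, 5,  size=(20, 100, 100))
    flat_frames  = 1000 + 30000 + rng.normal(0, 80, size=(20, 100, 100))  # flats still contain the bias
    light_frames = 1000 + 5000  + rng.normal(0, 60, size=(10, 100, 100))  # lights still contain the bias

    # 1. MasterBias: with no dark current, a straight average of the Biases is enough.
    master_bias = bias_frames.mean(axis=0)

    # 2. MasterFlat: average the bias-subtracted Flats, then normalise to unit mean
    #    so that dividing by it does not change the overall signal level.
    master_flat = (flat_frames - master_bias).mean(axis=0)
    master_flat /= master_flat.mean()

    # 3. CalibratedLights: subtract the MasterBias, divide by the normalised MasterFlat.
    calibrated_lights = (light_frames - master_bias) / master_flat

    print(calibrated_lights.shape)   # (10, 100, 100)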
Thus far, there is perhaps little difference between the two packages - given that the user is capturing data with a 'top-quality' CCD imager (remember, I am NOT, so all of the arguments above DO NOT APPLY to me - in other words PixInsight definitely makes my interim 'masters' MUCH better, in terms of eliminating 'statistically evaluatable' noise).
So, now, you have a series of CalibratedLights - and these need to be Aligned (or Registered), and then ImageIntegrated (or Stacked).
Does DSS perform a better job than PI when it comes to image alignment? I don't know. I suspect not, but I also believe that the actual difference between the two may be very small, given enough suitable reference stars in the image series. Certainly I have never been able to 'fault' DSS in this area (or, indeed, in any of its capabilities - other than working with Meade DSI I and II OSC images).
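For what it is worth, here is a toy, translation-only illustration (again Python/numpy, synthetic data) of the job registration has to do: estimate each frame's offset against a reference, then resample it back onto that reference. Real registration in either package matches star patterns and handles rotation, scale and sub-pixel interpolation; this sketch just uses simple phase correlation to show the principle.

    import numpy as np

    def estimate_shift(reference, frame):
        """Estimate the integer (dy, dx) by which `frame` is translated
        relative to `reference`, using phase correlation."""
        cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))
        cross /= np.abs(cross) + 1e-12            # keep only the phase information
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = corr.shape
        if dy > h // 2:                           # unwrap 'negative' shifts
            dy -= h
        if dx > w // 2:
            dx -= w
        return dy, dx

    # Toy demo: one Gaussian 'star', and the same frame shifted by (7, -4) pixels.
    y, x = np.mgrid[0:128, 0:128]
    reference = np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / 20.0)
    frame = np.roll(reference, shift=(7, -4), axis=(0, 1))

    dy, dx = estimate_shift(reference, frame)
    aligned = np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    print(dy, dx)                                 # 7 -4
    print(np.allclose(aligned, reference))        # True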
So, no real reason, so far, to 'choose between' the two methods (phased or pipelined) - other than the amount of 'manual effort' needed to (currently) achieve results in PI.
But, I still maintain that PixInsight achieves 'better results' when it comes to the final stage of ImageIntegration - simply because of the (controllable) power of its pixel rejection algorithms, and the way PI can 'evaluate noise' for each image in a stacking series, allowing each image to 'only add what it should' to the final output.
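To make that concrete, here is a minimal sketch of the two ideas - kappa-sigma pixel rejection plus noise-based weighting - once more in Python/numpy with synthetic data. It is emphatically not PixInsight's actual implementation (PI offers several rejection schemes and a far better noise estimator); it just shows why a rejecting, weighted combine beats a plain average when one frame carries, say, a satellite trail.

    import numpy as np

    def integrate(frames, sigma=3.0, iterations=3):
        """Weighted average of registered frames with simple kappa-sigma rejection.
        A sketch only: PI offers several rejection schemes (Winsorised sigma
        clipping, linear fit clipping, ...) and a proper per-image noise estimator."""
        stack = np.asarray(frames, dtype=float)    # shape (n_frames, h, w)
        keep = np.ones(stack.shape, dtype=bool)    # True = pixel survives rejection

        # Iteratively reject pixels that stray too far from the per-pixel median.
        for _ in range(iterations):
            data = np.where(keep, stack, np.nan)
            centre = np.nanmedian(data, axis=0)
            spread = np.nanstd(data, axis=0)
            keep &= np.abs(stack - centre) <= sigma * spread + 1e-12

        # Crude per-frame noise estimate; noisier frames get smaller weights,
        # so each frame 'only adds what it should' to the output.
        noise = np.array([np.std(f) for f in stack])
        weights = 1.0 / (noise ** 2)

        w = keep * weights[:, None, None]
        return (stack * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)

    # Toy demo: ten frames of differing quality, one crossed by a bright 'trail'.
    rng = np.random.default_rng(1)
    frames = [1000 + rng.normal(0, s, size=(64, 64)) for s in (5, 5, 5, 8, 8, 8, 12, 12, 20, 20)]
    frames[3][32, :] += 5000                       # satellite trail across one row
    master = integrate(frames)
    print(round(float(master[32, :].mean()), 1))   # close to 1000: the trail was rejected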
That said, of course DSS can (easily) be used to calibrate and align the source data, leaving the final stage of integration (or stacking) to be done in PI.
So, no, I am not 'disagreeing' with you - there IS a perfectly good case for saying that it is 'easier' to get images pre-calibrated in DSS - perhaps even right through to stacking them as well. And, by doing this, a lot more time is left for the user to get into the 'meat' of the processing phase - which is where they will then have ALL of the Power of PI at their disposal (and, of course, a huge learning curve as well).
I hear what you are saying about the 'voodoo-magic' of PI - and, yes, I agree that this applies to image calibration as well, but, let's be honest here - there is NOT really a lot of voodoo involved in getting your images calibrated in PixInsight. It really is more of a case of having to 'feed the pipeline' with the correct data, at the correct time.
And that is where DSS excels, above PixInsight - the process CAN be 'automated' - you load the appropriate image series into the appropriate containers, configure the process to do what you want, press the 'GO' button and watch the gears a-whirring in your PC whilst DSS churns out your master image. Does that 'help' the novice? Yes, sure, just the same way as calculators nowadays help our children divide 98 by 7.
Someone who HAS a good understanding of the process CAN then make an educated choice about which method they will use to get their data calibrated. For someone who does NOT have a clear grasp of the stages involved in calibration, then actually EITHER package will help 'teach' them what needs to be done - and I would happily support either for that purpose. But, for those who are willing to put the effort into calibrating with PixInsight, they will just be using 'more powerful' tools to learn the process.
And, I am NOT condoning 'allowing people to calibrate their data incorrectly' - in fact I am a proponent of exactly the opposite. I am all for empowering users to make the effort to fully understand the process, so that they DON'T do it incorrectly, even though they have FULL CONTROL of the powerful tools provided by PI.
Don't get me wrong, I fully support the notion of an automated 'pipeline' process, tied in directly with the native GUIs of the sub-processes (and thus 'evolving' as the sub-processes evolve) - and I fully expect this to become available as PI matures further. And, when it does, I believe that there will be no 'need' to turn to DSS to 'simplify' the process.
However, do we need it 'today' (or even 'soon')?
No, not at all. For those who care to 'work' the method in PI (or 'struggle' through it, for those who really feel that the existing PI method is such a 'chore'), all the 'phases' are currently available, bringing with them all the power of ImageContainers, ProcessContainers, etc., and the ability to save workflows to disk for later re-use, or re-play.
For those who may be intimidated by this extra level of complexity, or for those who just don't need the full Power of PI at this stage of their processing workflow, then there is no reason why they should not use DSS, or CCDStack, or Iris, or Nebulosity, or the myriad other programs that are available out there.
So long as they have ended up with as good a 'master image' as possible for all the PixInsight processing stages that will be ahead of them.
