I think that's a non-issue. Since you're using dozens, if not hundreds, of alignment stars, their relative positions are already averaged. In any case, restacking all images with the same stacked image as reference will never be more accurate than the stack itself. You can't make information.
I agree with your first point: after averaging hundreds of alignment stars, the residual error is usually very small. However, you cannot always find hundreds of stars in an image. If you are doing narrowband imaging at a long focal length, the number of usable stars can be small.
Your second argument ("you can't make information") does not seem as clear to me. The alignment is usually done using the best frame as the reference: each frame is aligned independently against that reference frame, and each aligned frame carries a small alignment error. When you integrate all the frames, these errors get averaged. If you could somehow decrease the alignment error of each frame, the averaged error after stacking would be smaller.
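The averaging of independent per-frame errors can be illustrated with a small Monte Carlo sketch (Python, standard library only). All the numbers here (the 0.3-pixel per-frame error, the frame counts) are illustrative assumptions, not measurements; the point is only that the stacked error falls roughly as 1/sqrt(N):

```python
import math
import random

def stacked_error(n_frames, sigma, n_trials=20000):
    """RMS of the average of n_frames independent zero-mean
    per-frame shift errors, each with standard deviation sigma (pixels)."""
    total = 0.0
    for _ in range(n_trials):
        mean_shift = sum(random.gauss(0.0, sigma)
                         for _ in range(n_frames)) / n_frames
        total += mean_shift ** 2
    return math.sqrt(total / n_trials)

sigma = 0.3  # assumed per-frame alignment error, pixels
for n in (1, 4, 16, 64):
    # simulated RMS vs. the analytic sigma/sqrt(n)
    print(n, round(stacked_error(n, sigma), 3),
          round(sigma / math.sqrt(n), 3))
```

The simulated RMS tracks sigma/sqrt(n) closely, which is just the usual averaging of independent errors; it says nothing yet about whether the reference choice changes sigma itself.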
The question is: does using an averaged image as the reference frame reduce the alignment error? I am not sure of this, and I don't know how to prove it mathematically. However, I did a couple of tests this afternoon, and at least in these tests the alignment error is smaller when the averaged reference frame is used. The tests consisted of aligning the same frame twice, using as reference both an averaged image and one of the individual frames cropped to the same area (to the nearest pixel) as the averaged image. Attached to this message is a comparison of the alignment of two frames against the two different references. The RMS error with the averaged reference is clearly lower.
Does somebody (with fresher mathematical knowledge than mine) know how to address this problem mathematically, in order to determine whether my intuition is right or wrong?