When you use drizzle integration you usually increase resolution, so you average fewer pixels into a single one.
Let's suppose you have an image set of 12 subframes. Now imagine that one of them has a faint hot pixel with an intensity of 120 ADU, and that the rejection algorithm does not filter it out. Because you are averaging 12 images, the hot pixel raises the integrated pixel by only 10 ADU, which may be faint enough to go unnoticed.
If you apply 2x drizzle, you are only averaging (roughly speaking, since it's a completely different algorithm) 4 pixels, so the residual hot pixel will contribute 30 ADU when you generate the master light through drizzle. It can then become very noticeable.
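The dilution arithmetic above can be sketched in a few lines. This is only a toy illustration of the averaging effect, not the actual drizzle algorithm; the frame counts and ADU values are the ones from the example:

```python
# Toy illustration of outlier dilution by averaging (not real drizzle).
hot_pixel = 120.0  # unrejected hot-pixel amplitude in ADU

# Plain integration: the outlier is averaged with 11 clean frames.
n_frames = 12
residual_plain = hot_pixel / n_frames  # 10 ADU

# 2x drizzle: each output pixel collects roughly a quarter of the samples,
# so the same outlier is diluted far less.
n_samples_drizzle = 4
residual_drizzle = hot_pixel / n_samples_drizzle  # 30 ADU

print(residual_plain, residual_drizzle)  # 10.0 30.0
```

Same outlier, same data; only the number of samples averaged per output pixel changes.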
This happens partly because we need a non-linear stretch to be able to look at the image. Something similar happens with my dark scaling algorithm: it usually has less than 1% error in the scaling factor but, due to the non-linear stretch of STF, a residual of 200 ADU can look similar in brightness to a 20,000 ADU hot pixel.
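To see why the stretch matters so much, here is a small sketch using the standard midtones transfer function (the curve behind STF-style autostretches). The midtones balance value `m = 0.001` is an arbitrary choice standing in for an aggressive autostretch:

```python
# Midtones transfer function: identity at m = 0.5, strong shadow lift for small m.
def mtf(m, x):
    if x == 0.0 or x == 1.0:
        return x
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# 16-bit pixel values from the example above, normalized to [0, 1].
faint = 200 / 65535     # dark-scaling residual
bright = 20000 / 65535  # obvious hot pixel

m = 0.001  # aggressive stretch (hypothetical, stands in for a hard autostretch)
print(mtf(m, faint), mtf(m, bright))
```

Linearly the faint pixel is 1% of the bright one; after this stretch it reaches on the order of three quarters of its brightness, which is why a 200 ADU residual can look almost as bad as a real hot pixel.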
I'm away from home, writing from my phone, but I have an idea in mind that could work for you. When you run ImageIntegration to apply outlier rejection to the drizzle files, select the maximum value instead of an average. This will enhance the visibility of any residual hot pixels and let you tune the rejection more precisely.
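A quick numpy sketch of why this diagnostic works, using synthetic frames rather than ImageIntegration itself (the frame size, noise level, and hot-pixel position are made up for the demo):

```python
import numpy as np

# Synthetic stack: 12 small frames of Gaussian background noise,
# with one unrejected 120 ADU hot pixel planted in frame 3 at (4, 4).
rng = np.random.default_rng(0)
frames = rng.normal(50.0, 2.0, size=(12, 8, 8))
frames[3, 4, 4] += 120.0

mean_stack = frames.mean(axis=0)  # outlier diluted to roughly +10 ADU
max_stack = frames.max(axis=0)    # outlier kept at nearly full +120 ADU amplitude

print(mean_stack[4, 4] - mean_stack.mean())  # small bump, easy to miss
print(max_stack[4, 4] - max_stack.mean())    # large bump, easy to spot
```

With maximum combination the outlier is not diluted at all, so any pixel that survives rejection stands out clearly, and you can tighten the rejection parameters until it disappears.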
Next year I plan to start teaching drizzle in my workshops, even at the beginner level, since it's really the way to go in astrophotography.