Yes, I see that; I assume the maths projects the computations out into a larger workspace. To me it seems a little like interpolation of data between existing pixels.
No, interpolation is a process of estimating new data points using existing ones. So the Resample process (http://pixinsight.com/doc/tools/Resample/Resample.html) can be used to make an image larger. It has a number of different interpolation algorithms (http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html) to estimate the missing data for the new pixels by reference to data in the existing ones.
Any such up-scaled image will contain artefacts that are more or less noticeable depending on the scaling factor and the algorithm used. This is unsurprising since the additional pixels have been estimated and aren't real data.
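To make that concrete, here is a minimal sketch of interpolation-based upscaling using numpy and scipy (my own choice of libraries for illustration; PixInsight's Resample uses its own implementations of the algorithms linked above). Every pixel in the enlarged output is an estimate computed from its neighbours in the input, which is exactly where the artefacts come from:

```python
import numpy as np
from scipy.ndimage import zoom

# A small synthetic "image" standing in for a real subframe.
img = np.random.default_rng(0).random((100, 100))

# 2x upscaling with three different interpolation algorithms; the
# order parameter selects the spline degree used for the estimates.
nearest  = zoom(img, 2, order=0)   # nearest neighbour
bilinear = zoom(img, 2, order=1)   # bilinear
bicubic  = zoom(img, 2, order=3)   # cubic spline

print(img.shape, bicubic.shape)    # (100, 100) (200, 200)
```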
The drizzle algorithm doesn't estimate new data points at all. It extracts real data from your set of samples (multiple subframes) to populate the additional pixels that you wish to create. It can do this provided:
- The image is undersampled, i.e. the number of arcseconds per pixel achieved by the optical system and camera is greater than the resolving power of the optical system. Put more simply, the camera pixels have to be larger than the optimum for the scope or lens. This is often the case with short focal length refractors and camera lenses; conversely, long focal length scopes may be oversampled by the camera, in which case you already have all the information you're ever going to get out of the image. You can't beat the laws of physics here! (There's a rough sanity check of this after the list below.)
- You need to dither the subframes, so the pixels in each sub do not cover exactly the same part of the sky. It is important that the dithering is not an exact number of pixels, because if the 'footprint' of each pixel on the sky exactly overlaps the footprints of pixels in the other subs, you cannot obtain the extra information that you want. So your dithering process needs to re-point the imaging scope by a random number of pixels plus a random fraction of a pixel each time. Dithering in most guiding/imaging applications will try to do this by default, but even if yours doesn't, in practice I defy you to successfully dither by a precise number of whole pixels between each sub! Mount gearing flaws and field rotation due to imperfect polar alignment will usually do a good enough job of creating the random fractions of a pixel between frames that you need for this to work.
- You need lots of subframes. Drizzling isn't 'free data'. Put simply, you are taking the total signal you have captured in your set of subs and spreading it across four times as many pixels, so you can expect the final image to be noisier. Think about the reverse: if you have a relatively noisy image and downscale it to half its original dimensions, it will look a lot less noisy at the cost of lower resolution, since you've averaged four pixels into every one and so have four times as many samples per pixel.
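Picking up the first point about undersampling, here's the rough sanity check I mentioned, sketched in Python. The plate-scale and Dawes-limit formulas are the standard approximations; the telescope and camera numbers are just an example I've invented, and in practice atmospheric seeing (rather than the optics) often sets the real resolution limit:

```python
def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel (standard plate-scale formula)."""
    return 206.265 * pixel_size_um / focal_length_mm

def dawes_limit(aperture_mm: float) -> float:
    """Approximate resolving power of the optics in arcseconds."""
    return 116.0 / aperture_mm

# Example: an 80 mm f/6 refractor (480 mm focal length) with 3.76 um pixels.
scale = pixel_scale(3.76, 480.0)    # ~1.62 arcsec/px
resolve = dawes_limit(80.0)         # ~1.45 arcsec

# Critical (Nyquist) sampling wants ~2 pixels across the finest resolved
# detail, so the image is undersampled when the pixel scale is coarser
# than half the resolving power:
print(f"{scale:.2f} arcsec/px vs {resolve:.2f} arcsec "
      f"-> undersampled: {scale > resolve / 2}")
```

If that reports undersampled: True, as it does for this combination, drizzle has something to work with.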
If you have met the under-sampling and sub-pixel dithering requirements, the pixels in each sub will contain signal from slightly different parts of the target each time. The pixels therefore contain information which has been resolved by the optical system but not by the camera (it has been 'averaged' together by the sensor element for each pixel), so of course there is no means to access that data in a single sub. The drizzling algorithm works out the slight differences in pointing between subs and (effectively) aligns the subs at the higher (e.g. 2x) resolution. It then "de-averages" the missing data out of the big, undersampled pixels into the new, smaller ones.
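Here's a toy, one-dimensional version of that idea in Python. To be clear, this is my own sketch, not PixInsight's DrizzleIntegration: it assumes the dither shifts are known exactly (real drizzle measures them during registration) and it spreads each pixel's flux over its whole footprint (real drizzle usually shrinks the 'drops' first). But it shows sub-pixel dithered samples being combined on a finer grid to separate detail that no single sub can resolve:

```python
import numpy as np

rng = np.random.default_rng(42)

# "True" sky on a fine grid: two point sources six fine pixels
# (i.e. 1.5 camera pixels) apart, too close for the coarse camera
# pixels to separate cleanly.
fine_true = np.zeros(400)
fine_true[[197, 203]] = 1.0

SCALE = 4      # one camera pixel spans 4 fine pixels (undersampled)
N_SUBS = 200   # lots of dithered subframes

signal = np.zeros(fine_true.size)   # accumulated flux on the fine grid
weight = np.zeros(fine_true.size)   # how many drops landed on each fine pixel

for _ in range(N_SUBS):
    shift = int(rng.integers(0, SCALE))  # random sub-(camera)-pixel dither
    # Simulate the camera: each coarse pixel sums ("averages away") the
    # fine detail under its footprint at this dither position.
    sub = np.roll(fine_true, shift).reshape(-1, SCALE).sum(axis=1)
    # Drizzle: drop each coarse pixel's flux back onto the fine grid at
    # its known dithered position, spread evenly over its footprint.
    for i, flux in enumerate(sub):
        lo = i * SCALE - shift
        for j in range(max(lo, 0), min(lo + SCALE, fine_true.size)):
            signal[j] += flux / SCALE
            weight[j] += 1.0

drizzled = signal / np.maximum(weight, 1.0)
# The two sources now appear as separate peaks with a dip between them,
# even though neither is resolved in any individual subframe.
print(np.round(drizzled[194:208], 3))
```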
There is no magic at work here: you have taken multiple samples of the target, and the unresolved details end up in different pixels each time. By slicing the big pixels into smaller ones and then playing a bit of "3D Sudoku" with the resulting stacks of smaller pixels, you can figure out what the missing numbers should be. (I know neither "de-averaging" nor "3D Sudoku" is a literally accurate analogy for the process; I'm just trying to illustrate that you can deduce apparently missing information under the right conditions.)
The increase in file size should be no surprise, of course. You've created an image with double the linear resolution of the originals, so you have four times as many pixels (per the explanation above) and thus an uncompressed file will be four times the size on disk. It also means that you have to perform all subsequent processing on images that are four times bigger in memory and on disk, which is worth bearing in mind!
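As a quick back-of-envelope check (the frame geometry here is just an example I've picked, not anything from your setup):

```python
w, h, bytes_per_px = 4656, 3520, 4       # e.g. a 32-bit float mono frame
print(w * h * bytes_per_px / 2**20)      # ~62.5 MiB original
print(2*w * 2*h * bytes_per_px / 2**20)  # ~250 MiB after 2x drizzle
```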