New DrizzleIntegration Tool Released

Juan Conejero

PixInsight Staff
Hi everybody,

Today I have released new versions of the ImageRegistration and ImageIntegration tools, along with a new version of the BatchPreprocessing script, which support a new PixInsight tool: DrizzleIntegration.

The Variable-Pixel Linear Reconstruction [1] algorithm, better known as drizzle, was originally developed at the Space Telescope Science Institute to process Hubble Deep Field images. Drizzle is an algorithm for the linear reconstruction of images from undersampled, dithered data. The new DrizzleIntegration tool brings this fundamental image processing technique to the PixInsight platform to fill a long-standing gap in our image preprocessing tool set.

There are plenty of resources on the Internet and in the literature that provide general and in-depth descriptions of the drizzle method. In my opinion, this important algorithm is generally little known among the astrophotography community, mainly because the existing implementations lack the necessary flexibility and/or are too limited to be of practical value. With the new tool that we have just released, I hope this will change definitively for all PixInsight users.

With the appropriate data sets, the results of drizzle can be spectacular. To use drizzle in a useful way, you need the following:

- Undersampled images. If your images are already well sampled, drizzle won't give you anything that you don't already have, besides a good bite out of your RAM. You know that your images are undersampled when your stars look square, or when you measure the PSF (for example, with the DynamicPSF tool or the FWHMEccentricity script) and get FWHM values smaller than about two pixels. For example, images acquired with photographic lenses or small refractors are typically undersampled, but images acquired with larger instruments and sensors with large pixels can be as well.

- Dithered images. Dithering is always important, but for drizzle it is absolutely necessary. Without proper dithering, all input pixels will always be projected over the same output pixels by drizzle, and the reconstruction process won't work.

- Many images. Drizzle requires more images than a normal integration. The more the better, but typically you should acquire at least 15-20 images to achieve good results.

In this presentation I'm just going to describe the procedure to perform a drizzle integration in PixInsight. I'll show some practical examples in forthcoming posts.

Using the DrizzleIntegration Tool

A drizzle integration of images is a three-step process in PixInsight involving registration, pre-integration, and drizzle reconstruction. This allows us to enrich the drizzle task with all the power and flexibility implemented in our StarAlignment and ImageIntegration tools.

Step 1. Registration

The drizzle algorithm works by projecting input image pixels onto a finer grid of output pixels. This applies the same geometrical transformations used to register images in a normal preprocessing task, but instead of being an isolated step, image registration is performed during the drizzle integration process, directly from calibrated data and without interpolation. This requires pre-computing and storing the image registration transformations. For this purpose, the StarAlignment tool has a new option to generate drizzle data, as shown in the next screenshot.

drizzle-sa.png

When this option is enabled, StarAlignment generates a drizzle data file for each registered image. Drizzle data files carry the .drz suffix and store all the information required by the DrizzleIntegration tool, including image registration data, statistical data, and pixel rejection maps. DrizzleIntegration supports the same image registration devices implemented by StarAlignment, including projective transformations (homographies) and two-dimensional surface splines (thin plates).
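To give a feel for what a projective registration transformation does, here is a toy Python sketch (not PixInsight code, and the 3x3 matrix is a made-up example rather than real .drz contents) of how a homography maps a pixel position, including the perspective division that distinguishes it from a simple affine transform:

```python
# Toy sketch (not PixInsight code) of how a projective transformation, one
# of the registration devices stored in .drz files, maps a pixel position.
# The 3x3 matrix H below is a made-up example, not real registration data.

def apply_homography(H, x, y):
    """Map (x, y) through the homography H, with perspective division."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Identity plus a small sub-pixel translation of (0.25, -0.40) pixels.
H = [[1.0, 0.0, 0.25],
     [0.0, 1.0, -0.40],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 100.0, 200.0))  # (100.25, 199.6)
```

Surface splines (thin plates) generalize this idea to smooth, locally varying distortion fields, which is why they can model field curvature and differential distortion that a single matrix cannot.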

If you use the BatchPreprocessing script, its latest version 1.35 has a generate drizzle data option that you should activate to create drizzle data files during the image registration phase:

drizzle-bpp.png

Note that StarAlignment (used either directly or indirectly through the BPP script) will always create new drizzle data files with fresh registration data, so existing .drz files will always be replaced to start a new drizzle integration procedure.

Step 2. Integration

To use the DrizzleIntegration tool, the registered images generated by StarAlignment must be integrated with ImageIntegration, and the corresponding .drz files must also be selected. The following screenshot shows an example where a set of registered images is being pre-integrated as part of a drizzle procedure.

drizzle-ii1.png

First you must select the registered images that you want to integrate, as usual. Then you have to select the drizzle data files generated by StarAlignment by clicking the Add Drizzle Files button. Note that when a .drz file has been associated with an input image, a special "<d>" indicator is shown on the Input Images file list for the corresponding item. For a drizzle data file to be associated with its corresponding registered image, both files must have the same file name (differing only in their suffixes).

Second, you have to activate the generate drizzle data option on ImageIntegration. Then you can proceed to integrate the images as usual: find optimal pixel rejection parameters and maximize SNR in the result, just as you do for normal image integration tasks. Each time you run the ImageIntegration tool, the selected .drz files are updated with statistical and rejection data automatically. Make sure you perform a final integration without a region of interest selected.

Note that drizzle files cannot be selected for integration on the BatchPreprocessing script. This shouldn't surprise you, since the image integration feature of BPP is for previewing purposes only, not for generation of production images. Image integration must always be fine-tuned manually, and drizzle is no exception to this rule.

Step 3. Drizzle

After StarAlignment and ImageIntegration, the drizzle data files (*.drz) are now ready for the DrizzleIntegration tool. Using this tool is really easy: just select your .drz files, execute the tool globally, and wait until the process completes and you get a drizzle integrated image.

drizzle-di.png

The drizzle algorithm can be controlled with two main parameters:

Output scale, or subsampling ratio. This is the factor that multiplies input image dimensions (width, height) to compute the dimensions in pixels of the output integrated image. For example, to perform a 'drizzle x2' integration, the corresponding drizzle scale is 2 and the output image will have four times the area of the input reference image in square pixels.

Drop shrink factor. This is a reduction factor applied to input image pixels. Smaller input pixels or drops tend to yield sharper results because the integrated image is formed by convolution with a smaller PSF. However, smaller input pixels are more prone to dry output pixels (pixels that receive no data), visible patterns caused by partial sampling, and overall decreased SNR. Low shrink factors require more and better dithered input images. The default drop shrink factor is 0.9, and typical values range from 0.7 to 1.0.

Along with these important parameters, DrizzleIntegration allows you to enable/disable pixel rejection, image weighting, and the use of surface splines (when available) for image registration.

Finally, you can define a region of interest (ROI) to accelerate repeated tests. This is useful because the task's execution time (and also its memory space consumption) grows quadratically with the dimensions of the output integrated image. Note that the coordinates and dimensions of the ROI are expressed in input reference pixels, not in output pixels. You can define ROI coordinates on an integrated image generated by the ImageIntegration tool, or on a registered image created by StarAlignment, by defining a preview and clicking the From Preview button.

DrizzleIntegration always generates two images: the result of the drizzle reconstruction and a drizzle weights image. The value of each pixel on the weights image represents the (normalized) amount of data gathered by the corresponding pixel on the integrated result. Note that the integrated image has already been divided by the weights image when both of them are made available as new image windows.
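For readers who like to see the idea in code, here is a toy one-dimensional sketch in Python of the drop-and-accumulate scheme these parameters control. It is an illustration only, not DrizzleIntegration's implementation; the function name and frame format are invented for the example:

```python
# Toy 1-D drizzle sketch (illustrative only, not the DrizzleIntegration
# implementation). Each input pixel is shrunk by the drop shrink factor,
# projected onto a grid 'scale' times finer, and its value is accumulated
# in proportion to geometric overlap. Flux and weights are kept separately,
# mirroring the integrated image and the weights image the tool generates.

def drizzle_1d(frames, scale=2, drop_shrink=0.9):
    """frames: list of (offset, pixels); offset is the dither in input pixels."""
    n_out = len(frames[0][1]) * scale
    flux = [0.0] * n_out
    weight = [0.0] * n_out
    for offset, pixels in frames:
        for i, value in enumerate(pixels):
            # Center and half-width of the shrunk "drop" on the output grid.
            center = (i + 0.5 + offset) * scale
            half = drop_shrink * scale / 2.0
            lo, hi = center - half, center + half
            for j in range(max(0, int(lo)), min(n_out, int(hi) + 1)):
                overlap = min(hi, j + 1) - max(lo, j)
                if overlap > 0:
                    flux[j] += value * overlap
                    weight[j] += overlap
    # Final image is flux divided by weights; dry pixels stay at zero.
    return [f / w if w > 0 else 0.0 for f, w in zip(flux, weight)]

# Two dithered flat frames of four pixels drizzled to eight output pixels.
out = drizzle_1d([(0.0, [1.0] * 4), (0.37, [1.0] * 4)])
print(out)  # a flat input stays flat: all values are 1.0
```

With only one frame, or with identical offsets, some output pixels would receive little or no data, which is why dithering and many frames are hard requirements.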

____________________
[1] Fruchter, A. S. & Hook, R. N., "Drizzle: A Method for the Linear Reconstruction of Undersampled Images", PASP, 114, 144 (2002)
 
Thanks for your hard work Juan. Having drizzle without registration interpolation and with proper rejection is excellent!

Mike
 
Juan I just downloaded the new update. I tried to run Star Alignment on some data I recently acquired and the program crashed with this message: "Critical Signal Caught (11) Segmentation Violation."

This has never happened before and I am wondering if it has anything to do with the update?

Thanks

Albert

 
Hi Albert,

Yes, there was a problem with yesterday's update on Mac OS X. It is now fixed with a new update that I've just released. Sorry for the trouble!
 
This is great, Juan, thanks!

For dithering, is there a 'best' amount? Is more separation between images better, leading to wider crop regions at the edges? Or are a couple (under 10) pixels of movement per image okay? Perhaps it doesn't matter as long as they're not *right* on top of each other.
 
I wonder if there is an interaction between drizzle and distortion correction. Will distortion correction possibly correct away the slight differences between images that are necessary for the drizzle method? Or are the two methods compatible with each other?
Georg
 
Great work, Juan!

I'm thinking about how to use this new tool with OSC (DSLR) images. DSLR images are spatially undersampled from the start, because each channel has only half of the pixels.

I think the proper step-by-step process would be:

1. Calibrate frames as usual.
2. Split the CFA into four channels (RG1G2B).
3. Register each channel, generating drizzle data. (*)
4. Integrate each channel. (*)
5. Drizzle integration (scale = 2, shrink = 1). (*)
6. Align the channels with green as reference.

(*) Perhaps it is best to combine the G1 and G2 data, because you will benefit from the increased number of frames, yielding better spatial resolution in the green channel.

Or can the BPP script do all the work except steps 5 and 6?

Sergio


 
Hi!

I'm able to crash PixInsight after running the DrizzleIntegration process. After running the process, I close the weights image, then auto-STF the result by pressing Ctrl+A, and then enable the 24-bit lookup table for STF by pressing the button on the toolbar. I'm running Windows 8.1 64-bit. Also, saturated parts of the images (some stars and galaxy cores) turn black in the drizzled image.
 
Beautiful, Juan! Thanks for the hard work! I got new data last night and plan to try this asap.

BTW, reading about the drizzle files with geometric registration information reminded me about the Theli discussion we had way back, as well as my top "wish-list" item: postponing debayering of OSC images to the integration stage, where SNR is best. Any plans in this direction?

thanks again,
Ignacio
 
Astrocava said:
Great work, Juan!

I'm thinking about how to use this new tool with OSC (DSLR) images. DSLR images are spatially undersampled from the start, because each channel has only half of the pixels.

I think the proper step-by-step process would be:

1. Calibrate frames as usual.
2. Split the CFA into four channels (RG1G2B).
3. Register each channel, generating drizzle data. (*)
4. Integrate each channel. (*)
5. Drizzle integration (scale = 2, shrink = 1). (*)
6. Align the channels with green as reference.

(*) Perhaps it is best to combine the G1 and G2 data, because you will benefit from the increased number of frames, yielding better spatial resolution in the green channel.

Or can the BPP script do all the work except steps 5 and 6?

Sergio

I haven't experimented much yet, but just calibrated my images as normal, debayered them and then followed the tutorial steps as given by Juan above. Results seem reasonably good for a first attempt. See below for a sample from a reasonably small data set of 24 x 600 seconds on a Canon 500D and SW 80ED & 0.85FR. The left image in the comparison is a normal integration; the right image is a drizzle integration with the default 2x scale and 0.9 drop shrink. Looking at stretched data there is a clear increase in noise, and trying to push the drop shrink any further than 0.9 led to obvious artefacts.

 

Attachments

  • drizzle.png
A first try for me also looks good. 90 images @85 mm, left conventional stack, right drizzle stack.
Georg
 

Attachments

  • drizzle.JPG
I tried a couple of runs on 30 registered frames from a Canon 6D, on my Win7 x64 PC with 12 GB RAM, and in the last step (DrizzleIntegration) I got the blue screen of death both times! First time ever this has happened to me with PixInsight.

Ignacio
 
FINALLY!  Thank you, thank you, thank you!!

Up to this point I've had to drop out of PixInsight, Drizzle in Registax and back into PixInsight.

And it works wonderfully!  Ran several sessions last night and beautiful!

bwa
 
Joining the bandwagon. My comparisons also show the superiority of the new method. Can someone explain to me why the new files are 4x larger? I get the 2x setting. Is the computation projected into a larger image?
 
georg.viehoever said:
jerryyyyy said:
...Can someone explain to me why the new files are 4x larger?
Images are twice the size in X, and twice the size in Y, and 2*2=4  :)
Georg

Yes, I see that. I assume the math projects the computations out into a larger workspace. To me it seems a little like interpolation of data between existing pixels.
 
Not really interpolation. You may want to browse the reference in the announcing article: even without understanding all the details, it gives a good feeling for what happens. It really takes advantage of the exact location of each image to find where the pixel should land - which should be statistically correct (assuming your image is indeed undersampled).
-- bitli
 
jerryyyyy said:
Yes, I see that. I assume the math projects the computations out into a larger workspace. To me it seems a little like interpolation of data between existing pixels.

No, interpolation is a process of estimating new data points using existing ones. So the Resample process (http://pixinsight.com/doc/tools/Resample/Resample.html) can be used to make an image larger.  It has a number of different interpolation algorithms (http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html) to estimate the missing data for the new pixels by reference to data in the existing ones.

Any such up-scaled image will contain artefacts that are more or less noticeable depending on the scaling factor and the algorithm used.  This is unsurprising since the additional pixels have been estimated and aren't real data.

The drizzle algorithm doesn't estimate new data points at all.  It extracts real data from your set of samples (multiple subframes) to populate the additional pixels that you wish to create.  It can do this provided:

- The image is undersampled, i.e. the pixel scale in arcseconds per pixel achieved by the optical system and camera has to be larger than about half the resolving power of the optical system. Put more simply, the camera pixels have to be larger than the optimum for the scope or lens, which is often the case with short focal length refractors and camera lenses; conversely, long focal length scopes may be oversampled by the camera, in which case you already have all the information you're ever going to get out of the image. You can't beat the laws of physics here!

- You need to dither the subframes, so the pixels in each sub do not cover exactly the same part of the sky. It is important that the dithering is not an exact number of pixels, because if the 'footprint' of each pixel on the sky exactly overlaps the footprints of pixels in the other subs, you cannot obtain the extra information that you want. So your dithering process needs to re-point the imaging scope by a random number of pixels plus a random fraction of a pixel each time. Dithering in most guiding/imaging applications will try to do this by default, but even if yours doesn't, in practice I defy you to successfully dither by a precise number of whole pixels between each sub! Mount gearing flaws and field rotation due to imperfect polar alignment will usually do a good enough job of creating the random fractions of a pixel between frames that you need for this to work.

- You need lots of subframes.  Dithering isn't 'free data'.  Put simply you are taking the total signal you have captured in your set of subs, and spreading it across four times as many pixels, so you can expect the final image to be noisier. Think about the reverse; if you have a relatively noisy image and downscale it to half its original dimensions, it will look a lot less noisy at the cost of a lower resolution, since you've averaged four pixels in to every one so you have four times as many samples per pixel.

If you have met the under-sampling and sub-pixel dithering requirements, the pixels in each sub will contain signal from slightly different parts of the target each time.  The pixels therefore contain information which has been resolved by the optical system but not by the camera (it has been 'averaged' together by the sensor element for a pixel) and so of course there is no means to access that data in a single sub.  The drizzling algorithm works out the slight differences in pointing between subs and (effectively) aligns the subs at the higher (e.g. 2x) resolution.  It then "de-averages" the missing data from the oversampled big pixels in to the new smaller ones using the drizzle algorithm.

There is no magic at work here, you have taken multiple samples of the target and the unresolved details end up in different pixels each time.  By slicing the big pixels in to smaller ones and then playing a bit of "3D Sudoku" with the resulting stacks of smaller pixels, you can figure out what the missing numbers should be. (I know neither "de-averaging" nor "3D Sudoku" is a literally accurate analogy of the process, but just trying to illustrate that you can deduce apparently missing information under the right conditions).

The increase in file size should be no surprise of course.  You've created an image which is double the resolution of the original images, so you have four times as many pixels (per the explanation above) and thus an uncompressed file will be four times the size on disk.  It also means that you have to perform subsequent processing on images that are four times bigger in memory/on disk, which is worth bearing in mind!
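The SNR trade described above is easy to demonstrate numerically. The following Python snippet (an illustration with made-up data, not related to the tool) bins a noisy image 2x2, averaging four samples per output pixel, and the noise standard deviation falls by roughly half; drizzling to a 2x finer grid is essentially the reverse of this trade:

```python
# Numeric illustration (made-up data) of the SNR trade discussed above:
# binning 2x2 averages four noisy samples per output pixel, so the noise
# standard deviation falls by about a factor of two. Drizzling to a 2x
# finer grid spreads the captured signal over four times as many pixels,
# i.e. the reverse trade.
import random
import statistics

random.seed(42)
n = 200  # fine image is n x n; binned image is (n/2) x (n/2)
fine = [[random.gauss(0.5, 0.1) for _ in range(n)] for _ in range(n)]

binned = [[(fine[2 * y][2 * x] + fine[2 * y][2 * x + 1] +
            fine[2 * y + 1][2 * x] + fine[2 * y + 1][2 * x + 1]) / 4.0
           for x in range(n // 2)]
          for y in range(n // 2)]

flat_fine = [v for row in fine for v in row]
flat_binned = [v for row in binned for v in row]
print(statistics.stdev(flat_fine))    # about 0.1
print(statistics.stdev(flat_binned))  # about 0.05, i.e. half the noise
```

This is why drizzle asks for many subframes: the extra frames buy back the per-pixel SNR that the finer output grid gives up.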
 
PI has hung for me during ImageIntegration on "Updating drizzle data files" and has become unresponsive. I am letting it go to see if it completes.
 
themongoose85 said:
PI has hung for me during ImageIntegration on "Updating drizzle data files" and has become unresponsive I am letting it go to see if it completes.
Check if you are short of RAM (for instance via task manager). Drizzle needs a lot.
 