New DrizzleIntegration Tool Released

georg.viehoever said:
themongoose85 said:
PI has hung for me during ImageIntegration on "Updating drizzle data files" and has become unresponsive. I am letting it go to see if it completes.
Check if you are short of RAM (for instance via task manager). Drizzle needs a lot.

Yeah, I checked, and I was only using about 55% of my total physical memory, so I am not sure it was that. I killed PI and restarted it. I am just doing a stack without drizzle now to make sure it works, and then I'll try adding drizzle back in and see what happens.
 
Hi,

It's great to see this function implemented in PI. To really see how it works, we should use some clearly under-sampled images in a crowded field, and compare results of:
a. normally integrated image
b. drizzle (2x for example) integrated image
c. 2x oversampled (no drizzle) and then integrated image
I will see if I can find time to do this.

I also like the suggestion made by Astrocava. This is quite essential for getting high-quality results from DSLRs. At the moment it is implemented by DSS (called Bayer drizzle), but DSS is an old tool. It would be great if PI could have this function.

Cheers,
Wei-Hao
 
Georg, I am using Win 7 Pro with a Core i5 2500K and 12 GB of RAM. It stacked fine without selecting Generate Drizzle in ImageIntegration. I am trying again with it selected to see if it completes.

EDIT: PI is still hanging on updating the drizzle files, but when I refresh the folder I can see the drizzle files growing in size, so I am going to leave it for a while to see if it completes.

EDIT2: Moving the files off my media server and onto my local hard drive solved the problem. The drizzle data files updated a lot faster.
 
I upgraded to the latest version this afternoon (1071), and I don't see any drizzle options in the StarAlignment or ImageIntegration processes.

Is there a switch that I need to turn on?  What am I missing?

Thank you,

Fred
 
The latest version is 1092 (from memory); are you sure you are not on one of the RCs or something like that? Maybe you should download a fresh one from the site.
-- bitli
 
IanL said:
jerryyyyy said:
Yes, I see that. I assume the math projects the computations out into a large workspace. To me it seems a little like interpolation of data between existing pixels.

No, interpolation is a process of estimating new data points using existing ones. ...

Hi Ian,

Fantastic post and a very nice description of drizzle!
 
Astrocava said:
I'm thinking about how to use this new tool with OSC (DSLR) images. DSLR images are spatially undersampled to begin with, because each channel covers only a fraction of the pixels.

Hi Sergio,

I am working on this right now. The current tool set can be used without changes to implement Dave Coffin's Bayer drizzle technique very easily. It can be integrated in the BPP script.
 
Juan Conejero said:
Astrocava said:
I'm thinking about how to use this new tool with OSC (DSLR) images. DSLR images are spatially undersampled to begin with, because each channel covers only a fraction of the pixels.

Hi Sergio,

I am working on this right now. The current tool set can be used without changes to implement Dave Coffin's Bayer drizzle technique very easily. It can be integrated in the BPP script.

Excellent news, Juan! Are you working on a PI implementation of Bayer Drizzle, or an improved variation of it? Is Sergio's proposed approach using SplitCFA the best one?

Ignacio
 
Bayer drizzle is pretty straightforward. One complication is the fact that CFA calibrated images have to be debayered prior to registration and pre-integration, but DrizzleIntegration has to have access to the CFA images. This can be solved by means of a little trick: replace the file names that the .drz files point to (which are the debayered images) with the original CFA images. The other complication is that CFA images have to be split into separate RGB components in order to drizzle them, but we already have this: just load the CFA data as raw Bayer images. So the implementation looks like this:

1. Load a calibrated CFA image as a monochrome raw Bayer image.

2. Optionally, apply CosmeticCorrection.

3. Make a duplicate of the calibrated (and possibly cosmetized) CFA image and transform it to an RGB raw Bayer image (that is, split the RGB components as three channels). Save this RGB image in FITS format.

4. Apply Debayer to get an interpolated RGB image and save it in FITS format.

5. Apply StarAlignment to register the RGB image and save it. The generate drizzle data option of SA is enabled in this step.

6. Open the .drz file generated in step 5 and replace the file path (which points to the debayered image that StarAlignment worked with) with the path to the RGB image saved in step 3. Save the modified .drz file, replacing the original data file. (A sketch of this substitution follows below.)

7. Repeat steps 1-6 for all images in the batch task.

Now you can use ImageIntegration (with the registered images and the .drz files) and DrizzleIntegration (with the .drz files) in the usual way. The drizzle process will work with the original CFA data, not with interpolated data. This is the Bayer drizzle algorithm.
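
To make step 6 concrete, here is a minimal sketch of the path substitution in Python. It assumes the .drz file is a plain-text file that contains the debayered image's path verbatim; the file names are hypothetical, and if your version of the .drz format stores paths differently the parsing would have to be adapted.

```python
from pathlib import Path

def point_drz_to_cfa(drz_file, debayered_path, cfa_rgb_path):
    """Rewrite a .drz file so it references the split-CFA RGB image
    (step 3) instead of the debayered image StarAlignment used (step 4).

    Assumes the .drz file is plain text containing the debayered
    image's path verbatim.
    """
    drz = Path(drz_file)
    text = drz.read_text()
    if debayered_path not in text:
        raise ValueError(f"{drz_file} does not reference {debayered_path}")
    drz.write_text(text.replace(debayered_path, cfa_rgb_path))

# Hypothetical file names for one frame of the batch:
point_drz_to_cfa(
    "light_0001_c_d_r.drz",     # drizzle data written in step 5
    "light_0001_c_d.fit",       # debayered image saved in step 4
    "light_0001_c_rgbcfa.fit",  # split-CFA RGB image saved in step 3
)
```

The same substitution would be repeated for every .drz file in the batch before running DrizzleIntegration.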
 
Many thanks, Juan. This makes perfect sense, as you preserve full resolution spatial information during registration, and then apply the geometric transformations to each color (bayered) matrix.

Question: can't there be information holes (by chance) that may require some level of interpolation/normalization at the end?

Ignacio
 
That depends on the number of images and the quality of the dithering. You'll definitely need many dithered images. Bayer drizzle can be used with a drizzle scale of one as a non-interpolating debayering method. It can also be used with a drizzle scale > 1, but in that case a huge amount of data may be necessary to fill all the holes. With the necessary data sets, I think that both methods can yield very good results.
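
As a rough feel for how fast those holes fill in, here is a back-of-envelope sketch. It assumes purely random, independent dithering and Bayer drizzle at drizzle scale 1, where each sub contributes red and blue data to roughly a quarter of the pixel positions and green to half; real dither patterns, and drizzle scales > 1 with drop shrink, will need more frames, as noted above.

```python
# Fraction of output pixels still without data after N randomly dithered
# subs, for Bayer drizzle at drizzle scale 1 (assumption: each sub covers
# each output pixel independently with the CFA fraction for that colour).
coverage = {"R": 0.25, "G": 0.50, "B": 0.25}

for n_subs in (5, 10, 20, 40):
    holes = {c: (1.0 - f) ** n_subs for c, f in coverage.items()}
    print(n_subs, {c: f"{h:.2%}" for c, h in holes.items()})

# With ~20 subs, roughly (0.75)**20 = ~0.3% of the red/blue positions
# would still be empty; at drizzle scale > 1 with a reduced drop size the
# per-sub coverage is smaller, so far more dithered frames are needed.
```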

We currently don't have enough test data to draw conclusions in numerical terms, so all tests made by users will be very important. The new version of BPP with Bayer drizzle enabled should be ready in a couple of days.
 
Thanks, Juan. Will try a first test (drizzle 1x) with my recent OmegaCent data, and compare with the standard drizzle 2x workflow.

Ignacio
 
Juan Conejero said:
IanL said:
jerryyyyy said:
Yes I see that, I assume the math projects the computations out into a large workspace.  To me seem a little like interpolation of data between existing pixels.

No, interpolation is a process of estimating new data points using existing ones. ...

Hi Ian,

Fantastic post and a very nice description of drizzle!

Yes, absolutely a great explanation for someone at my level. Just to restate it in my own language and in reference to my scope: my 500mm FL Takahashi 180ED/SBIG STT8300M combo produces 2.21 arcsec/pixel images. This is undersampled in comparison to, say, my old 2000mm scope. But when my images are dithered, the collected images contain all the data that might have been picked up with the 2000mm scope. This procedure wrings out the data that is actually present in that undersampled data.

I have to say I can see the difference when I run the new procedure.

One other question: how close to regular deconvolution does this procedure come? Lazy me is always looking to avoid a laborious step...
 
This has nothing to do with deconvolution :)
Although it would make deconvolution much easier, since the PSF will be better sampled and the data should correlate better with it.
On a side note, deconvolution and drizzle can be integrated into a single operation, which is called super-resolution. We may implement this in the medium term.
 
Hello,
I would like to post my first try with DrizzleIntegration.
My data is
a) undersampled (Canon EF 200mm lens and KAF-8300 chip)
b) dithered
so I gave it a try and the result was a surprise. The attached file is an enlargement of this image, http://astrob.in/81586/C/ , which is still without drizzle at the moment. Left is without and right is with drizzle.
Thank you for this tool.
 

Attachment: Drizzle.jpg (295.9 KB)
Albert, undersampling means having fewer samples than are needed for an appropriate approximation of the underlying light distribution. In astronomy this is related both to the optical resolution and to the seeing. From the perspective of the sensor, roughly, you need 2 or 3 pixels to cover the FWHM of your stars. So if you have very good seeing, or you are using a color filter array (as in DSLR cameras), you end up with fewer samples than needed, and this translates into a loss of sharpness.
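
To put that rule of thumb into numbers, here is a small illustration using the standard plate-scale formula; the 5.4 um pixel size (KAF-8300) and the 3 arcsec seeing FWHM are assumptions chosen to roughly match the setups discussed above.

```python
# Plate scale in arcsec/pixel from pixel size (um) and focal length (mm),
# then the star FWHM expressed in pixels, to check the 2-3 pixels/FWHM rule.
def plate_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

scale = plate_scale(5.4, 500.0)        # e.g. KAF-8300 behind a 500 mm scope
fwhm_arcsec = 3.0                      # assumed seeing-limited star FWHM

print(f"{scale:.2f} arcsec/pixel")               # ~2.23 arcsec/pixel
print(f"{fwhm_arcsec / scale:.2f} pixels/FWHM")  # ~1.3 -> undersampled
```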
 
Thanks, Carlos. Does that basically mean that if you have a lot of subs you don't need to use drizzle?
 
It is not about the number of subs, but about the spatial sampling: the size of the pixels and the area of sky they cover, compared to the seeing and the scope's resolution.

This is closely related to the Nyquist-Shannon sampling theorem:
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
 