NEOWISE Comet Processing Notes

By Vicent Peris (PTeam/OAUV)
Published August 8, 2020


Introduction

This article describes some specific or unusual image processing techniques that I have implemented for my pictures of comet NEOWISE. You can see these pictures in the PixInsight Gallery, as well as on OAUV's website.

The images presented here have been acquired with two different telescopes:

  • A narrow field image acquired with a Planewave 20-inch f/6.7 CDK telescope and an FLI Proline PL16803M camera.
  • A wide field image acquired with a Canon 400 mm f/2.8 lens and an FLI Microline ML16200 camera.

We are going to review four key points of the processing workflow:

  • The use of LocalNormalization to correct the shadows of blocking objects near the horizon.
  • The alignment of the wide and narrow field images using astrometric solutions.
  • Transferring a color calibration from a reference single frame to the master image.
  • Using multiscale and local contrast enhancement tools to control the low-contrast features of the comet tails.

Removing Blocking Objects Near the Horizon

Since the wide field telescope is set up in a dome, it was not possible to avoid some objects blocking the line of sight to the comet when it was very low above the horizon. For example, the following animation shows the structure of a roll-off building passing through the comet image sequence:

Since this is a thin post in the line of sight, it produces shadows that are not completely opaque, similar to a dust shadow in a flat field image. Therefore, we can correct these moving shadows in the image sequence by using LocalNormalization. We'll use a reference subframe that isn't affected by the shadow to apply LocalNormalization to the entire sequence. These are the tool settings:

The reference image is the third subframe in the sequence, the one acquired just before the shadows appear in the image. This frame was selected because it is the unaffected frame with the darkest sky; remember that this comet was very close to the Sun, which forced us to start shooting during twilight.

Note that the No scale component check box is checked. Unchecking it in this image could be dangerous, because there are large areas of empty sky that could generate false structures from the noise in the background areas. Therefore, LocalNormalization simply adapts the local sky background level between the reference and the target images.

We set the Apply normalization parameter to View execution only, so the tool will only generate the XNML files. This process is applied to the image set of each filter; we then integrate each set with ImageIntegration, using the XNML files, to generate the three master images.
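Conceptually, with the scale component disabled, LocalNormalization reduces to adding a smooth, locally varying offset that brings the target's sky background to the level of the reference. The following Python sketch illustrates that idea on two registered frames loaded as NumPy arrays; the smoothing radius plays the role of the normalization scale. This is only a conceptual analogue, not the actual LocalNormalization algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def offset_only_local_normalization(target, reference, scale=128):
        """Toy, offset-only local normalization of `target` against `reference`.

        Both inputs are registered 2D float arrays; `scale` is an assumed
        smoothing radius in pixels (analogous to the normalization scale).
        """
        # Smooth, large-scale difference between the reference and the target;
        # this map captures semi-transparent gradients such as the post shadow.
        offset_map = gaussian_filter(reference - target, sigma=scale)
        # Adding the offset map matches the local sky background of the target
        # to the reference without touching the scale (gain) component.
        return target + offset_map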

In the following animation, we can see that the process removes the shadows almost completely from the images:

Aligning the Wide and Narrow Field Images

The published wide field image is a combination of both the narrow and wide field images, since the CDK shows much more detail in the comet's head. To combine them, we should align the comet, but the master images don't have any stars, so we cannot use StarAlignment to register the 20-inch telescope image to the wider one. Below you can see the images we want to align:

Wide field master image


Narrow field master image

To overcome this problem, we'll perform an astrometry-based alignment. First, we select a single subframe from each image set:

Single frame of the wide field image set

Single frame of the narrow field image set

By computing an astrometric solution for both images, we can find the rotation and image scale differences between them. Once this is done, we can rotate the narrow field master image with the Rotation tool and resize it with the Resample tool to match the rotation angle and image scale of the wide field image:

Now we have the comet at the same scale and rotation in both the narrow field and wide field images. We then need to align the two comet nuclei. We'll do this with DynamicAlignment, by simply selecting the comet nucleus in each image:
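As a sketch of the geometric step above: from each astrometric solution we can read the pixel scale and the field position angle, and derive the rotation angle and resampling factor to apply to the narrow field image. The numbers below are hypothetical placeholders, not the actual solutions of these images.

    # Hypothetical values read from the two astrometric solutions:
    # pixel scale in arcseconds/pixel and field position angle in degrees.
    narrow_scale, narrow_pa = 0.62, 173.4   # placeholder values
    wide_scale, wide_pa = 3.95, 88.1        # placeholder values

    # Rotation to apply to the narrow field image so its orientation
    # matches the wide field image.
    rotation_deg = wide_pa - narrow_pa

    # Resampling factor so that one narrow field pixel covers the same
    # angular size as one wide field pixel.
    resample_factor = narrow_scale / wide_scale

    print(f"Rotate by {rotation_deg:.2f} deg, resample by {resample_factor:.4f}")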

Calibrating the Color of a Comet Image

In this section we review the color calibration steps needed before attempting to combine the wide field and narrow field images. Our starting point is two master images, each being an integration of its respective subframes. The comet in the narrow field image has been aligned with the comet in the wide field image.

Wide field master image


Narrow field master image

If we want to calibrate the color of these images, we face a problem: the images are almost starless, since the stars are rejected by ImageIntegration, leaving only faint star traces due to the linear displacement of the comet nucleus. Therefore, we cannot use stars to calibrate the color of these pictures.

To solve this problem, we calibrate the color in a reference RGB image composed of a single frame from each filter. So we first compose a color image from the third frame of the R, G and B filters. Since this uses a single subframe from each filter, the composed image retains all of the stars.

For this methodology to work properly, we first need to align the comet in the entire data set with the CometAlignment tool. As a result, the stars in the reference image will show a small color displacement, since the comet has moved from frame to frame. Below we show this reference image before any color adjustment:

Now we can perform a photometry-based color calibration with the PhotometricColorCalibration tool:

Here are the parameters used:

Note that we are using a Sun-like star by setting the White reference parameter to "G2V Star". We chose this white reference because the comet is actually illuminated by the Sun. We also set a large aperture of 12 pixels to mitigate the possible effects of the color channel displacements in the stars.
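In essence, a photometry-based white balance measures the instrumental fluxes of catalog stars through a fixed aperture and scales the color channels so that a star matching the chosen white reference (here a G2V star) comes out neutral. The sketch below shows that principle in a very simplified form; it assumes the star matching and aperture photometry have already been done, and it is not the actual PhotometricColorCalibration algorithm, which works on catalog photometry and color indices.

    import numpy as np

    def white_balance_factors(flux_r, flux_g, flux_b, is_solar_type):
        """Very simplified photometric white balance (illustration only).

        flux_r, flux_g, flux_b: aperture fluxes of matched catalog stars,
                                measured in each color channel.
        is_solar_type:          boolean mask selecting near-G2V stars.
        Returns multiplicative factors for R and B (relative to G) that make
        a solar-type star neutral.
        """
        # Median flux ratios of the solar-type stars relative to green.
        r_over_g = np.median(flux_r[is_solar_type] / flux_g[is_solar_type])
        b_over_g = np.median(flux_b[is_solar_type] / flux_g[is_solar_type])
        # Scaling R and B by the inverse ratios makes a G2V-like star white.
        return 1.0 / r_over_g, 1.0 / b_over_g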

Now that we have calibrated the color of the reference image, we need to transfer the color balance to the master wide field image. We'll also perform this transfer to the close-up image of the 20-inch telescope.

The color balance transfer is performed with LinearFit. This process will adapt the signal of the master images to the reference image. It will work even if we don't have stars in the master image, since the position of the comet is the same in the reference and master images. For the wide field image, we can simply use LinearFit by setting the proper reference image and applying it to the master image:

This process effectively corrects the color of the wide field master image:

However, we need to transfer the color balance very accurately over the comet's head area, because that is where we'll integrate the 20-inch telescope image, so the wide and narrow field images must match exactly. For this reason, we'll calculate the linear fit specifically in the small common area between the three images. This area is highlighted by the preview in the figure below:

We can clone this preview on the master images and then convert these previews into independent images, where we'll calculate the linear fitting parameters. So now, besides the original three images, we have three new ones:

We'll set the identifier of the reference preview image to "ref_color" and select it as the reference image in LinearFit:

Now we are going to perform a linear fitting between the reference and the previews of the master images. This operation will transfer the color balance:


As you have probably figured out, we are only transferring the color balance between small areas of the images, not over the entire images. To apply the correction to the entire images, we look at the linear equations that LinearFit writes to the console and apply them to the entire master images with PixelMath. Below we show the equations to transform the wide field master image:

And the corresponding PixelMath instance:

Note that we'll need different equations for each master image; we just have to look for the corresponding numbers in the console. These are the master images after applying the PixelMath processes to the entire images:

Color-calibrated wide field master image:

Color-calibrated narrow field master image:

After these transformations, we can safely mix both images because they will have exactly the same color balance.
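Conceptually, the whole transfer amounts to fitting, for each channel, a straight line of the form reference = a + b * target on the small common area, and then evaluating that line over the whole master image, which is what the PixelMath expressions do. Below is a minimal Python sketch of this idea under that assumption; the variable names are illustrative, not PixInsight identifiers.

    import numpy as np

    def transfer_color_balance(master, master_preview, reference_preview):
        """Per-channel linear fit on a common area, applied to the full image.

        All arrays have shape (3, height, width); the two previews cover the
        same registered sky region. Mirrors the LinearFit + PixelMath steps.
        """
        calibrated = np.empty_like(master)
        for c in range(3):
            # Least-squares fit reference = a + b * master on the common area.
            b, a = np.polyfit(master_preview[c].ravel(),
                              reference_preview[c].ravel(), 1)
            # Apply the same linear equation to the whole channel, like a
            # PixelMath expression of the form a + b*$T on the master image.
            calibrated[c] = a + b * master[c]
        return calibrated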

Enhancing Details Inside the Comet

In this section we propose an easy but effective workflow to process comet images in the non-linear stage. We'll focus on the narrow field images:


We apply two instances of the HDRMultiscaleTransform (HDRMT) process: first with 5 layers and then with 4 layers, both with the Lightness mask option enabled. The result is quite aggressive (this aggressiveness will strongly depend on each image and each comet), but it brings out all of the small-scale, low-contrast details around the nucleus:
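HDRMultiscaleTransform itself is a sophisticated multiscale algorithm, but the underlying idea, compressing the large-scale component of the image while preserving small-scale detail, can be sketched in a few lines. The toy example below assumes a Gaussian residual at a scale of 2^layers pixels and a simple gamma-like compression; it does not reproduce HDRMT or its Lightness mask option.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def toy_hdr_compression(lightness, layers=5, strength=0.6):
        """Toy multiscale dynamic-range compression (not the HDRMT algorithm).

        lightness: 2D float array in [0, 1].
        layers:    number of small-scale layers to preserve; the residual at
                   a scale of 2**layers pixels is the component we compress.
        strength:  0 leaves the image unchanged, larger values flatten more.
        """
        sigma = 2 ** layers
        large_scale = gaussian_filter(lightness, sigma)  # large-scale residual
        detail = lightness - large_scale                 # preserved small scales
        compressed = large_scale ** (1.0 - strength)     # gamma-like flattening
        return np.clip(compressed + detail, 0.0, 1.0)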


Now, we can enhance the details of the ion tail inside the dust tail with LocalHistogramEqualization. These are the settings of the tool:

In this tool, the chosen Contrast limit value is very aggressive, so we get better control over the process by lowering the Amount parameter. LocalHistogramEqualization lets us recover the large, low-contrast features inside the comet's tail. These are the resulting images:
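LocalHistogramEqualization belongs to the same family as the well-known CLAHE algorithm. The sketch below approximates the same idea with scikit-image's CLAHE implementation, blending the equalized result back with an amount factor; the kernel size and clip limit values are illustrative assumptions, not a direct translation of the PixInsight parameters.

    import numpy as np
    from skimage.exposure import equalize_adapthist

    def local_contrast_enhance(lightness, clip_limit=0.02, kernel_size=128,
                               amount=0.5):
        """CLAHE-style local contrast enhancement with an 'amount' blend.

        lightness: 2D float array in [0, 1].
        amount:    0 returns the original image, 1 the fully equalized one;
                   lowering it tames an aggressive contrast (clip) limit.
        """
        equalized = equalize_adapthist(lightness, kernel_size=kernel_size,
                                       clip_limit=clip_limit)
        # Blend back with the original, as lowering Amount does in PixInsight.
        return (1.0 - amount) * lightness + amount * equalized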


Finally, we show a mouseover comparison of the three processes applied to both images:


Conclusion

Comet imagery has its own unique challenges, especially if we want to perform an accurate color calibration. The workflows described in this article show a creative use of the available tools, which stems from two facts:

  • We have used LocalNormalization to correct the shadows of an object interfering with the line of sight. Thus, connecting an observational problem to a specific tool in PixInsight lets us find its optimal software-based solution.
  • A logical, indirect connection between multiple tools in PixInsight lets us perform the registration and photometry-based color calibration on starless images.

It is the exclusive design of the PixInsight platform that lets us perform such unique and complex workflows, adapting them to the specific needs of each picture. The imaging platform should not pose limits to the creativity of the photographer. This has always been one of the basic principles of PixInsight development.