New tutorial: Multiscale gradient correction

vicent_peris

Hi all,

I just published a new tutorial: Multiscale gradient correction. This article gives an in-depth description of a novel method for correcting light gradients in astronomical images. This new method uses observational data to deliver much more reliable and accurate results than any purely software-based solution, especially in images of low surface brightness objects contaminated by strong and complex gradients, as well as in deep images without any free sky background areas.

The article includes a detailed description of the new gradient correction technique, a step-by-step implementation in PixInsight, several practical examples, and important conclusions. I hope this material will be of maximum interest to you all, especially because the new method has the potential to radically change the way we approach these important image processing problems.


Best regards,
Vicent.
 
Hi Vicent,
Great idea! Dynamic background extraction works extremely well, but I agree that measuring the background is even better. I'll give it a try next time; the second camera is already installed ;). I think it will also help with mosaics, where a flat background is essential.

With the Indigo frame acquisition module it is possible to control two cameras simultaneously, provided each camera is controlled in a separate PixInsight instance. However, as I found out, this only works in server upload mode; I'll open a bug report (assigned to myself ;)).

Thanks for this great tutorial!
Klaus
 
Hello Vicent,

I'm fairly new to AP (2.5 years) and trying to improve my techniques. I use PI for almost all of my preprocessing and processing. Your new method of gradient correction interests me greatly; I live and image in a Bortle 6 zone with much light intrusion from streetlights, neighbors, etc., so I spend a LOT of my processing time applying DBE to my images, with varying degrees of success. Therefore, I wish to attempt your new method, but I will probably need to purchase some new gear to do so.
I have several questions to ask you, if I may, realizing that this method is still in its infancy:

1) How much larger than the T1 image should the T2 image be? 3x? 5x? My current FOV is 1.1x0.8 degrees.

2) What kind of quality should I be aiming for on the T2 image? This will dictate my choice of equipment to purchase.

3) Would it be feasible to use a guidescope/camera mounted piggyback on my OTA to obtain the T2 subs? I presently use a guidescope/camera with PHD2 for guiding, but I have an OAG that I've never installed; I could add it to my imaging train, freeing up the guidescope position.

Any guidance you can provide will be greatly appreciated. Many thanks ahead of time!

Clear Skies!

Mike
 
Good day, Vicent,

Another thought related to your new method....

If I were imaging a small planetary or other kind of nebula, and I could live with a final DSO image FOV of roughly 1/6th (0.16x) the size of the maximum practical usable image obtained from my stack, I could call that maximum usable image T2 and the 0.16x crop T1, and use your new gradient correction method on them. I could make it a part of my standard processing procedure whenever it is applicable.

Couldn't I? Hmmmmm

Clear Skies!

Mike
 
Hi Vicent, thanks for sharing this workflow; it is an elegant implementation of an idea I independently arrived at because of the fairly extreme gradients caused by streetlights near my observatory. Since we are only interested in the very low spatial frequencies, I wondered whether the technique might work even better if the wide-field telescope were significantly defocused and the background correction applied to each subframe?
Regards, Andrew
 
Hi Mike,

This is a new technique, so I'm still experimenting. I've worked with data from smaller telescopes with good results. Here are the details:

- The first configuration uses a C9 at f/10 (2300 mm focal length) as the primary telescope; its FOV is 40x32 arcminutes. The secondary telescope is a 180 mm f/2.8 lens working at f/4 with an APS-sized DSLR camera, so its FOV is about 7x5 degrees.

- The second configuration uses an FSQ with a FOV of 3x2 degrees. The secondary telescope is a 100 mm f/2 lens working at f/2.8 with an APS-sized DSLR camera, so its FOV is about 13x10 degrees.
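
(As a quick sanity check, these wide-field FOVs follow directly from the lens focal length and the sensor size. A minimal sketch in Python; the ~23.6 x 15.6 mm APS-C sensor size is an assumption here, since the exact dimensions depend on the camera model:)

    import math

    def fov_deg(focal_mm, sensor_mm):
        # Full field of view, in degrees, along one sensor dimension.
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

    for focal in (180, 100):
        w, h = fov_deg(focal, 23.6), fov_deg(focal, 15.6)
        print(f"{focal} mm lens: {w:.1f} x {h:.1f} degrees")
    # 180 mm lens: 7.5 x 5.0 degrees
    # 100 mm lens: 13.5 x 8.9 degrees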

A few things to take into account:

- You don't need good image quality in T2. Open up your lens, because you need to acquire the faint signal of the diffuse objects. In the second configuration, we tried shooting the lens at f/5.6, but the results were significantly better at f/2.8, even though the stars are bigger and there is chromatic aberration.

- Given that you shoot at a fast f-ratio with the wide-angle lens, you can get enough signal from the faint areas that appear in the T1 image, even though you won't capture any of the small details. That's OK, because we're going to use only the large-scale structures of both images to calculate the gradients (see the sketch after this list). In the second configuration, we exposed half the total exposure time of the T1 image and it worked well.

- I think you can use the guidescope if your camera has the required FOV. If not, a solution could be to put a cheap DSLR on the guidescope.

- The first configuration was working under polluted skies (an SQM reading of 18.5 mag/arcsec²). It worked really well.
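
To make the idea of using only the large-scale structures concrete, here is a minimal conceptual sketch in Python with NumPy/SciPy. It is only an illustration, not the PixInsight implementation from the article: the plain median filter stands in for the multiscale median transform, the function names are mine, and T2 is assumed to be already registered and flux-matched to T1's frame.

    import numpy as np
    from scipy.ndimage import median_filter

    def large_scale(img, size=65):
        # Keep only the low spatial frequencies; a median-based filter
        # rejects stars and other small-scale structures.
        return median_filter(img, size=size)

    def correct_gradient(t1, t2_registered):
        # The gradient model is whatever T1 shows at large scales that the
        # (gradient-free) wide-field image does not.
        gradient = large_scale(t1) - large_scale(t2_registered)
        return t1 - gradient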


I'm planning to write a second part of the article describing the methodology to process these images from smaller telescopes. For the moment, I think this can give you an outline of what equipment you need.


Best regards,
Vicent.
 
Hi Mike,

No, I don't think that would work. The method is based on using two different optics, because most gradients come from reflections inside the optics or from flat field imperfections. So you need a second optic with a shorter focal length: in that optic, the reflections and flat-field artifacts will be of a much larger scale, remaining negligible at the scale of the entire FOV of the longer-focal-length optic.
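
(To put numbers on it, with the first configuration above: T1's 40 arcminute field covers only about a tenth of the 7 degree T2 frame, so an artifact that varies over the whole T2 frame is nearly constant across the small patch that overlaps T1. A tiny illustration, using those example figures:)

    t1_fov_arcmin = 40        # C9 field width
    t2_fov_arcmin = 7 * 60    # 180 mm lens field width
    print(f"T1 spans {t1_fov_arcmin / t2_fov_arcmin:.0%} of the T2 frame")
    # T1 spans 10% of the T2 frame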


Best regards,
Vicent.
 
Hi Andrew,

No, I don't think it would work. Defocusing is similar to applying a big Gaussian convolution to the image. In the article we use the MMT (multiscale median transform) instead, because it is much more effective at removing the small-scale components. Defocusing would smear the light of the stars, contaminating the light from the background areas.
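
(A quick synthetic illustration of the difference, in Python with NumPy/SciPy; this is only a toy stand-in for the MMT, not the tutorial's actual processing:)

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    rng = np.random.default_rng(0)
    img = np.full((256, 256), 100.0)               # flat sky background
    ys, xs = rng.integers(0, 256, 50), rng.integers(0, 256, 50)
    img[ys, xs] += 5000.0                          # bright point-like stars

    # Gaussian blur (like defocus): starlight spreads into the background.
    print(gaussian_filter(img, sigma=8).max())     # noticeably above 100
    # Median filter (median-based extraction): stars are rejected outright.
    print(median_filter(img, size=17).max())       # 100.0, background only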

Best regards,
Vicent.
 
Can this technique be used with images from a sky survey?
If so, would it be possible to have a process similar to PhotometricColorCalibration, like a "photometric background model"?
 
Hi,

No, it won't work. The photometric errors of the catalogs are usually much bigger than the gradients themselves, so modeling a gradient based on a catalog would generate stronger gradients than the ones it removes. I have already tried this.

V.
 
Vicent,

Thank you for the reply and the answers. I am still researching the equipment I can afford that will let me acquire the T2 images with a piggybacked guidescope. My current guidescope (240 mm focal length, f/4) is difficult to match up with a camera I can afford right now that would give a large enough FOV. It is good to know that I won't need high-quality T2 images, and that I should be able to use less total T2 exposure than for my T1 images.

I look forward to the next part of this tutorial.

Clear Skies!

Mike
 
Vicent,

Thanks for the reply! I might still try the tactic I asked you about when I have some spare time, and if I can find a data set in my archive that will let me try it out. I'll never know till I try, I guess, but I won't hold my breath that it will work, or work well enough to be worth the trouble.

Clear Skies!
Mike
 
Vicent,

Excellent tutorial.

I was wondering if a tool could be made that uses a library of "off-world" space probe backgrounds, similar to what Axel Mellinger did with his awesome Milky Way panorama. He states the following: "The fields were photometrically calibrated using standard catalog stars and sky background data from the Pioneer 10 and 11 space probes." Given all the space probes out there now, one could generate a nice gradient-free image of the entire sky. Granted, there could be some parallax differences, but that's no sweat for the excellent capabilities of PixInsight. Any thoughts?

Wade
 
Would this work if you were to shoot an object with a CLS/light-pollution/tri-band/dual-band filter, then shoot it again with broadband, and use the lower-gradient background from the filtered image to correct the broadband one?
 