Combining multiple nights of data with different flats

Jkulin

Well-known member
Hi,

I wonder if I can ask for some advice?

I have nearly 20 hours of data in LRGB taken over three nights (23rd and 24th March and 14th April). The first two nights were at 300s exposures for the lights, with different flats but the same bias and darks; the final night was at 600s, with different flats and different darks but the same biases.

All taken with my Moravian G2-8300 CCD mono on my 10" RC.

What would be the best way to combine all the data?

1. Should I stack each night and create masters of each band?
2. Can they all be stacked in WBPP and let it churn out the results?

Is there a better way within PI to do this?

I can't reuse the same flats, as I have had problems before with dust bunnies varying, hence the new flats on each night.

I have created Master Darks at 300s and 600s and a Superbias.

Any assistance would be appreciated.

Many Thanks.

Keep Safe,

John
 
in short you need to make multiple runs. however, using the "add custom" feature of WBPP you can make up filter names for each night, adding both flats and lights under those filter names for each night. you will end up with a bunch of different masters for each real filter, but then you can use ImageIntegration to just integrate all those masters.

rob
 
Thanks Rob, it took a while but churned away and produced good masters.

The only problem I have is that with the 300s Lum I have a nasty dust bunny, but all the other files are really clean.
 
i hate it when that happens... if it is only one i suppose you might try modifying the flat for that night, or try using StarHaloReducer on the integration, if the dust bunny isn't too smeared out.

rob
 
Unfortunately, Rob, it is a horrible one and I honestly don't know where it came from. There's a bunny and an artifact, and the artifact is the real issue:
[Attached screenshot: 1588022863197.png]
 
yeah that is bad. it looks like one of them moved since there's a bright one just below the galaxy cluster.

that is going to be a nice image though!
 
Thanks Rob, I've ended up losing 42 x 300s subs as they just didn't look right. The first attempt is a bit brash, so I will work on it more this week for some subtle transitions.

The image integration and SA went perfectly, so I was pleased with that.

Much appreciate your help.
 
Concerning the dust bunny... it sure seems like a good candidate for the Selective Rejection technique. You can simply put the 600s exposures' data into those dust motes. Since it is mostly on the background sky, I suspect the slight difference in S/N wouldn't be noticeable, and it would be real data there. Is this something you would be interested in?

-adam
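The basic idea of putting the other stack's data into the dust motes can be sketched as a simple mask-driven blend. This is a hypothetical NumPy illustration, not Adam's actual Selective Rejection workflow; the function and array names are made up, and it assumes both integrations are already star-aligned, flux-matched, and scaled to [0,1]:

```python
import numpy as np

def patch_with_other_stack(img_300s, img_600s, mask):
    """Hypothetical sketch: substitute 600s data into the dust motes.

    img_300s -- integration affected by the transient dust bunny
    img_600s -- clean integration, star-aligned and flux-matched to img_300s
    mask     -- float array in [0,1]: 1 inside the dust motes, 0 elsewhere
                (feathered edges give a smooth transition)
    """
    # Linear blend: keep the 300s data where the mask is 0, take the
    # 600s data where it is 1, and mix smoothly in between.
    return img_300s * (1.0 - mask) + img_600s * mask
```

With a feathered (slightly blurred) mask, the transition between the two stacks becomes invisible, and only the small masked region carries the slightly different S/N of the 600s data.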
 
Hi Adam,

Thank you for your reply. Like most of us, I hate losing hard-won data. This is a quick screen capture of the 600s, with nothing done to it:

[Attached screenshot: 1588075003086.png]


So how do I use the selective rejection technique?

Thanks for your help.

Regards,

John
 
I image with a stock DSLR and a camera lens. Sometimes, during the night, a spider (the arachnid, not the mirror-holder :)) decides to crawl over the lens aperture, leaving a shadow on my images that new flats won't correct. (This is my best theory so far for the "transient dust bunnies" I've seen on some of my images :p) [Edit: I can't confirm this].

If I have "clean" data from the same part of the sky, I can remove the shadow by creating a "synthetic flat" using my "clean" data. Here's what you can try:
  • Let clean be your dust-free integrated image and target your problematic one.
  • Align the clean to target by using StarAlignment. Now you have a clean_registered image.
  • Create a new synthetic_flat image by using the following PixelMath expression:
synthetic_flat = target/clean_registered * med(clean_registered)/med(target) - 0.5
The med() terms compensate for overall signal differences between the images.
The 0.5 offset keeps the result within the [0,1] range (see next step).
  • From your synthetic_flat, remove the first few* layers with MultiscaleMedianTransform.
  • By using the following PixelMath expression, create the target_corrected image,
target_corrected = target/(synthetic_flat + 0.5)
  • Enjoy!
(*) You'll have to experiment in order to decide how many layers to remove.

If your dust donuts are isolated and well-defined, you may apply the correction by using a mask, revealing only the affected parts of your target image. This will reduce the SNR damage introduced by the application of a synthetic flat to the whole image.
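The recipe above can be sketched numerically. This is a minimal NumPy/SciPy illustration under stated assumptions: both images are float arrays in [0,1] and already registered, and a large-kernel median blur stands in for the MultiscaleMedianTransform step (a crude substitute, not PixInsight's actual MMT):

```python
import numpy as np
from scipy.ndimage import median_filter

def synthetic_flat_correction(target, clean_registered):
    """Sketch of the synthetic-flat trick described above.

    target           -- integrated image containing the transient dust shadow
    clean_registered -- dust-free integration, star-aligned to target
    Both are float arrays scaled to [0, 1].
    """
    # Ratio image: the dust shadow survives, real structure divides out.
    # The med() terms compensate for overall signal differences; the -0.5
    # offset keeps the result near mid-grey, inside [0, 1].
    synthetic_flat = (target / clean_registered
                      * np.median(clean_registered) / np.median(target) - 0.5)

    # In PixInsight the next step removes the first few layers with
    # MultiscaleMedianTransform; here a wide median blur stands in,
    # keeping only the large-scale shadow.
    smooth_flat = median_filter(synthetic_flat, size=15)

    # Undo the offset and divide the flat out of the target.
    return target / (smooth_flat + 0.5)
```

As in the PixelMath version, applying this through a mask over the affected region only would avoid touching the SNR of the rest of the image.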

I took the liberty of correcting your image using the .png images posted here:

[Attached image: SynthFlatForTransientDust.jpg]


I am pretty sure there are more correct ways to normalize the images and to keep everything within the [0,1] range, but for the moment, I'm happy, and I always welcome suggestions for further improvement :)!
 
Thank-you so much Dld, during this lockdown this will give me something more to learn.

I've not used the MultiscaleMedianTransform so that is something new for me to experiment with.

I truly appreciate your time and help.

Clear skies and stay safe!
 
You are welcome!

I've attached the necessary Process Icons for your convenience. Unzip it and right-click on your PI desktop to import them by selecting Process Icons > Merge Process Icons.

The MultiscaleMedianTransform was used to remove the fine details of the synthetic flat. For your png images I've removed the first three layers. For the real-sized images you may have to remove more. Experiment, and let us know!
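To give an intuition for what removing the first layers does: dropping the small-scale median layers is roughly equivalent to median-smoothing the image at the corresponding scale, so fine details (stars, noise) vanish while large structures (the dust shadow) survive. A crude stand-in sketch, assuming dyadic scales and using repeated median filters rather than PixInsight's actual MMT algorithm:

```python
import numpy as np
from scipy.ndimage import median_filter

def drop_small_scale_layers(img, n_layers=3):
    """Crude stand-in for removing the first n layers of an MMT.

    Repeated median filtering at growing dyadic kernel sizes suppresses
    structures smaller than ~2**n pixels while preserving larger ones.
    """
    smoothed = img
    for j in range(1, n_layers + 1):
        # Kernel grows with the layer scale: 3, 5, 9, ...
        smoothed = median_filter(smoothed, size=2 * 2 ** (j - 1) + 1)
    return smoothed
```

For the full-resolution frames, a larger n_layers (larger surviving scale) would be needed than for the downsized .png versions, which matches the "experiment with how many layers to remove" advice.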

Clear skies and stay safe too!
 

Attachments

  • SynthFlatForTransientDust.zip
    1.8 KB · Views: 79