Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - gamempire

Pages: [1] 2
General / Re: Satellite streak: reject image or treat later?
« on: 2019 April 01 09:27:59 »
Hi Nicco!

With a faint satellite streak such as the one in your image, I've not had the best luck getting the streak completely rejected when integrating a stack where I may only have 5 or 6 frames per filter.

Rick and Juan posted a great PixelMath expression in another thread that draws a solid white antialiased line wherever you have a satellite streak. It makes the rejection algorithm's job much easier.

d = d2seg(x1, y1, x2, y2);
iif(d <= r, $T*d/r + ~(d/r), $T)

with declared symbols:

d, r = 2
x1, y1 are the pixel coordinates where the streak starts, and x2, y2 are where it ends. Sometimes you'll need to increase the width of the line, so adjust r accordingly.
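For anyone who wants to experiment with this outside PixelMath, here's a rough numpy equivalent of what the expression does: d2seg is the per-pixel distance to the streak's line segment, and ~x in PixelMath is 1 − x, so pixels blend toward white as they approach the line. This is only a sketch; the function names and coordinates are placeholders.

```python
import numpy as np

def d2seg(h, w, x1, y1, x2, y2):
    """Distance from every pixel of an h x w image to the line
    segment (x1,y1)-(x2,y2), like PixelMath's d2seg()."""
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = x2 - x1, y2 - y1
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = ((xs - x1) * dx + (ys - y1) * dy) / max(dx * dx + dy * dy, 1e-12)
    t = np.clip(t, 0.0, 1.0)
    return np.hypot(xs - (x1 + t * dx), ys - (y1 + t * dy))

def paint_streak_white(img, x1, y1, x2, y2, r=2.0):
    """iif(d <= r, $T*d/r + ~(d/r), $T): solid white on the line,
    antialiased falloff out to radius r, untouched beyond it."""
    d = d2seg(img.shape[0], img.shape[1], x1, y1, x2, y2)
    blend = d / r                      # 0 on the line, 1 at distance r
    return np.where(d <= r, img * blend + (1.0 - blend), img)
```

On the line d = 0, so the pixel becomes pure white (1.0); at d = r it smoothly returns to the original value, which is what keeps the edge antialiased.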

Now, the one issue I ran into with this method: if I drizzled my data afterwards, the satellite streak suddenly came back. When drizzling, the drizzle data file refers to your cosmetically corrected images, not the registered ones. So what I've had to do is draw the line on the cosmetically corrected image, and then go through my normal alignment -> integration -> drizzle process. The streak gets rejected during integration, and the drizzle data file then rejects it when you drizzle.

The other option may degrade the SNR a bit in the area of the streak, but sometimes it works when the first process won't. First, register two images before you start your workflow, then make a mask with the line of the satellite trail that masks everything but the trail. Apply the mask to the image with the satellite trail. In PixelMath, type the name of the image without the trail, make sure the result replaces the target image, and apply it to the image with the trail. It will then take data from the clean image and apply it only to the area exposed by the mask.
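In array terms, that replace-via-mask step amounts to a per-pixel blend. This is only a conceptual sketch (in PixInsight it happens through the active mask and PixelMath), and the variable names are hypothetical:

```python
import numpy as np

def patch_from_clean_frame(streaked, clean, trail_mask):
    """Where the mask is 1 (the satellite trail), take pixels from the
    registered clean frame; everywhere else keep the original image.
    Both frames must already be registered to each other."""
    return streaked * (1.0 - trail_mask) + clean * trail_mask
```

Because only the masked ribbon is replaced, the SNR hit is confined to the trail area, as described above.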

General / Re: Sharpening with MLT and deconvolution feedback
« on: 2019 March 21 21:00:33 »
Just a clarification.

When I said I tried the PSFImage script on a Linear image both with/without the STF applied, that was NOT referring to actually stretching the image, only using the STF to view the image on screen so that the stars could be seen.


Got it, Steve, thanks. The reason I asked the question about linear/nonlinear was that I looked up the forum post where the PSF script was first shared, and there was some discussion about using the script on both types of images. That threw me for a bit of a loop, because I know deconvolution is only ever applied to linear images, while MLT can be applied to both. So thanks again for the clarification, and thanks for the feedback and help.  :D

General / Re: Sharpening with MLT and deconvolution feedback
« on: 2019 March 20 21:03:33 »
One more question for you or Rob: When building the external PSF for the stars, should I build a new one for each filter of the narrowband image, or can I safely get away with using the one I built from the Ha data?


Josh - For my images there can be quite a difference in the star sizes in the R/G/B channels so I need to generate a separate PSF image for each channel. Your situation may vary.

However, there is a new script that automatically creates the PSF image for you which makes it very easy.

The script is located at:


Make sure the image you want (Ha, Oiii, etc.) is selected then open the script.
Using the default settings, click on Evaluate. When the Evaluation is done it will show a PSF graph and a picture of the PSF.
Click on Create and it will instantly create a PSF image.

Really easy to use and saves a lot of work.


Well isn't that script just dandy! Thanks for that tip as well Steve!

One more question: should I use the script on a nonlinear or linear image to generate the PSF? I saw advice going both ways, but was just curious about what you do.

Edit: So I ran the script on the nonlinear image with the default settings and got what appears to be a good PSF. But when I tried to run it on a linear image that had been stretched by transferring its STF to HistogramTransformation, it didn't detect any stars with the default settings.

General / Re: Sharpening with MLT and deconvolution feedback
« on: 2019 March 19 21:38:40 »

Steve: Thanks for the tip on using the linear image for the mask. Do you think it makes sense to clip the blacks a bit so the darker areas get more noise reduction?

Josh - You can clip if you want to. The normal stretch done by transferring the STF stretch to HistogramTransformation normally works well. I forgot to mention that you need to invert the mask after it is applied, so that the bright areas are protected while the dark areas get the noise reduction.


yep, already figured that out, thanks!

One more question for you or Rob: When building the external PSF for the stars, should I build a new one for each filter of the narrowband image, or can I safely get away with using the one I built from the Ha data?


General / Re: Sharpening with MLT and deconvolution feedback
« on: 2019 March 19 15:09:54 »
Hi Josh - I looked really hard to try and see the "ringing" problem that is bothering you, and I still don't see it. As far as I can see you are doing minimal sharpening with your Deconvolution, so I wouldn't expect ringing problems.

As for the noise reduction:
1) To avoid "wiping out" any sharpening, make sure to use a mask to protect the brighter areas. For NB work you can just make a copy of your linear image, stretch it, and apply it to your unstretched image as a mask.
2) Try several small Previews (high and low signal areas) to make sure you aren't wiping out your sharpening. This can save you a lot of time.

Hope this helps.
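If it helps to see the stretch itself: PixInsight's screen stretch is built on the midtones transfer function (MTF). Below is a minimal numpy sketch of building such a mask; the midtones value of 0.25 is only an illustrative choice, not something from this thread.

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: fixes 0 -> 0 and 1 -> 1, and maps
    the midtones balance m to 0.5."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretched_mask(linear_img, midtones=0.25, invert=False):
    """Stretch a copy of the linear image into a mask. Inverted, it
    protects the bright areas and exposes the dark ones to noise
    reduction."""
    m = np.clip(mtf(midtones, np.clip(linear_img, 0.0, 1.0)), 0.0, 1.0)
    return 1.0 - m if invert else m
```

A usage note: calling `stretched_mask(img, invert=True)` gives a mask that is bright over the background, which is what you want when the goal is noise reduction in the dark areas.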


Steve: Thanks for the tip on using the linear image for the mask. Do you think it makes sense to clip the blacks a bit so the darker areas get more noise reduction? I've moved over to MLT instead of MMT for noise reduction, since it was still taking 15-20 minutes for a preview with MMT and I'm getting very similar results with MLT.

if you look at the areas inside the grey boxes, you can see some "worms" - bright, linear, slightly bulbous structures that are not present in the original image.

i think you may be able to avoid this simply by masking those lower SNR areas a little better. they do seem to be present in all of your different experiments - i have usually only seen them as deconvolution artifacts but i suppose they can be caused by any sort of sharpening.


Rob: thanks for explaining what was going on with a better technical term in "worms". Using that term with some googling, I was able to find a really good guide by Jon Rista that really breaks down and explains how deconvolution works. The issue was solved by simply adding a small value to global bright; I've posted 3 examples below with 100, 150 and 200 iterations of deconvolution. The only downside is that the guide is unfinished, and was only just beginning to explain regularization and using it for noise reduction instead of doing that as an additional step. I did notice that when I pushed it to 200 iterations I started to get a little worming again, so 150 is probably the sweet spot.

Thank you both again for your knowledge and feedback.


100 iterations

150 iterations

200 iterations

General / Re: Sharpening with MLT and deconvolution feedback
« on: 2019 March 18 13:39:08 »
To my old eyes even the 50 iteration Deconvolution looks OK. I don't see the problems about which you are concerned.

Looks like it's going to be a beautiful mosaic when you are done.


Thanks for the comments and feedback Steve, it's greatly appreciated and helps reassure me that I was on the right track.

The issue I was concerned with in the 50-iteration deconvolution was around the two "pillars" of gas towards the bottom right of the image. If you look around the smaller one that's at the 10 o'clock position relative to the pillar with the box around it, you can see an outline around that pillar of gas that looks similar to the ringing one would see from deconvolution around stars.

Maybe I'm just overanalyzing it because it's my first attempt at spending a lot of time sharpening an image; it probably wouldn't even be very visible after a narrowband color combination unless one were really looking for it. And I hope it won't be visible when I print it (cropped) for my best friend's birthday at 4 feet by 8 feet.

If you and the community would be so kind, could you look at my MLT and MMT noise reduction below? It was applied after 40 deconvolution iterations. I used to use multiscale median transform for my narrowband images to really smooth out the darker areas that tend to be noisy, but when I tried it here (it took 2+ hours to run) it really canceled out the improvements from sharpening.

Thanks again,

MLT noise reduction with stretched lightness mask

MMT Noise Reduction with stretched lightness mask

General / Sharpening with MLT and deconvolution feedback
« on: 2019 March 17 21:20:21 »
Hi All,

I've been trying to get the hang of sharpening my data a bit better for a 2x2 narrowband mosaic of the Carina Nebula I'm working on. I have most of the basic processing workflow down pat, and my background noise reduction for narrowband images seems to be better than for my LRGB images. Kayron's PixInsight guides are always wonderful, but I've been second-guessing my results when it comes to deconvolution and MLT. This is the first image I've really spent a lot of time trying to sharpen, and I guess what I'm looking for is a bit of reassurance that these are the results I *should* be getting.

The one thing I do want to share is that the stars in this hydrogen alpha data aren't as round as they could be in two of the mosaic frames, though it isn't an issue with my OIII and SII data. I gave morphological transformation a try to fix it, but because not all of the stars were elongated I wasn't getting great results, so I figured I'd leave well enough alone.

I definitely see a positive difference between the original image, the MLT results, and the 10-iteration deconvolution, but as you may see in the areas I highlight in the last image, I'm a little worried about what looks like ringing around dark areas that aren't stars, even with the lightest sharpening.

Any feedback would be greatly appreciated  :D

I'm testing it out on the following preview area, which has a nice mix of dark, light, and stars.

This is the mask I'm using. It's a range mask with a star mask subtracted from it. Below it is the actual range mask minus the star mask
(but not the exact same size as this preview), and the full image with the range mask minus the star mask.

These are my settings for Multiscale Linear Transform Test 1

This is the result for Test 1

Multiscale Linear Transform Test 2

Test 2 Results

Now here are my settings for Deconvolution, where I ran it through 10 iterations for my first test.

10 Iterations

And another test with deconvolution, with 20 iterations.

And here is 50, which is clearly overkill and starts oversharpening a ton.
However, I've highlighted a few of the areas that concern me in all of my tests.

Hi PI Community!

I'm trying to optimize my computer and PI settings to try and speed things up a bit, and was looking for some guidance. I've searched the forums here and elsewhere, and there wasn't much information on what the swap write speed should be, just the swap read speeds. I know benchmarks aren't a real world scenario, but I've got 4 benchmark runs to share (two with ram disks, two without).

The computer is a 15" MacBook Pro 2018, Core i9 @ 2.9 GHz with 6 cores, 32 GB DDR4 RAM, and a 2 TB Apple SSD (my 2010 MacBook Air would never have been able to run PixInsight). The Blackmagic SSD benchmark shows about 2,500 MB/s write speed and 2,700 MB/s read, which is about where this machine should be from a benchmarking perspective. I know I won't get the best CPU performance from a laptop processor, but it beats my 2nd-generation i7 quad-core desktop machine from years ago in terms of raw horsepower. Nonetheless, I'm more than impressed with how the CPU performs in this machine.

Swap setup is eight swap directories in /tmp on the SSD, plus two RAM disks of 4 GB each, which seems to be the consensus after reading a number of forum posts on swap best practices. Attached are the four PI benchmarks, two with the RAM disks and two without. The transfer speed only drops by about 70-100 MB/s when I remove the RAM disks, but I max out around ~1,100 MB/s.

Looking at the console log from the benchmarks (with the RAM disks), the write speeds are all over the place, as low as 290 MB/s and as high as 1,100 MB/s. Read speeds are between ~4,900 and ~7,000 MB/s.

My read speeds are pretty great, but I'm wondering if anyone can comment or advise me if my write speeds are within reason given the machine's specifications.

Thanks much!

General / Re: DrizzleIntegration using the wrong source images?
« on: 2019 February 13 16:53:46 »
well i can think of two ways out of this:

first is that since you only have one image with the satellite, rather than trying to repair it, just overwrite the streak with 1.0 or 0.0 and then it can be very easily rejected by the normal pixel rejection techniques. the SNR under the streak will be a tad lower, but maybe not perceptible.

the other way would be to do exactly what you are doing, but register one of the good images to the bad image to use as source material for the streak. then just throw away the registered one, and start the whole process from there, as though your bad image was clean to begin with.


The first option will probably be best; I didn't even think of that, so thank you. The second option requires a few more steps just to get one frame of good data, so if I see that the SNR is awful, I'll go with it. In this case, the satellite trail went right through Centaurus A, so keeping the SNR high would be best.

Just a thought though......Are you able to remove the trail prior to star alignment ? If you can....surely doing that, then registering all 6 files (including the 3rd image with a corrected satellite trail) would result in a stack without the trail ?

I find it helpful to have a separate folder of files for each step in calibration. In my registered folder I have the files that have been calibrated/cosmetically corrected/debayered/approved&weighted and registered (i.e. targetname_c_cc_d_a_r.xisf), my drizzle files and my local normalization files. When I integrate the files from this folder PI does an excellent job of removing any stray artefacts that may be left over from cosmetic correction. This updates my drizzle files and I then do a drizzle integration.

Once I've got a Master that I'm happy with I go back and delete all my files up to the point before they're registered. This leaves me with one set of calibrated/CC'd/debayered and approved files that I back up onto an HDD to use later if I do another imaging run on that target.

Dunno if that helps but as I said.....just a thought  :)

That was sort of the gist of Rob's idea: register the satellite-trail image to another image, do my rejection, and then re-register all of the images. That's a couple of additional steps when his first idea of just drawing a solid white line lets the rejection algorithm take care of it in one fell swoop.

Regarding the separate folders, that's exactly what I do, and that's why it was so easy to see that it was pulling the cosmetically corrected files. I do cc (cosmetic correction), alignment (star alignment), integration (image integration), drizzle (drizzle integration, where I also crop the images with DynamicCrop), dbe (dynamic background extraction), linear_fit, and then an rgb folder where I stretch the images and do my LRGB combinations. Yes, it does take up a lot more space, but data storage is relatively cheap these days, even on an SSD. And it makes it much easier to go back to various steps when I need to, especially when I return to a project I've spent a month or two away from. Everything is stored in a folder that syncs with Dropbox. I just wish I was better at documenting my steps for each project, but I'm getting there.

There is a 3rd way Rob......Don't include the sub with the Star trail.....

Sorry....not helpful but couldn't resist  ;)

If only! If I had at least 10 frames per filter, that would be easy, but I usually only have 4 to 6, so every frame counts.

Thanks everyone for the help and ideas.

General / Re: DrizzleIntegration using the wrong source images?
« on: 2019 February 11 18:51:13 »

Here's the process I'm using to fix the satellite trail. I can't fix the cosmetically corrected image until the images are registered to each other, and I understand that the drizzle data files carry the image registration so they can work from the cosmetically corrected files. Basically, I feel like I'm in a catch-22: I can't fix the satellite trail without registering the images, but once I fix the trail, I can't use DrizzleIntegration on the repaired image.

General / DrizzleIntegration using the wrong source images?
« on: 2019 February 11 15:53:43 »
Heya everyone,

I've got a weird problem, and maybe I'm misunderstanding how the DrizzleIntegration process works. Let me set up the scenario.

I have six images for CentaurusA in green, the third image has a satellite trail. All of my images are dithered.

1. Cosmetic Correction
2. Star Alignment (registered and drizzle files generated)
3. ImageIntegration
4. Drizzle Integration

I generated a line 5 pixels wide using 5-d2seg(15,869,3071,1339) in PixelMath, and applied it as a mask to the third image. The line starts at x = 15 because the image is already registered via StarAlignment. I then opened the second image and applied it in PixelMath to the third image. The satellite trail is gone, replaced along the line with data from the second image, and I'm back to having six good images for my green data.
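As a side note on what that expression builds: 5 - d2seg(...) is positive only within 5 pixels of the segment, so clipped to [0, 1] it forms a soft-edged ribbon mask over the trail. Here's a hypothetical numpy sketch (the frame size is a placeholder, not from this post):

```python
import numpy as np

def d2seg(h, w, x1, y1, x2, y2):
    """Per-pixel distance to the segment (x1,y1)-(x2,y2)."""
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = x2 - x1, y2 - y1
    t = np.clip(((xs - x1) * dx + (ys - y1) * dy)
                / max(dx * dx + dy * dy, 1e-12), 0.0, 1.0)
    return np.hypot(xs - (x1 + t * dx), ys - (y1 + t * dy))

h, w = 2048, 3072          # placeholder frame size
trail_mask = np.clip(5.0 - d2seg(h, w, 15, 869, 3071, 1339), 0.0, 1.0)
```

With that ribbon active as a mask, a PixelMath expression naming the clean frame replaces only the trail pixels.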

I then integrate the data using ImageIntegration, making sure "generate drizzle data" is selected so the drizzle files are updated. I then move on to DrizzleIntegration, and that's where the issue is. When I go to integrate the green data, the satellite trail is back in the image. I checked the process console and noticed that it is using the cosmetically corrected files located in a different directory, not the star-aligned files. I opened up the .xdrz drizzle data file and saw that the file name ends in _cc instead of _cc_r. And I'm sure the drizzle files are being updated when I run ImageIntegration, as they are the most recently modified files.

Am I misunderstanding the way DrizzleIntegration works and the files it uses? Did I miss a step in updating the drizzle files to use the star aligned images (it doesn't seem so)?

I've attached 3 screenshots of the basic StarAlignment, ImageIntegration and DrizzleIntegration settings I use. If there's any other data (drizzle files or console logs) that might be helpful in solving the problem (if it is indeed a problem), I'd be happy to provide it. If I'm being dense and just missing a simple setting or step, please just point and laugh at me while I bang my head on the desk (but also fill me in on what I was missing, please).

Thanks for the help!


Quick edit: the reason I'm using the line-masking method is that ImageIntegration was not rejecting the entire satellite trail in my green data, though it was completely successful at rejecting it in my luminance data.

General / Re: Local Normalization FixZero error
« on: 2018 October 25 18:47:41 »
Again, thank you for the advice, Rob. I've added it to my notes.

Just one more question (well, two) about Local Normalization if you'd be so kind. I've attached three screenshots of an Ha, SII and OIII integration of M42. The images were autostretched. Local normalization was used (scale 512), and the reference frame was an Ha image (instead of running LN per filter). As you can see, the SII and OIII images are blown out in the center.

Say I have my narrowband images in Ha, SII and OIII. If I were to use Local Normalization in the future, should I use LN with the brightest overall image as the reference image for all the data? Or should I normalize separately for each filter?

Thanks again to everyone for the help and advice. I still wish I could figure out what's wrong with those OIII files.

General / Re: Local Normalization FixZero error
« on: 2018 October 25 15:53:17 »
Thanks to both of you for the advice. I actually have 23 OIII frames at 120s per frame (I had to discard 9 of the original 32). I only put a few frames in so someone could maybe check them for an issue I was missing, perhaps in the FITS header data. I've now put all the OIII frames in that Dropbox folder if you don't mind taking a look again, Oldwexi.

Also, can I use DBE without doing a local normalization?

I've gotten some pretty great results with the other telescopes; it was just that T5 was giving me a number of issues tracking anything over 120s, and the flats were old, which caused the NGC 281 data to not be so great. Everything else in terms of precalibrated images from the other telescopes has been great, and Pete over at iTelescope was happy to try and recalibrate the data I had from T5 after he redid the flats.

I attached a shot of M16 I did in August and September using T30, and while it's not sharp enough to print (I'm going to work on learning how to mask nebulae next month so I can sharpen dust lanes and such), I'm still very happy sharing it. It was the first image I processed almost completely in PixInsight (there was some clone stamping done in Photoshop to clean up some hot pixels, which was easier than using the clone stamp tool in PixInsight).

General / Re: Local Normalization FixZero error
« on: 2018 October 25 11:09:37 »
Here's a dropbox link to some of the OIII frames:

My NGC 281 project has been a total mess, it seems, due to bad darks and flats that needed a bunch of recalibration. I did a quick HST color combination using PixelMath and got a horrible gradient. I might just scrap the data I have and start over :-\

General / Re: Local Normalization FixZero error
« on: 2018 October 25 10:26:39 »
I'm using precalibrated lights, so I'm unsure why the Ha and SII data are fine and the OIII isn't.

And while I'm no stranger to astrophotography, the reason I've been using iTelescope is that I can't set up my 9.25" SCT anymore after two shoulder surgeries, and I don't have a location to build a permanent structure for it at this time. iTelescope lets me continue my DSO astrophotography until I'm able to do so :)

Thanks for the tip about their calibration files being incompatible with PixInsight. I used to use Maxim to control my own setup, but switched to PixInsight for processing since it's Mac and Windows compatible and I'm on the road a lot with my Mac laptop. When originally running through the tutorials, I wanted to learn how to do all the preprocessing, but couldn't figure out why I was getting poor results with the master darks/flats/bias files provided by iTelescope, so I decided to stick with the pre-calibrated files in my workflow.

But I'll skip LN for the foreseeable future. Coincidentally, I couldn't figure out why I was getting blotchy spots using DBE on another target I was working on, and ran across a post this morning on the forum that explained it was due to Local Normalization.
