Author Topic: Trying Pixinsight: need help to finish processing to decide if buying!  (Read 4588 times)

Offline deadwing

  • Newcomer
  • Posts: 5
Hi all,

I'm a former user of Nebulosity + PS, but I recently decided to move to something more astro-specific and started a trial of PixInsight!
I'm impressed so far. I've spent quite a few days getting used to the workflow; I find it really great, and the tools are very powerful. Most of all I find it very intuitive, and the whole object-oriented approach is really fantastic.

My AP configuration is a Nikon D90 DSLR on a 150 mm achromatic refractor, on an autoguided EQ5.

The problem is that after reading and following several tutorials, no matter what I do and in what order (mixing linear and non-linear processes and trying different orders of operations), I end up with images that have the following problems, which I'd like to learn to solve in PixInsight:

1) I have a lot of color noise, which I seem to have tamed a bit using ACDNR on the linear image, but there is still too much
2) I get coloured gradients and have difficulty getting rid of them to produce a smooth but detailed sky background
3) since the scope is an achromat, I also need to find a reliable way in PI to reduce star halos and star sizes
4) even though I selected the correct debayer pattern for my Nikon, I always struggle with color balance (I get a heavily greenish image after calibration/stacking)

The attached example is about 1 hr of M101, in subs of 5 min each at ISO 1000, plus 5 darks (no flats this time, as it was a test). I also use a Baader Semi Apo filter, all 2". Obviously I know that taking flats for calibration, dithering, and so on would improve the images in the first place, but I suspect I could get much more out of the data I have; it's just that I'm not yet able to use PixInsight fully.

Considering that all my images have the same problems, I'd like to know if someone can help me process these, so I can understand how to correctly color balance, and above all how to denoise the image, correctly stretch and darken the background, reduce star halos, bring out some colour, and correctly use Deconvolution to recover details.

The image I have at the moment is the one attached: the result after an automatic STF and a stretch. To get there I already performed ACDNR and several BackgroundNeutralization and DynamicBackgroundExtraction passes. It obviously still shows noise, color casts, etc.

If I could learn to get rid of these problems with my images using PixInsight, I'd buy it in a second. I really love the workflow, but I need someone to guide me this first time :)

I prepared a folder with the original shots and the darks, along with the linear and non-linear FITS images I have at the moment from my processing:

https://www.dropbox.com/sh/yuc6vjw4gbls38i/AACA9Q3KK-Tfr6uQ9ngLTurba?dl=0

If anyone can help me I'd be really grateful!  :)



Offline deadwing

  • Newcomer
  • Posts: 5
Continuing the work, I was able to create a luminance mask, then a mask for the stars; subtracting the second from the first, I obtained a quite good galaxy-only mask (a rough sketch of the arithmetic is below).

By blurring both the star mask and the galaxy mask, I was able to darken the sky alone without affecting the stars and the galaxy, denoise and deconvolve the galaxy a bit, apply some local contrast, and so on.

I also created two more star masks including only the bigger/bloated/CA stars, blurred them and adjusted their histograms to exclude the smaller stars, and with these star masks (in two rounds) I removed the CA and reduced the size of the stars with MorphologicalTransformation.
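For anyone following the same path, here is a minimal sketch of that mask arithmetic in Python with numpy/astropy (the file names are hypothetical; inside PixInsight itself this is just a PixelMath expression like max(lum - stars, 0) followed by a Convolution):

import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

# Load the two masks (hypothetical file names), both scaled to [0, 1]
lum = fits.getdata("luminance_mask.fit").astype(np.float64)
stars = fits.getdata("star_mask.fit").astype(np.float64)

# Subtract the star mask from the luminance mask; clip so the result
# remains a valid mask in [0, 1]
galaxy = np.clip(lum - stars, 0.0, 1.0)

# Blur the mask so its edges don't leave hard transitions when it is
# used to protect the galaxy during background darkening/denoising
galaxy = gaussian_filter(galaxy, sigma=3.0)

fits.writeto("galaxy_mask.fit", galaxy.astype(np.float32), overwrite=True)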

In the end I obtained something like the attached... but it's far from perfect (even at an almost 1:1 crop), still not very well defined, and I can't tell whether it's my PI processing or a problem with the images as they came from the DSLR.

Could any expert attempt a different process from the RAW images and show me how it could be done? A similar result could be obtained in Nebulosity/PS; I was hoping to get cleaner images out of PI.


Offline Torsinadoc

  • PixInsight Enthusiast
  • **
  • Posts: 98
Here is a very quick background cleanup.

I used an aggressive stretch to see the defects (duplicated the image, then stretched it with HistogramTransformation).
I then ran DBE, manually placed the boxes, and applied it to the non-stretched image. I had to set the tolerance higher, around 1.5, to get the corners and keep it from rejecting my samples. I ran DBE a few times and adjusted the tolerance down (I don't know if that is helpful). Then a quick BackgroundNeutralization, ColorCalibration with a galaxy preview as the white reference, and SCNR to remove the green. I'm sure others can do much better. Here is a screenshot and the FIT file of my changes:

https://drive.google.com/file/d/0B1w8Nsl6Rq17T1VEOS1LdlRlTjg/edit?usp=sharing

I probably could have improved the box placement in DBE. For other readers wondering what those boxes do, see the sketch below.
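Conceptually, DBE fits a smooth surface through the median values of the background samples and removes it from the image. Here is a rough Python sketch of that idea, assuming a simple 2-D polynomial model (DBE's real surface splines are far more sophisticated, and all names here are made up for illustration):

import numpy as np

def fit_background(img, samples, order=2):
    """Fit a 2-D polynomial surface to the medians of sample boxes.

    samples: list of (x, y, r) tuples, i.e. box centers and half-size.
    """
    xs, ys, vals = [], [], []
    for (x, y, r) in samples:
        patch = img[y - r:y + r, x - r:x + r]
        xs.append(x); ys.append(y); vals.append(np.median(patch))
    xs, ys, vals = map(np.asarray, (xs, ys, vals))

    # Design matrix of monomials x^i * y^j with i + j <= order
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]
    A = np.column_stack([xs ** i * ys ** j for (i, j) in terms])
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)

    # Evaluate the fitted surface over the whole frame
    Y, X = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return sum(c * X ** i * Y ** j for c, (i, j) in zip(coeffs, terms))

# Subtraction-mode correction: remove the gradient, keep the pedestal
# corrected = img - fit_background(img, samples) + np.median(img)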
« Last Edit: 2014 September 06 04:51:06 by Torsinadoc »

Offline Don

  • Newcomer
  • Posts: 47
My result is similar to your second one, but I didn't darken the sky background so much, and I didn't try to fix the bloated stars and CA.

My comments:

Quote
I have a lot of color noise, which I seem to have tamed a bit using ACDNR on the linear image, but there is still too much

ACDNR works best on non-linear images.  I used MultiscaleLinearTransform on the linear image with a Linear Mask enabled, after BackgroundNeutralization and DynamicBackgroundExtraction.
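To illustrate the Linear Mask idea (my own toy sketch, not PI's implementation): the linear luminance is rescaled so that bright, high-SNR structures get a bright mask value and are protected, while the noisy background gets a dark value and receives the full noise reduction.

import numpy as np

def linear_mask(lum, amplification=100.0):
    """Scale the linear luminance into [0, 1]: bright -> ~1 (protected)."""
    return np.clip(lum * amplification, 0.0, 1.0)

def masked_denoise(image, denoised, mask):
    """Blend the original and a denoised version, weighting by the mask."""
    return mask * image + (1.0 - mask) * denoised

# e.g. result = masked_denoise(img, denoised_img, linear_mask(luminance))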

Quote
I get coloured gradients and have difficulty getting rid of them to produce a smooth but detailed sky background

Yes, this was a problem for me too - I think flats would have helped a lot with this.  I used two passes of DynamicBackgroundExtraction, the first with Target Image Correction set to Divide to help with the vignetting, and the second with Target Image Correction set to Subtract.  This left the central part of the image with a fairly uniform background, so I cropped out the edges.
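As rough arithmetic (a sketch only, reusing the hypothetical fit_background helper sketched earlier in this thread), the point is that vignetting is multiplicative while the sky gradient is additive:

import numpy as np

# img, corner_heavy_samples and background_samples are assumed to exist
# as in the earlier sketch.

# Pass 1 - Target Image Correction = Divide: vignetting is multiplicative,
# so divide by a normalized model of the illumination falloff
flat_model = fit_background(img, corner_heavy_samples)
img = img / (flat_model / np.mean(flat_model))

# Pass 2 - Target Image Correction = Subtract: the remaining sky gradient
# is additive, so subtract its model and keep a small pedestal
grad_model = fit_background(img, background_samples)
img = img - grad_model + np.median(grad_model)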



Quote
even though I selected the correct debayer pattern for my Nikon, I always struggle with color balance (I get a heavily greenish image after calibration/stacking)

This is normal with DSLR images and doesn't indicate a problem.  BackgroundNeutralization (normally the first step performed on the integrated image) should take care of it.  Before BackgroundNeutralization, unselecting the "link channels" button on the ScreenTransferFunction panel and doing an auto-stretch should get rid of the strong color cast on the display (ScreenTransferFunction doesn't change the data, only how it is displayed).  After BackgroundNeutralization, auto-stretching with and without the channels linked should yield about the same result with regard to the background, which would indicate that the BackgroundNeutralization process achieved a good result.
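A toy sketch of the idea (assumed names, simplified to a per-channel additive shift; the actual process offers several correction modes and takes a reference image or preview):

import numpy as np

def neutralize_background(rgb, bg_region):
    """Shift each channel so the background medians match, i.e. gray sky.

    rgb: float array of shape (3, height, width).
    bg_region: a (row-slice, column-slice) pair covering pure background.
    """
    out = rgb.copy()
    medians = [np.median(rgb[c][bg_region]) for c in range(3)]
    target = float(np.mean(medians))   # common neutral background level
    for c in range(3):
        out[c] = rgb[c] + (target - medians[c])
    return out

# e.g. neutral = neutralize_background(rgb, (slice(0, 200), slice(0, 200)))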

The StarAlignment process output indicated that your tracking was very good - only a couple of pixels of total movement in DEC and RA over the imaging session, and no significant rotation.  This is good because it means you could use much longer exposures, but it is also bad because it means the camera's read noise pattern remains fixed in place in the integrated image.  Dithering would have helped here.

I think you would get a better image by cutting the ISO in half and doubling the subframe exposure times to 10 minutes for this subject.  There is no substitute for photons.  Also, dithering would have allowed you to use the Bayer Drizzle process in PI, which would have improved the resolution significantly.
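As a back-of-the-envelope illustration of the photon argument (the numbers here are invented; only the scaling matters): shot-noise-limited SNR grows with total exposure, and fewer, longer subs pay the read-noise penalty fewer times.

import math

signal_rate = 100.0   # invented: photo-electrons per pixel per minute
read_noise = 8.0      # invented: e- RMS added by each frame readout

def stack_snr(sub_minutes, n_subs):
    s = signal_rate * sub_minutes                # signal per sub
    noise = math.sqrt(s + read_noise ** 2)       # shot noise + read noise
    return (s / noise) * math.sqrt(n_subs)       # stacking averages noise down

print(stack_snr(5, 12))   # 12 x 5 min  -> ~72.9
print(stack_snr(10, 6))   # 6 x 10 min  -> ~75.1: same total time, better SNR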

In short, I don't think you can improve much on this image because of the lack of flat and bias frames, and because there just isn't a lot of signal there. 

Don


Offline deadwing

  • Newcomer
  • Posts: 5
Hi all,

thanks for your attempts and help; things are much clearer now. I suppose my main concern was whether the main problem was something I hadn't grasped in the processing, or more about the imaging itself and the equipment.

Following your suggestions I obtained an image similar to the second one I posted, so indeed I suppose the way forward is DITHERING and using a lower ISO (probably 800 or lower) with longer exposures.

I'm just not sure how to achieve dithering: I guide with PHD1 (I can use PHD2 as well) and shoot with a Nikon, so there is no computer control of the camera. How can I set up/perform dithering manually in PHD, and what do I need to do later in PI to use the dithered subs to average out the noise and gain resolution? Doing this plus flats (which I usually take), I'm curious to see how much the final result improves. I can post here once I do the test, possibly tonight, if the weather forecast holds and I figure out how to do it!

Offline Don

  • Newcomer
  • Posts: 47
All you need to do to achieve dithering is move the mount a little between subs.  Since you use PHD but have no camera control software, you will need to stop guiding, adjust the mount position (using PHD to visualize the movement of the guide star), then re-engage guiding and acquire the next sub.  You can use the Manual Guide feature in PHD (on the Tools menu), or use whatever application controls your mount (EQMOD?) to move the mount after stopping the guiding.  You want to move it enough to offset the image by a few pixels in X and in Y, then re-engage guiding.
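If it helps to plan the session, here is a trivial sketch of picking one random offset per sub (pure illustration; the values are in imaging-camera pixels and would need converting to guide-camera pixels or Manual Guide pulse durations depending on how you move the mount):

import random

def dither_offsets(n_subs, max_px=5.0):
    """One random (x, y) offset per sub, in imaging-camera pixels."""
    return [(random.uniform(-max_px, max_px),
             random.uniform(-max_px, max_px)) for _ in range(n_subs)]

for i, (dx, dy) in enumerate(dither_offsets(12), start=1):
    print(f"sub {i:2d}: offset X {dx:+.1f} px, Y {dy:+.1f} px")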

PHD has a feature that automates dithering, enabled by selecting "Enable Server" on the Tools menu, but this requires a client application that can signal PHD when an exposure ends, so that PHD can disengage guiding, move the mount, and re-engage guiding.  I don't know of such an application that works with Nikons, except for a beta application called BackyardNIKON, which I understand is currently buggy but may be worth a try.  If you want to try it, go to the BackyardEOS home page, click on Support, and follow the link to the user forum.  Search there for "backyardnikon" and you will find numerous messages with links to the beta download (I assume you would have to register, but wouldn't need to pay, in order to download the beta).  The developer (the same guy who created BackyardEOS) is actively working on BackyardNIKON, so be sure to locate the latest beta if you want to try it.  I'm not sure what kind of connection BackyardNIKON requires between the camera and the computer: perhaps just the USB cable that normally connects the camera to the computer, or perhaps a different exposure control cable.  You should be able to find answers to these questions on the BackyardEOS forum.

There may be other applications out there that can automate dithering with or without PHD that I'm not aware of.

You don't need to do anything different in PixInsight to take advantage of dithering.  To use the Bayer Drizzle procedure, you will need more than a handful of dithered lights (at least 10; more is better), and you should read this thread first to fully understand the procedure:  http://pixinsight.com/forum/index.php?topic=7184.0.


Don


Offline Alejandro Tombolini

  • PTeam Member
  • PixInsight Jedi
  • *****
  • Posts: 1267
    • Próxima Sur
Hi,
I have downloaded your m101_linear.fit for reduction of the halos. See here to follow the process.

Saludos, Alejandro.

Offline deadwing

  • Newcomer
  • Posts: 5
Hi Don,

indeed, I did my research this morning: I already got the latest version of BackyardNIKON, and I asked for the serial/USB cable I need to enable remote bulb on my D90. Then it should be easy to make it all automatic, with BackyardNIKON communicating with PHD.

In the meantime I'll try manual dithering between exposures, allowing 1-2 minutes for the manual dither and for the guiding to stabilize again (letting the sensor cool down as well) before the next image. I also read the procedure for drizzling in PI, so I'll make sure to take 10-20 dithered exposures.

That way I'll check whether the noise and SNR are better and whether drizzling does any good; if so, I'll proceed to automated dithering and drizzling in PI!

Offline deadwing

  • Newcomer
  • Posts: 5
Quote
Hi,
I have downloaded your m101_linear.fit for reduction of the halos. See here to follow the process.

Saludos, Alejandro.

Thank you very much! REALLY useful, and I'm happy if someone else benefits from this too; I suppose many people cannot afford APO or ED refractors and will be glad to see such a clean way of getting rid of halos  :D