Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - dpaul

1
General / Release 01.08.06 - 2 versions?
« on: 2018 December 22 18:28:38 »
Last week I downloaded the new 01.08.06 version of PixInsight (01.08.06.1447). Now I see a new notification for version 01.08.06.1448, telling me I need to repeat the whole process again.
I can't see any reference on the forum to two versions of 01.08.06?


Is this a fix for some problem with the 1447 version?



David

2
General / LAST UPDATES?
« on: 2018 December 14 16:29:09 »
Hi

Just wondering what the most recent update was?  My last one was June 2018 and each time I've checked since, it says I have the latest version. I've had PI for about a year and in the early days there were regular updates.

Thanks

David

3
General / Benefit of Drizzle?
« on: 2018 October 03 17:58:25 »
Hi

Can someone give me advice on whether adding drizzle files during integration will, in my case, help?  I have a 30'' F3.5 scope which, when used with a Paracorr, has a focal length of just over 3 metres.
I'm using a CMOS camera with a 3.8 µm pixel size and 2x2 binning. I know this is seriously over-sampled, but I'm generally happy with the results, aided by the superb optics from Mike Lockwood.
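For context, here's the quick image-scale arithmetic I did (a rough sketch in Python; the ~3.1 m focal length with the Paracorr is my approximation):

[code]
def image_scale_arcsec(pixel_um, focal_mm):
    # Image scale ("/pixel) = 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_mm

focal_mm = 3100.0                               # ~30'' F3.5 plus Paracorr (approximate)
print(image_scale_arcsec(3.8, focal_mm))        # ~0.25 "/px unbinned
print(image_scale_arcsec(3.8 * 2, focal_mm))    # ~0.51 "/px with 2x2 binning
# Even binned, typical 2-3" seeing spans 4-6 pixels, hence the over-sampling.
[/code]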

So here's my question - I think I've seen Juan's original announcement of Drizzle being added in 2014, and there may have been comments that it helps with "under-sampled" images. So in my case, would I expect any benefit?  I'm still confused about what drizzle is really doing, but doesn't it make the image less "grainy"?  I tried redoing an image for the very first time by adding drizzle files (there were plenty of frames, 25 in this case, and I used linear rejection during image integration). It did seem to make a small difference, a little less noisy maybe?

Any comments from experienced users are very welcome. Sometimes I use 3x3 or 4x4 binning with narrowband, so would drizzle be more beneficial there?

Thanks

David

4
Gallery / IC 5146 (Cocoon Nebula)
« on: 2018 August 07 16:05:30 »
The attached image was taken with an Atik Horizon CMOS camera and LRGB filters.
Scope is 30'' F3.5 with Lockwood optics on an 'unguided equatorial platform'.
Each frame was 25 seconds and about 20 frames integrated per filter.

Technically speaking, with a 105'' focal length and a 3.8 µm pixel size this is seriously over-sampled. I used 2x2 binning, which helps reduce star bloating.

David

5
General / STAR ALIGNMENT DIFFICULTY
« on: 2018 June 27 14:21:08 »
Hi,

I've been taking narrowband frames of M57, using a 2x Powermate to increase the image size. The frames are good and, although there are not that many stars in the field, I hoped there were still enough for star alignment. However, no matter what settings I use, I can't get star alignment to work, or at best about 20% of the frames align.


What are the best settings for the most difficult cases? The other problem is that Ha and OIII filters by their nature also reduce the number of visible stars.


It all works without the powermate but not with it.


Is there a way to manually select the reference stars?

David

6
General / ACCESS VIOLATION ERROR
« on: 2018 June 17 16:36:07 »
Hi,

This is a well documented topic on the forum.  I have the latest update of PI and use a Dell i7 laptop with 30 GB of RAM, running Windows 10.

I've had the "access violation" error come up on and off over the last 6 months. Sometimes I can use PI on several occasions and never get the problem. Other days (like today) it happened about 8 times.


As already documented by others, it does help if PI is properly closed down after use (not just using the 'X'). Even doing this I can still get the error part way through a PI session.


I wanted to share my experience because there are two distinct processes which tend to cause this problem for me: DynamicCrop and DynamicBackgroundExtraction.  Is it possible there really is some kind of bug?

It would be useful to have a survey of PI users to ask how many have had the problem, how often, and whether there are any trends in when it happens.


I love PI, so apologies for this note, but it is very, very frustrating - I hope there is a solution within PI itself?


One question - recent PI updates have only been minor ones; wasn't there meant to be a major one soon?
The version I have is 01.08.05.1353

Thanks

David

7
General / ArcsinhStretch vs HistogramTransformation
« on: 2018 March 02 19:46:03 »
Just wanted to share my first attempt at reprocessing the same data of M99. The one with more 'blue' used ArcsinhStretch and MultiscaleLinearTransform for noise reduction. The other used HistogramTransformation and TGVDenoise.

In general the ArcsinhStretch version was processed as follows:

1/. The integrated R, G, B and L images were separately deconvolved, noise-reduced, then given MT star reduction - all using a galaxy mask (inverted and non-inverted) and a star mask.
2/. Combined R, G and B into an RGB image using ChannelCombination.
3/. Background-neutralised the RGB image, color calibrated it and used DBE (with an inverted galaxy mask).
4/. Combined the RGB image and L image by dropping the L onto the RGB using LRGBCombination.
5/. Final bit of tweaking of the saturation and background darkness using curves (with a galaxy mask, inverted and non-inverted).

I'm still a novice, but this just shows the same data can be processed better. I found MLT easier to use for background noise reduction than TGVDenoise, with less chance of a bad result. Also, ArcsinhStretch seems to give better colors than a histogram stretch.
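As far as I understand it (and this is my own simplified sketch in Python, not the actual ArcsinhStretch code), the arcsinh stretch applies one common scale factor per pixel derived from a luminance-like value, so the R:G:B ratios are preserved, whereas a stretch applied to each channel independently compresses bright pixels toward white:

[code]
import numpy as np

def arcsinh_stretch(rgb, stretch=100.0):
    # One common factor per pixel, based on luminance, so color ratios survive.
    lum = rgb.mean(axis=-1, keepdims=True)                 # crude luminance proxy
    factor = np.arcsinh(stretch * lum) / (lum * np.arcsinh(stretch) + 1e-12)
    return np.clip(rgb * factor, 0.0, 1.0)

def per_channel_stretch(rgb, stretch=100.0):
    # Same nonlinearity applied to each channel independently:
    # bright pixels drift toward R=G=B, i.e. washed-out color.
    return np.clip(np.arcsinh(stretch * rgb) / np.arcsinh(stretch), 0.0, 1.0)

pixel = np.array([[0.02, 0.01, 0.005]])   # a faint reddish pixel, ratios 4:2:1
print(arcsinh_stretch(pixel))             # ratios stay ~4:2:1
print(per_channel_stretch(pixel))         # ratios squeezed toward 1:1:1
[/code]

That would explain why the ArcsinhStretch version keeps more color, but please correct me if I've got this wrong.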

Don't know if I did things in the right order but I'm reasonably happy with the result.

(all images taken with an Atik Horizon on a 30'' F3.5 Dobsonian with Lockwood optics)

Thanks

David

8
I've been using TGVDenoise to remove the graininess of the background sky. Let's assume, for example, I have an integrated set of light frames that is still linear, and I want to protect the sharpness of the galaxy's structure.

So I create a clone, stretch the clone and do background noise removal to get a dark sky background, then invert it to create a mask. I then use this mask to protect the galaxy structures from being softened by the denoise process.

I'm using only mild settings - strength, edge protection and smoothness all at 2.0.  The background gets smoothed out nicely, but I then notice faint contour lines around the galaxy.
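My rough mental model of what the mask is doing (a toy sketch in Python, not the actual TGVDenoise internals): the result behaves like a mask-weighted blend of the denoised and original images, so a hard-edged mask switches abruptly between the two, which would look exactly like contour lines around the protected galaxy:

[code]
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_denoise(image, mask, sigma=2.0, feather=0.0):
    # Masked processing ~ blending with the mask as the mixing weight:
    #   result = mask * denoised + (1 - mask) * original
    # A hard-edged mask (feather = 0) creates an abrupt transition, which
    # shows up as contours around the protected object; blurring the mask
    # first softens the transition.
    if feather > 0:
        mask = gaussian_filter(mask, feather)
    denoised = gaussian_filter(image, sigma)    # stand-in for the real denoiser
    return mask * denoised + (1.0 - mask) * image
[/code]

So I'm guessing the fix is to blur/feather the mask (e.g. a convolution on the mask before applying it), but I'd like confirmation.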

How can I avoid this? Am I doing something wrong?  Also, is there another way of getting what I want with a different process?

Thanks

David


9
Can someone help with this general question-

I am OK with calibrating light frames using a Superbias and a master dark (without extracting the bias), then using both in ImageCalibration. Sometimes, however, it warns me that the optimization threshold may be too high when using the default of 3. This seems to occur with luminance frames which have a more saturated sky background - moving the threshold down to 2.0 gets rid of the warning message.

So here's my question - the higher the threshold, the darker the background of the calibrated frames becomes. At 10, for example, the result seems much better. So I'd like to know why this is not good practice?
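For what it's worth, here is my (possibly wrong) understanding of what dark optimization is doing, as a rough Python sketch - the threshold decides which dark pixels count as significant, and the scale factor k is chosen to leave the least dark-current structure behind:

[code]
import numpy as np

def calibrate(light, master_dark, master_bias, k=1.0):
    # Basic calibration with a scaled master dark (flats left out here):
    #   calibrated = light - bias - k * (dark - bias)
    return light - master_bias - k * (master_dark - master_bias)

def best_dark_scale(light, master_dark, master_bias, threshold_sigma=3.0):
    # Only pixels where the dark signal is "significant" are used to judge
    # the fit (my guess at what the threshold setting controls); the noise
    # measure here is a crude stand-in for the real evaluation.
    dark_signal = master_dark - master_bias
    sig = np.abs(dark_signal) > threshold_sigma * np.std(dark_signal)
    ks = np.linspace(0.0, 2.0, 41)
    scores = [np.std(calibrate(light, master_dark, master_bias, k)[sig]) for k in ks]
    return ks[int(np.argmin(scores))]
[/code]

If that picture is roughly right, then raising the threshold just changes which pixels drive the fit, and a darker background wouldn't by itself mean a better calibration - but I'd appreciate a proper explanation.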

Thanks

David




10
General / 2x2 binning - one shot color
« on: 2018 February 21 04:07:26 »
I have a fundamental question about image processing with a one-shot color CMOS camera. When saving raw FITS files using 2x2 binning, I'm having problems debayering the results to get any strong/correct color. I'm wondering if this is because of the binning. I haven't tried 1x1 yet but will do so on the next clear night.

So fundamentally, can PixInsight support processing of 2x2 binned frames?  This is software binning with CMOS, so can the debayering process handle this somehow? It must be scrambling the CFA data, which would explain why the pattern can't be auto-detected and I have to set it manually.
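My own guess at why binning breaks it (a toy sketch in Python, assuming an RGGB layout - I may be wrong about what the camera/driver does internally): binning averages each R, G, G, B quartet into one value, so the binned frame no longer carries a CFA pattern to demosaic, whereas a SuperPixel-style grouping keeps the color sites separate:

[code]
import numpy as np

def bin2x2_raw(raw):
    # 2x2 binning of a Bayer mosaic averages each R, G, G, B quartet into a
    # single (essentially luminance) value - no CFA left, hence a gray result.
    h, w = raw.shape
    raw = raw[:h // 2 * 2, :w // 2 * 2]
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def superpixel_debayer_rggb(raw):
    # The SuperPixel approach does the same 2x2 grouping but keeps the color
    # sites separate, giving a half-resolution RGB image instead.
    h, w = raw.shape
    raw = raw[:h // 2 * 2, :w // 2 * 2]
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])
[/code]

If that's right, then capturing at 1x1 and using SuperPixel debayering would give the same resolution as 2x2 binning while keeping the color - does that sound correct?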

I have an unguided scope, so it would be really beneficial if I could use 2x2 binning with OSC. Also, for my scope it's already very over-sampled with 2x2. I always use 2x2 with mono cameras and of course that's not an issue.

If this isn't currently possible with PixInsight, it would be a great addition if it could be done.

Thanks

David




11
General / Additive Stacking?
« on: 2018 February 20 16:47:29 »
One more question -

I have a large scope (30'') and the Atik Horizon CMOS mono camera - this is a great camera for short exposures with low noise.
My scope is unguided on an equatorial platform, so I can usually get up to 20 seconds of decent tracking. With my large aperture this is perfectly adequate for LRGB frames.
I've also tried narrowband and the results aren't too bad, managing to push the limit to about 25 seconds, just, but more would be nicer.

So here's my question -
I appreciate that additive rather than average stacking when integrating frames is not a great idea from an S/N perspective. However, if such a method were possible during image processing, I could get better results with these short frame lengths. With video astronomy additive stacking is quite common (great for outreach), but can PixInsight do this in post-processing?  If not, I still think it could be a useful tool - hoping it's possible!
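Just to check my own reasoning (a toy sketch in Python - please correct me if this is wrong): the sum of N frames is just N times their average, so the S/N is identical, and the 'brighter' additive look ought to be reproducible after a normal average integration with a simple multiplication or stretch (e.g. PixelMath):

[code]
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(loc=0.01, scale=0.002, size=(25, 100, 100))  # 25 faint frames

average = frames.mean(axis=0)
additive = frames.sum(axis=0)

# The additive stack is exactly N times the average stack, so the
# signal-to-noise ratio is identical; only the overall scale differs.
print(np.allclose(additive, 25 * average))                       # True
print(average.mean() / average.std(), additive.mean() / additive.std())
[/code]

So maybe what I'm really after is just a rescale/stretch of the average stack rather than a separate additive mode?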

Of course another option is 4x4 instead of 2x2 binning but I prefer the 2x2 pixel resolution.

Thanks

David


12
General / Color-Splitter type CMOS Chip vs RGB type
« on: 2018 February 20 16:34:48 »
Hi

I was reading about the benefits of using a color-splitter for CMOS chips to increase sensitivity, see link below:

https://petapixel.com/2013/02/05/panasonic-doubles-color-sensitivity-in-sensors-with-micro-color-splitters/

However, I'm presuming this means the traditional CFA patterns such as RGGB, BGGR, etc. no longer apply. So here are a few questions, for my interest:

1/. Can a color-splitter chip be used for astro-imaging?
2/. Assuming the answer to 1. is yes, can PixInsight handle the data in a debayering process?  If the answer to 1. is no, then maybe the concept of debayering doesn't exist for color cameras that have color-splitter chips?

Thanks

David







13
General / Color Calibration - Incorrect Color Outcome
« on: 2018 February 18 15:35:51 »
Hi

I'm having a frustrating time with color calibration today. This is using a one-shot color camera (Atik Horizon) and LRGB filters.

I've only recently started LRGB imaging and, as can be seen from the posts I made in the gallery section last week, in general my first attempts have shown reasonably good color balance.

That was until last night!

I was taking light frames of NGC 3718 and the resulting combined RGB image is too green. This galaxy should be more blue with a gold/yellow bar across it, but that bar is green and there's not much evidence of blue.  I calibrated each light frame with darks and bias and integrated them into separate R, G, B and L master "linear" images. I then did star reduction using MT and deconvolution - all using a star mask and luminance masks. Then I used ChannelCombination to combine the "still linear" R, G and B.

The resulting background when stretched was green (but I've seen that before in tutorials - I'm assuming that's OK?). The next step was to color calibrate, so I first did BackgroundNeutralization, which removed the green; then I used ColorCalibration with two previews (background without stars) and a bright star for white balance. The result was too green, nothing like images I've seen on the internet of this galaxy. I also tried PhotometricColorCalibration, which at first worked, then refused to when I tried again later - but it still gave a similar result.

After stretching the RGB image with HT and increasing saturation with curves, I still cannot see anything but a green bar across this galaxy.
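For my own sanity, here is how I think the color calibration steps work (my own simplified sketch in Python, not the actual PixInsight implementation) - background neutralisation equalises the background medians of the three channels, and the white reference sets the channel scaling, so a poor white reference could easily skew the whole image:

[code]
import numpy as np

def neutralize_background(rgb, bg_mask):
    # Subtract per-channel offsets so the background medians of R, G and B
    # become equal (bg_mask marks a star-free background preview).
    medians = np.median(rgb[bg_mask].reshape(-1, 3), axis=0)
    return rgb - (medians - medians.mean())

def white_balance(rgb, white_mask):
    # Scale each channel so the white-reference region averages to the same
    # value in R, G and B; a bad reference skews the color of everything.
    ref = np.mean(rgb[white_mask].reshape(-1, 3), axis=0)
    return rgb * (ref.mean() / ref)
[/code]

If that picture is right, maybe my white reference (a single bright star) is the problem?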

Must be doing something wrong?

David








14
General / Debayering One-Shot Color data
« on: 2018 February 17 02:00:51 »
Can someone assist -

I've just started using the Atik Horizon color camera (I already have the mono version and am having good results with LRGB imaging).
I'm having problems converting the one-shot color data from raw to color via debayering.

The sensor is a Panasonic MN34230 4/3" CMOS.

This is the order of play which I 'think' is the correct procedure: I calibrate the raw light frames, then debayer each frame, then do star alignment and integration of the images. When debayering, it doesn't recognise the CFA (Error: Unable to acquire CFA pattern information: Unavailable or invalid image properties). However, if I switch from Auto to a specific Bayer/mosaic pattern, it then works. I don't currently know the pattern for sure, but I think it's BGGR (I'm checking that directly with Atik). My understanding is that Auto used to work on earlier PixInsight versions but now it doesn't (not sure if that's correct?) - either way, that's an issue.
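One diagnostic I thought of trying (a rough Python sketch, not a PixInsight tool; the filename is just a placeholder): comparing the statistics of the four Bayer sub-mosaics of a raw frame. The two green sites should have very similar medians, which would at least confirm where the greens sit and narrow down the pattern:

[code]
import numpy as np
from astropy.io import fits

raw = fits.getdata("light_001.fits").astype(float)   # placeholder raw frame

# The four Bayer sub-mosaics; for BGGR, (0,0) is blue, (1,1) is red,
# and the two diagonal sites are the greens.
sites = {
    "(0,0)": raw[0::2, 0::2],
    "(0,1)": raw[0::2, 1::2],
    "(1,0)": raw[1::2, 0::2],
    "(1,1)": raw[1::2, 1::2],
}
for name, site in sites.items():
    print(name, np.median(site), np.std(site))
[/code]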

So after registering and integrating the frames, the resulting combined RGB image seems to have little or no color. It doesn't matter whether I stretch the data, play around with the curves, try color calibration, etc. - it's just generally gray. The only color that appears is a light tint in the background when saturation is pushed to the limit. I've also tried looking at an individual debayered frame before integration and that's the same. I've also tried non-calibrated frames and it's still the same.

I also tried every pattern in addition to BGGR and it was no better (some gave an overall green hue) - still no general color.  I was using VNG as the demosaicing method, but Bilinear and SuperPixel didn't work either.

Attached are the statistics (clipped and unclipped) for a single raw frame that was debayered with BGGR / VNG (no calibration or registration) - does this data look OK?

I have a 30'' scope and the frame length was 15 seconds - this is a lot of data with this aperture and proves more than enough for LRGB imaging, so I'd be surprised if it's a weak signal. Maybe I'm doing something fundamentally wrong when saving the data, but I'm saving RAW/FITS so, again, I'd be surprised.

Thanks in advance for any assistance.

David












15
General / FLATS QUESTION
« on: 2018 February 14 01:51:28 »
Hi,

Presently I only take darks and bias frames, and no flats. This is because I have to return my 30'' scope to a cabin after observing, and the width of the entry means I can't leave a camera/filter wheel sticking out sideways - it would get knocked.

I'm aware of the importance of taking flats with the focus position unchanged from the light-frame captures, and with the same camera rotation.
So here's my question:

How accurate must this position be? For example, if I carefully remove the camera then return it to the scope the following day without adjusting the focuser 'and' noting the rotational position, will that be good enough? The rotation is probably the hardest to replicate - is within 1 or 2 degrees OK, or does it have to be 0.1 degrees?
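To put some numbers on the rotation myself (a rough back-of-envelope sketch in Python; the 2000-pixel radius is just an example): a rotation of theta moves a feature sitting r pixels from the rotation centre by about 2·r·sin(theta/2) pixels, so vignetting should be fairly forgiving but dust motes far from centre move a lot:

[code]
import math

def mote_shift_pixels(radius_px, rotation_deg):
    # Displacement of a feature radius_px from the rotation centre when the
    # camera is rotated by rotation_deg: 2 * r * sin(theta / 2).
    theta = math.radians(rotation_deg)
    return 2.0 * radius_px * math.sin(theta / 2.0)

for deg in (0.1, 1.0, 2.0):
    print(deg, "deg ->", round(mote_shift_pixels(2000, deg), 1), "px")
# 0.1 deg -> ~3.5 px, 1 deg -> ~34.9 px, 2 deg -> ~69.8 px
[/code]

So I'm guessing 1-2 degrees is fine for the vignetting component but not for dust motes near the edge of the field - is that about right?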

Thanks

David
