PixInsight Forum (historical)

PixInsight => General => Off-topic => Topic started by: georg.viehoever on 2010 February 03 00:39:14

Title: Image Integration, Winsorized Clipping, Default
Post by: georg.viehoever on 2010 February 03 00:39:14
Hi,

a couple of questions about ImageIntegration:
- The purpose of Winsorized Sigma Clipping is to remove outliers from the data. Causes of outliers can be hot pixels, airplanes passing through the image, ... But will it also reduce noise?
- What would be typical pixel rejection rates? Applying the default sigma of 2.0 to my Canon EOS 40D images yields something like 10-15% rejection. Using a sigma of 3 gives around 1%, which seems more reasonable.
- Considering that the noise characteristics of Canon EOS 40D images are different for the 3 channels (red has the most noise), would it be necessary to have different sigmas for the 3 channels?
- From reading other posts, I have the impression that "Weights=Noise Evaluation"+"Winsorized Sigma Clipping" are the most powerful options. Why aren't they the defaults that are selected when starting the ImageIntegration dialog?

Cheers,
Georg
 
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 03 04:03:47
Georg
This is new to me..
Are you talking about stacking subs/calibrating them..?
My understanding was that Pix did not even do this..
I need to hear a bit more about your question...
excuse my ignorance here :-[
I know when I stack subs in DSS these sorts of features are available (ie averaging around a "sigma" value..)
What Pix trick/tool am I missing out on here..?

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Jack Harvey on 2010 February 03 06:00:37
Dave, the ImageIntegration tool is a real gem.  It is used to stack subs.  After you have calibrated your images and registered them, you open ImageIntegration and load a stack of images (usually from a specific filter, i.e. the Red filter).  Then set the parameters.  I am not going to go through them all, but I will tell you I use Average combination, additive normalization, and scaling and weighting by noise.  For pixel rejection I use Winsorized Clipping with scale and zero normalization.

Hit the button and you get three frames: the rejected high and low pixel frames, and the integration frame, which is your image with a lot of the noise now removed.
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 03 09:39:46
Hi Georg

From my understanding, sigma clipping only removes "unwanted" pixels from things like hot pixels, plane trails etc.; the noise reduction comes from standard average stacking of your images (this part is no better than in other programs, as it's simple maths).
I personally find no benefit from separating my RGB channels. I use a sigma setting from 4 to 8 (an average of about 5) and usually reject less than 0.25% of pixels.
If you reject too many good pixels you can end up with a noisier image  ???
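To illustrate the averaging point with a quick toy example (plain Python with numpy, nothing PI-specific): averaging N subs knocks the random noise down by roughly a factor of sqrt(N), which is where the real noise reduction comes from, with or without rejection.

import numpy as np

rng = np.random.default_rng(1)

signal = 0.2                                                  # constant "true" level
n_subs = 16
subs = signal + rng.normal(0.0, 0.01, size=(n_subs, 10000))   # per-sub noise sigma = 0.01

print(subs[0].std())            # ~0.01  (a single sub)
print(subs.mean(axis=0).std())  # ~0.0025, i.e. about 0.01 / sqrt(16)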

I agree generally about default settings and in general it would be nice if we could save new default settings on all tools  ;D  ( Please Mr Juan )

Dave

Now, you have not been watching those videos again    >:D    , we will have to send you to stand in the corner  :'(

PixInsight has excellent stacking abilities; it's only calibration that cannot be done easily  ( yes I know you can do it in PixelMath  :footinmouth: )


Go and have a look and be impressed


Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 03 09:41:36
Hi all,

I have spent a LOT of time messing around with ImageIntegration, and have had quite a few exchanges with Juan on the whole issue of 'robust statistical analysis' - which is at the heart of II.

I would love to be able to give you a detailed insight, but I am 'lost in France' at the moment, stuck in some crummy hotel that seems to think 'Internet Access' was a rock'n'roll band from the late '60s  :yell:

Maybe I can get something written up 'offline' - but I wouldn't then have access to a lot of my notes and scribbles. So, unless someone else wants to step up to the mark, my response might take a week or so to put together.

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 03 09:45:52
Quote
yes I know you can do it in pixel maths

Unfortunately, no - you can't  :'(

Not if your 'source' images (Bias, Flats, FlatDarks) are initially ('natively') FITS 32-bit Float images, with data in the 00000 to 65535 range.

I have yet to see my reported 'bug' problem actually get resolved. There has been NOTHING that I have been able to do to get the image data correctly scaled, and that currently means that all of my attempts at calibration within PI will fail.

The latest suggestion that I have seen is that I need to change from a wireless keyboard to a wired keyboard - but I think that is a suggestion better reserved for April 1st !!!

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: mmirot on 2010 February 03 10:54:15
Quote
Hi Georg

From my understanding, sigma clipping only removes "unwanted" pixels from things like hot pixels, plane trails etc.; the noise reduction comes from standard average stacking of your images (this part is no better than in other programs, as it's simple maths).

Harry

I find the rejection and integration much more powerful than most, if not all, programs I have tried.

It does more than removing airplane trails. It will improve your image's S/N if done right.

You should dither your sub images to receive the maximum benefit.

Generally 1-1.5 percent or less needs to be rejected.

This varies a lot with sky conditions, dither, camera etc. 25% rejection is way too much.
If you have a CCD with fewer hot pixels you will see/need a lower rejection percentage.

With my CCD it is fairly easy to see if the rejection is good. The sensor is big enough that there are plenty of cosmic ray hits and often a satellite trail to reject. I move the slider until they are pretty much gone. I see the result on the high map after an STF.

I watch the S/N outputs on the console; if you are doing better the numbers decrease. I wish I knew more about the console output, i.e. S/N versus S/N increment. Someday they will let me in on the secret.  ???

Max



Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 03 11:11:18
Hi

Just to repeat, I did say 0.25% (a quarter of a percent), not 25%   :P

Yes, I get the best results from using the tools in PixInsight     :D    If you want the best S/N, do not reject any pixels at all; pixel rejection does not increase your signal to noise   :)
Using the noise evaluation (weighting) makes the most of your images, and this is where the improvement comes from!

I will also say again that 1 to 1.5 percent rejection on average is too high (this is based on a 4 megapixel image). I examine my rejection maps and change the settings until only the hot pixels / planes / sats etc. are shown; there should not generally be a background showing in these maps!

As you know, you can see on the console the number of rejected pixels per image

All this is of course my opinion , which is often wrong  :yell:

Harry

Title: Re: Image Integration, Winsorized Clipping, Default
Post by: mmirot on 2010 February 03 12:11:23
Quote
Hi

Just to repeat, I did say 0.25% (a quarter of a percent), not 25%   :P

I will also say again that 1 to 1.5 percent rejection on average is too high.
As you know, you can see on the console the number of rejected pixels per image.




I gave 1 to 1.5 percent as an upper limit.
If you have more than this, normalization is often not working well.
This can happen if you add in some images with bad gradients, weak clouds etc.
Rejection will improve your S/N if you dither your subs.
Then non-performing pixels are rejected out of the set of subs.

Max
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 03 12:42:10
Yikes !!
More gadgets...
Isn't this (sort of) what DSS does for me...???
I see it has the usual 8-28 "parameters"...
Still another weapon in my quivering belt... >:D
Perhaps one of you would be good enough to say a little more about this tool...and how it differs from DSS...
I have to say I like it !!
A "baseline" cookbook style set of settings would be a good starting point...What "percentile" rejection is a good one for example,etc.

Harry,I must have missed all this,AND the video...
Boy....my Pix powers will be up their with Dr Evil soon.. >:D

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 03 12:48:12
Hi Dave

go watch http://www.harrysastroshed.com/Stack.html (http://www.harrysastroshed.com/Stack.html)

As I have not used DSS I cannot tell you how it differs  ???


Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 03 13:14:41
Harry
Just did my homework...can I have a cookie now..?

Well obviously I will have a "throwdown" between DSS and Pixinsight...winner takes my business.. >:D
But I will still use DSS to "calibrate"...
I understand Luc, the developer of DSS, is/was a Pix user.
Maybe he does not image these days...just wondered why I never see him around (here)...
? Software politics ??

Thanks again Harry!!
What NUMERICAL value is "sigma"...and why choose 5..??

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 03 13:23:17
HI

I do not know the technical answer for sigma, but I know that the higher the number, the fewer pixels are rejected  8)

There is no fixed number; it varies due to things like signal, but mainly the number of images you are stacking. With more images PI is able to decide with more
certainty which are the outliers  :)  so you have to experiment to get the right number, not too high and not too low  :o

Harry

Go on then only One cookie  ;D
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Simon Hicks on 2010 February 03 13:29:58
DSS is so simple and does all the other stages of alignment as well. It's a joy to use (well done Luc). However, PI is pushing the limits and I suspect it's somehow going to be better.  So how do I convince myself to learn how to do everything in PI?

Can anyone else tell us how the PI stacking differs from DSS? Is the PI version better? If so, why? Are there any comparative results?

And assuming I start using the PI route for stacking....should I still use DSS to do the initial calibration (darks/flats/bias) and alignment....and just output 32bit calibrated/aligned images to read into PI?
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Jack Harvey on 2010 February 03 13:44:38
Harry, you can have your defaults saved.  In Tips and Tricks see Basic Processing Icons.  If ImageIntegration is in your set with the proper defaults, then you are good to go by starting a processing session by loading your Basic Processing Icon set.
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 03 17:15:11
Simon
I think that is a good (if delicate) point...!!
I bodged around with Pix vs DSS a bit tonight...darned if I can see a huge difference...
Having said that maybe Jack or Juan,or one of the "Jedii" can weigh in on this... >:D

Having said that, I am in love with other features of Pix...
Just because she has the same shoes as DSS doesn't mean she does not have other great qualities... etc, etc, gag...
Interestingly, when I happened to "autostretch" some of my stacks and look at them "up close"... I see (to my horror  :surprised:)
the same kind of linear "streaking" I was troubled with in DSLR work...
Maybe I should just not look so closely...

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: georg.viehoever on 2010 February 04 00:16:53
Simon,

...
And assuming I start using the PI route for stacking....should I still use DSS to do the initial calibration (darks/flats/bias) and alignment....and just output 32bit calibrated/aligned images to read into PI?

this is what I am currently trying to do. I use DSS to get the calibrated images (dark+bias+flats), and use PI for registration and integration. So far I can tell you that it takes much longer (due to all those fiddle factors and tricks that you can play with in PI), but the results are promising. More when I have something to show.

Georg
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: mmirot on 2010 February 04 08:17:17
I am sure most of the time DSS does a very good job.

Now, can someone tell me
what the "Gaussian noise estimate" and "average S/N increment" measurements in the console output are?

Max
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: vicent_peris on 2010 February 04 09:37:21
Hi,

ImageIntegration implements my idea of looking for minimum noise in the resulting image. Noise is evaluated through an algorithm designed by Jean-Luc Starck, where the basic idea is to measure noise where there are no significant wavelet coefficients. The combination of these two ideas, IMHO, is a killer. It's simple, because it looks for exactly what we want: to minimize noise. And it's robust, because the noise estimation is not influenced by local contrast. Just make a comparison with and without rescaling through noise estimation and you will see the difference.

The Gaussian noise estimate in the console tells us the noise amplitude of the combined image, and the average S/N increment is the increment in signal to noise ratio from the reference subexposure to the combined image.
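For anyone curious about the mechanics, here is a very rough Python sketch of the kind of iteration Starck and Murtagh describe (measure the noise only where there are no significant structures). It is a toy simplification, not the actual ImageIntegration implementation, and the smoothing kernel and thresholds below are placeholder choices:

import numpy as np
from scipy.ndimage import uniform_filter

def noise_sigma(img, k=3.0, iterations=5):
    # Toy version of the multiresolution-support idea: take a crude "finest
    # detail" layer and iteratively exclude pixels belonging to significant
    # structures before measuring the standard deviation.
    detail = img - uniform_filter(img, size=3)
    sigma = detail.std()
    for _ in range(iterations):
        insignificant = np.abs(detail) < k * sigma
        sigma = detail[insignificant].std()
    return sigma  # a scale correction factor is still needed for absolute values

rng = np.random.default_rng(2)
flat = 0.1 + rng.normal(0.0, 0.01, size=(256, 256))
print(noise_sigma(flat))  # roughly proportional to the true sigma of 0.01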


Vicent.
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 04 09:39:51
Hi

Damn, Vicent beat me to it. I was going to say that, honest   O:)



Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: mmirot on 2010 February 04 10:22:25
Thanks

Max
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 04 10:26:19
In my 'playing sessions' with ImageIntegration - just before I was shipped out to this infernal 'net-less' hell-hole - I found that it paid HUGE dividends to repeat the II process SEVERAL times (and Juan has made that so fantastically quick to use, by clever re-use of previously calculated data !!).

Each time I tweak ONLY one of the Sigma sliders (and, obviously, I am talking about Winsorized Sigma Clipping here), and do NOT show the 'other' clipping image (and, if you are listening Juan, I would like to be able to disable the stacked image as well for these iterative tests). Then I throw an Auto-STF at the clipped image I am trying to evaluate, and decide whether more or less clipping is needed.

And, always, LESS CLIPPING (i.e. LARGER Sigma) is better - the limit being that you do want to clip out the rubbish (the corollary being that you do not want Sigma so low that you are clipping out 'useful' data).

Once I have - say - my Upper Limit set, I turn OFF that output clipping image and repeat for the Lower Clipping Sigma value.

Now that I know to look at the summary data in the processing console, I will have a better idea of whether I am clipping 'too much', by way of percentage (more of a problem for the 'dark side', because - at least with the 'light side' you can determine 'when' you have clipped the satellites, planes and cosmic rays, etc.)

I would still have liked the ability to set the 'target percentage' and then see PI iterate the process as needed - returning the Sigma value needed to achieve that level of requested clipping.

And, as I said, it would be useful to be able to avoid screen clutter by ignoring the actual integrated image until the clipping points have been established.

Perhaps the clipped images should also be capable of always being output to the same image window (I don't think that they are, at the moment), and perhaps they should be capable of being auto-STF'ed at the same time.

This new section in the PI armoury is, I believe, soon going to be one of its most powerful selling points. It needs a VERY CAREFULLY thought out user interface, backed up with easily assimilated statistical data extracted from all the images in the process 'bucket'.

I also think that the 'Series Measurement' tool available in AIP4WIN needs to be incorporated as soon as possible, with the addition of the ability to 'graph' the statistical data acquired by that process - and a 'live slideshow' that can identify which 'bad image' was causing the 'spike' in the series analysis graphical display - which then provides the ability for that image to be 'dumped' from the current data set.

PI has more than enough power to do all of this - it is just going to take the input from YOU GUYS to give Juan an idea of 'how' the GUI is going to work best.

Sorry - I need to calm down, and I must remember to breathe in between paragraphs.
And to KEEP ON TOPIC [insert 'Topic Police' here]
:police:

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: georg.viehoever on 2010 February 04 11:03:44
Niall,

..
I would still have liked the ability to set the 'target percentage' and then see PI iterate the process as needed - returning the Sigma value needed to achieve that level of requested clipping.

And, as I said, it would be useful to be able to avoid screen clutter by ignoring the actual integrated image until the clipping points have been established.

Perhaps the clipped images should also have to be capable of always being output to the same image as well (don't think that they are, at the moment), and perhaps they should be capable of being auto-STF'ed at the same time.
...
...and a 'live slideshow' that can identify which 'bad image' was causing the 'spike' in the series analysis graphical display - which then provides the ability for that image to be 'dumped' from the current data set.
...

you describe exactly the procedure that appeared useful when I played with ImageIntegration, and I think that the tools you describe in the quote above would be really useful. One word on the live slideshow tool: with DSS, I have been screening my images using the FWHM information that you get after calibration. But unfortunately, DSS is quite slow when displaying images or changing its STF. Something like that is really missing from PI. And if it were faster than DSS when loading and displaying images... Wow...

Cheers,
Georg
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 08 05:30:06
Hi Georg,

Well, it will be down to Juan now - hopefully he will get the chance to read these posts, and then can try and incorporate our suggestions.

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 08 09:28:21
Hi

I might be stupid (OK, no comments), but I do not see how the percentage thing will work, as every image/sub has a different number of rejected pixels

so how will a fixed figure give best results ?

Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 08 11:55:26
Hi Harry,

Let's say you have 2k x 2k images - 4megapixels each. You set the WinSigClip High slider to 2.8 and the HighClip image appears with 20k pixels 'set'. You change the slider to 3.8 and the same clip image now only has 2k pixels 'set'.

In the first example the number of 'clipped' pixels represents about 0.5% of the overall number of pixels in the image. In the second example the percentage drops to 0.05%.

The 'slider setting' remains set for the whole Integration run - and the 'percentage' result isn't on an image-by-image basis - it applies to the 'results'.

What I still need to have clarified (by Juan) is what the ADU value at a given pixel location, in each of the 'clip images' actually represents.

Is it a purely 'boolean' value - set to '1' when a pixel needed to be 'clipped' in at least one of the source images? Or does it behave as a 'counter', incrementing for every image that requires that pixel to be clipped?

Any info, Juan?

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 08 12:21:06


Ok I am stupid  8)

Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 08 12:54:52
"boolian"...???
A great big Booya to all of you...!!
(CNBC Jim Cramer-Mad Money)

My jury remains out on "winsorized"...
Sometimes I tell a drug rep that I am not going to likely switch to there new (copycat) product,because I don't like the name...!
This goes for "winsorized"...

Is it related to the queen somehow..?
You Brits..!
See my comment in the wavelet noise reduction thread...
I am getting lost...
But slowly found...that wavelet trick of Jacks is potent !

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 08 15:54:52
'Winsorisation' is a term that has come about as a result of the surname of the kind chap who 'invented' it - Mr. Charles Payne Winsor, who died in 1954 (if I remember correctly). Unfortunately we Brits cannot lay claim to his genius (this time) - he was American born and bred - and in no way related to our Royal family, the "Windsors" ::)

Don't dismiss your jury yet - I think that you will find more and more good reasons to rely on Winsorized Clipping :)

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Juan Conejero on 2010 February 09 01:37:09
Hi,

Again, trying to keep myself up-to-date with this extremely interesting thread. Don't be surprised, though, if my answers form an unordered set, as I'm collecting them throughout the thread seen "as a whole". Here we go.

Quote
(and, if you are listening Juan, I would like to be able to disable the stacked image as well for these iterative tests)

Good point. I'll implement an additional check box in the next version.

Quote
What I still need to have clarified (by Juan) is what the ADU value at a given pixel location, in each of the 'clip images' actually represents.

Is it a purely 'boolean' value - set to '1' when a pixel needed to be 'clipped' in at least one of the source images? Or does it behave as a 'counter', incrementing for every image that requires that pixel to be clipped?

A pixel in a rejection map image has a real value directly proportional to the number of rejected (=clipped) source pixels in the integrated set at the corresponding map coordinates.

Rejection maps, as you probably have guessed at this point, are normalized to the [0,1] range. If a rejection map pixel is 1.0, that means that all source pixels have been rejected at the map pixel coordinates. If a rejection map pixel is 0.0, then no source pixel has been rejected at its coordinates. Put as a simple equation:

r = Nr / N

where r is a rejection map pixel, Nr is the number of rejected pixels at r coordinates, and N is the number of integrated images.

Note that a rejection map can be used as a mask. For example, you can apply a low-pass filtering process (e.g., with wavelets) to the integrated image masked with the high rejection map (or with the maximum of the low and high rejection maps) to compensate for the lack of signal resulting from pixel rejection during integration. This is an effective noise reduction technique where you can take advantage of the —extremely unusual— fact that you know accurately where the noise is, and its relative amplitude.
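As a small illustration in code (a numpy sketch; the Gaussian blur below is only a stand-in for a proper wavelet low-pass, and the function names are illustrative):

import numpy as np
from scipy.ndimage import gaussian_filter

# rejected[i, y, x] is True where source pixel (y, x) of image i was clipped.
def rejection_map(rejected):
    return rejected.mean(axis=0)              # r = Nr / N, values in [0,1]

# Masked noise reduction: blend a low-pass version of the integration back in,
# but only where pixels were rejected (stand-in for a wavelet low-pass filter).
def masked_smooth(integration, high_map, sigma=1.0):
    smooth = gaussian_filter(integration, sigma)
    return integration * (1.0 - high_map) + smooth * high_map

Where the map is 0 the pixel is untouched; where every source pixel was rejected, the smoothed estimate is used in full.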

Quote
I would still have liked the ability to set the 'target percentage' and then see PI iterate the process as needed - returning the Sigma value needed to achieve that level of requested clipping.

Correct me if I haven't understood what you want to do, but I detect some statistical misconception here. The fact is that you really don't want to know what the sigma value has to be in order to reject a given fraction of pixels in the integrated stack. You want to find an optimal sigma value that achieves a good compromise between efficient rejection and minimal SNR degradation. Usually the process to find the correct sigma is iterative: you implement a sort of binary search where the correct value is within the limits of a narrower boundary at each iteration.

If you want to know that sigma value in advance because you already know how many pixels you want to get rejected, then you don't need sigma clipping at all, but something without any statistical basis, such as min/max clipping.

In cases where you don't have enough images to substantiate a sigma clipping procedure (say less than 8 or 10 images), you can use either percentile clipping or averaged sigma clipping. Both algorithms utilize robust statistics and provide excellent results with reduced data sets.
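For reference, here is one common formulation of Winsorized sigma clipping for a single pixel stack, written as a Python sketch; the convergence details and constants of the actual ImageIntegration implementation may well differ.

import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iterations=10):
    # One pixel stack (one value per sub); returns a boolean "keep" mask.
    # Estimate a robust sigma by repeatedly Winsorizing (pushing extreme values
    # back to the clipping bounds), then reject outliers around the median.
    x = np.asarray(stack, dtype=float)
    w = x.copy()
    for _ in range(iterations):
        m, s = np.median(w), w.std()
        w = np.clip(w, m - sigma_low * s, m + sigma_high * s)
    m, s = np.median(w), w.std()          # robust centre and spread
    keep = (x >= m - sigma_low * s) & (x <= m + sigma_high * s)
    return keep

# Example: 20 subs of one pixel, one of them hit by a cosmic ray.
stack = np.r_[np.random.default_rng(3).normal(0.10, 0.01, 19), 0.9]
keep = winsorized_sigma_clip(stack)
print(int(keep.sum()), "of", keep.size, "subs kept")   # the 0.9 outlier is rejected

The point of Winsorizing the extreme values while estimating sigma, rather than simply discarding them, is that the estimate stays robust even when the stack is small or heavily contaminated.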

Quote
ImageIntegration implements my idea of looking for a minimum noise in the resulting image. Noise is evaluated through an algorithm designed by Jean-Luc Starck, where the basic idea is to measure noise where there are no significant wavelet coefficients.

It cannot be overemphasized that the original —and extremely brilliant— idea of noise estimation/minimization is due to Vicent Peris.

For those who want to know the original sources, the iterative wavelet-based noise evaluation algorithm that I have implemented has been described here:

Jean-Luc Starck and Fionn Murtagh, Automatic Noise Estimation from the Multiresolution Support. Publications of the Astronomical Society of the Pacific, vol. 110, February 1998, pp. 193-199.


Okay, let's return to work. I'll try to answer more questions later on this thread.
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 09 04:59:13
Quote
A pixel in a rejection map image has a real value directly proportional to the number of rejected (=clipped) source pixels in the integrated set at the corresponding map coordinates

That's what I had hoped to be the case (and it was what my experiments were tending to suggest) - so, by careful analysis of the 'clipping images' it becomes possible to determine whether a pixel site is/was 'bad' over the entire range of source data. I can, once more, see another use for the 3-D viewer script (a script that I feel could now be considered for implementation into the 'core' of PI).

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 09 05:16:01
Quote
Correct me if I haven't understood what you want to do, but I detect some statistical misconception here. The fact is that you really don't want to know what the sigma value has to be in order to reject a given fraction of pixels in the integrated stack. You want to find an optimal sigma value that achieves a good compromise between efficient rejection and minimal SNR degradation. Usually the process to find the correct sigma is iterative: you implement a sort of binary search where the correct value is within the limits of a narrower boundary at each iteration.

If you want to know that sigma value in advance because you already know how many pixels you want to get rejected, then you don't need sigma clipping at all, but something without any statistical basis, such as min/max clipping.

In cases where you don't have enough images to substantiate a sigma clipping procedure (say less than 8 or 10 images), you can use either percentile clipping or averaged sigma clipping. Both algorithms utilize robust statistics and provide excellent results with reduced data sets.

OK - life is easier on the 'high end' clipping - I can iteratively adjust the High Sigma value until I eliminate (for example) ONLY those pixels hammered by incoming cosmic rays, or aircraft trails, etc. That is (relatively) easy. And, once I have tweaked the High Sigma to achieve 'just' that level of elimination, I can even then tweak the value down, just a little, to give a 'safety margin', if you will (tweaking DOWN to clip a few EXTRA pixels, just in case).

But, at the bottom end, I have less to go on. Sure, I can move the Low Sigma slider about, and can see my ClipLow image showing more, or fewer, clipped pixels as I make the adjustments. But, what do I use to determine WHERE to leave the slider setting? This was where I felt that some sort of 'empirical' decision might need to be made - for example, I might decide to be willing to 'clip' a total of 0.1% of all 'low end outliers'. I can then iteratively adjust the Low Sigma slider until the "Nr/N" calculation tells me that the total number of pixels that have been 'Low Clipped' (i.e. the cumulative ADU sum of all pixel values in the low-clipped image DIVIDED BY the simple number of pixels in the integrated image) reaches the target percentage of 0.1%.
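In code form, that bookkeeping is just this (numpy sketch; low_map is assumed to be the normalized Nr/N low rejection map described earlier in the thread):

import numpy as np

def clipped_fraction(low_map):
    # Fraction of all source pixels that were low-clipped, given a rejection
    # map whose pixel values are Nr/N (see Juan's equation above).
    return low_map.mean()

# e.g. lower the Low Sigma until clipped_fraction(low_map) reaches 0.001 (0.1%)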

My desire was to 'automate' this iterative procedure, such that I could 'request' a desired percentage of pixels to be clipped (whilst still relying on Winsorization) and the algorithm would set the Sigma slider accordingly. I don't actually end up 'caring' what the Sigma value becomes - but I see this as being 'different' to the algorithm that simply clips the requisite percentage of pixels WITHOUT the 'robustness' of Winsorized analysis.

Am I just trying to be 'too clever' here ???

Is there a 'better' way of trying to decide where to set the Sigma slider (especially for the more difficult ClipLow case)? In other words, is there a way to aim for 'maximum SNR' in the background (which is where the ClipLow will be affecting)?

Is there even an argument for using one variation of pixel clipping for the low-end data, and another for the high-end data?

My head hurts :'(

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 09 10:06:45
Hi

As I said before, why this thing about a fixed percentage? You only want to exclude the outliers  ???

I can manage to do a good job with both the high setting and the low setting; forgive me, but we seem to be making a simple job hard  O0

That's my 2p worth, ignore me if you like  :P

Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: georg.viehoever on 2010 February 09 10:07:25
Niall,
...
My desire was to 'automate' this iterative procedure, such that I could 'request' a desired percentage of pixels to be clipped (whilst still relying on Winsorization) and the algorithm would set the Sigma slider accordingly. I don't actually end up 'caring' what the Sigma value becomes - but I see this as being 'different' to the algorithm that simply clips the requsite percentage of pixels WITHOUT the 'robustness' of Winsorized analysis.
..

I am not sure, but percentile clipping may be just what both of us want.  There is no real need to have estimates of the mean or sigma in this case, so also no need for Winsorization (http://en.wikipedia.org/wiki/Winsorising)  ???

Georg
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 09 10:18:30
Hi

Yes this is what I think , just use the percent clip if you want to

I only use the win / sig clip as I think this gives the best results you can achieve  :D and makes the most of my data  :sealed:


Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 09 11:39:49
Quote
forgive me but we seem to be making a simple job hard

No - that's what I am trying to 'avoid'. What I am trying to do is to establish a set of guidelines that allow you to set the LowClip Sigma slider to a 'level most appropriate' for your source data.

Now, we all have different data sets - so what works for one set may, or will, not work for the next. What, therefore, are the criteria that 'you' use to set the LowClip Sigma slider? When do 'you' decide 'yep, that's about right'? What makes 'you' decide 'nope, that's too much' or 'nope, not enough'?

Right now, I am of the (personal) opinion that I don't want to clip 'too much' of the bottom end data. "How much is too much?" is not a question I have an answer for - hence the decision to sacrifice some random, empirical percentage (either by asking PI to sacrifice 'exactly' that specific percentage, or by just tweaking the LowClip Sigma slider until I feel that the target percentage has been clipped).

However, what I really want to do is to maximise the background SNR - the aim we all have after all. And, as I see it, once we have acquired our basic data, and calibrated and aligned it to the best of our abilities, the 'only tool' we have left is "Image Integration" - either within PI (naturally :P) or by 'other means' >:(

So, unless someone can give me (show me, remind me of) a worked example of how to determine the 'ideal' position for the LowClip Sigma slider when using Winsorized Clipping, I don't actually know of a better way than the two (near identical) methods that I describe above.

And, setting a 'Percentile Clipping' point is NOT the same as setting a 'Winsorized LowSigma Clip' point in order to include a certain number of clipped pixels - at least that is not how 'I' understand things. Perhaps I need to run a set of data using both methods - aiming for the same number of clipped pixels in each case (i.e. the same 'percentage' of clipped data in each case) - and then compare the resulting two LowClip images.

By my deductions these will NOT be 'the same'. It will be interesting to see if they are. And, if they ARE different, which set of 'clipped data' represents a 'more robust' data extraction?

Sorry if this all sounds confusing - perhaps it is because IT IS  ::)

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 09 13:18:22
Hi


OK, we differ on what we expect  :P

Harry
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 09 14:29:25
It's not that we 'differ' as such Harry - what I am trying to do is figure out some sort of 'rule' for establishing the position of the LowClip Sigma slider.

How is anyone supposed to know whether it is too high, or too low? Apart from just picking a number, at random, and then trying to rectify any errors that are then introduced by relying on subsequent processing.

That just goes against the grain. The whole point about PixInsight is a 'scientific' approach to data manipulation. If we are just going to 'chuck a number at it' then we might as well use PS - after all, the airbrush works wonders (I even came across an article where the recommended approach to eliminating background noise was to determine the 'average colour' for the background, select and delete the original background, and then replace it with a blended-in layer of synthesised colour sprinkled with some Gaussian noise - what the hell is THAT all about?).

So, I am relying on "minds immeasurably superior to mine" to come up with some sort of rationale for setting the LowClip Sigma slider - so that Winsorized Clipping can apply its FULL power to my data :cheesy:

In the meantime, I just have to figure out how I am supposed to open my Calibration images without PI corrupting them beyond all hope of salvation >:(

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Juan Conejero on 2010 February 09 15:46:52
Quote
In the meantime, I just have to figure out how I am supposed to open my Calibration images without PI corrupting them beyond all hope of salvation

I have good news: I am adding built-in support for the FITS, TIFF and JPEG formats in PJSR (by mirroring the corresponding PCL classes). In this way you'll be able to manipulate FITS files directly from JavaScript code, circumventing the standard FITS support module, and having access to the actual data stored in the files (without rescaling to [0,1]). This means that writing a little script to convert your odd floating point FITS images into regular 16-bit integer FITS images will be extremely easy (right now this can only be done with PCL in C++). Once converted, you'll be able to open your images without problems in PI.

This will be available in the next version, which I'm working on now. Patience...  8)
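In the meantime, for anyone who wants to do the conversion outside PI today, something along these lines should work with Python and astropy (not PJSR; the file names are placeholders, and it assumes the float data really are in the 0-65535 range):

import numpy as np
from astropy.io import fits

def float_fits_to_uint16(path_in, path_out):
    # Convert a 32-bit floating point FITS with data in the 0..65535 range
    # into a regular 16-bit unsigned integer FITS, rounding to the nearest ADU.
    with fits.open(path_in) as hdul:
        data = np.asarray(hdul[0].data, dtype=np.float64)   # copy out of the file
        header = hdul[0].header.copy()
    data16 = np.clip(np.rint(data), 0, 65535).astype(np.uint16)
    fits.writeto(path_out, data16, header, overwrite=True)

# example (placeholder file names):
# float_fits_to_uint16("bias_001_float.fits", "bias_001_int16.fits")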
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 09 17:14:37
Thanks Juan,

I appreciate the effort - but I still don't understand why I cannot enter '65535' in the FITS Explorer window - as discussed in the Bug Reports section.

Surely this is a 'bug'?

I even tried to do this via a TeamViewer 'remote control' session today - with exactly the same results.

There is nothing 'odd' about my FITS data format - it passes FitsViewer inspection with no problems - and I have never failed to be able to open the images in any other software. I just happen to be using 32-bit floating point data storage - and PI happily opens that data format ALL THE TIME.

The problem ONLY exists when PI rescales to [0,1] - and even then it is ONLY a problem if the image does not have a MAX ADU value, and yet the data must be scaled to fit the [0,1] range as if it DID have such a MAX ADU value (of 65535, in this case). In other words, if the ACTUAL maximum ADU value in an image being opened was, say, 32767.00000000 - then it should be rescaled to 0.500000, not (as is the current case) to 1.00000

Honestly, right now, this bug is not just a 'gripe' for me - the inability to be able to rescale correctly is a PixInsight 'showstopper' for me.

I am trying to analyse my calibration images in PI - and NONE of these will EVER have a maximum ADU value anywhere near 65535, and so they will ALL be incorrectly scaled if I cannot set the "Default Floating Point Input Range" parameters to 0.000 and 65535.000.

Perhaps tomorrow I will try installing PI on a second PC - just to see if it is 'this' (Vista 64-bit) installation that doesn't work.

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: NKV on 2010 February 09 21:36:11
Quote
I am trying to analyse my calibration images in PI - and NONE of these will EVER have a maximum ADU value anywhere near 65535, and so they will ALL be incorrectly scaled if I cannot set the "Default Floating Point Input Range" parameters to 0.000 and 65535.000.

Perhaps tomorrow I will try installing PI on a second PC - just to see if it is 'this' (Vista 64-bit) installation that doesn't work.
If the installation succeeds, try to see the fractional part on your screen. For example, try to see a pixel with a value of 32768.5.
Instead of 32768.5 I see only 32768, or 0.500015258789063
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Juan Conejero on 2010 February 10 02:18:13
Quote
I still don't understand why I cannot enter '65535' in the FITS Explorer window - as discussed in the Bug Reports section.
Surely this is a 'bug'?
I even tried to do this via a TeamViewer 'remote control' session today - with exactly the same results.

No other user has reported a similar problem, and I have no way to reproduce it. If this is a bug, it is certainly an obscure, hardware-related bug. Let's see if the new version, which uses a new version of the Qt library (4.6.1), solves this issue.

In the meanwhile, if you have a wired keyboard at hand (from another computer, for example), you could try replacing your current wireless one.

Quote
There is nothing 'odd' about my FITS data format - it passes FitsViewer inspection with no problems - and I have never failed to be able to open the images in any other software. I just happen to be using 32-bit floating point data storage - and PI happily opens that data format ALL THE TIME.

I've said it's odd, not incorrect as per the FITS standard :)

The oddity is in the fact that your camera control software is storing raw CCD data as floating point numbers. This is a conceptual error because raw CCD observations are discrete by nature, and hence they must be stored as integer numbers, which is what they are. Not to mention the fact that 16-bit integer data are being stored as 32-bit numbers, which wastes 50% of storage space unnecessarily and tends to provide a false impression of increased accuracy.

Floating point pixel storage must only be used when the data are being represented as real or complex numbers for a plausible reason. This only happens when the data don't correspond to direct, physical, raw observations. Real or complex floating point representations are often plausible when the data are being post-processed.

The fact that a given file is correct as per applicable standards does not imply that it is valid from a computational point of view.

All of the above is just my opinion, which is, as all opinions are, open to discussion.

Quote
The problem ONLY exists when PI rescales to [0,1] - and even then it is ONLY a problem if the image does not have a MAX ADU value, and yet the data must be scaled to fit the [0,1] range as if it DID have such a MAX ADU value (of 65535, in this case). In other words, if the ACTUAL maximum ADU value in an image being opened was, say, 32767.00000000 - then it should be rescaled to 0.500000, not (as is the current case) to 1.00000

Agreed. But as you know, this happens because you cannot specify a custom FITS input range for floating point data, due to a problem with your keyboard, as we've discussed above.

Quote
Honestly, right now, this bug is not just a 'gripe' for me - the inability to be able to rescale correctly is a PixInsight 'showstopper' for me.

Okay. Let's solve the problem in a truly dirty way (which, like all dirty solutions, works. Only someone who has programmed in assembler knows the true meaning of this sentence, so I'm sure you know what I mean here).

Do the following:

1. Define any FITS floating point input range using the FITS Format Preferences dialog. That is, define that incredibly odd range that you get with your keyboard. The actual values don't matter; we only want to be sure that the corresponding configuration data are being generated.

2. Exit PixInsight.

3. Open Windows Explorer and locate PixInsight's configuration file. It is the following file on Windows Vista:

%APPDATA%\Pleiades\PixInsight.ini

where %APPDATA% is usually:

C:\Users\<user-name>\AppData\Roaming

on Windows Vista and Windows 7.

4. The configuration file is a plain text file. Open it with a good code editor (PixInsight's Script Editor can be used, but then you'll need to make a temporary copy of the file, as PixInsight is using it).

5. Search for the following two lines of text (they should be together in the file, and in this order):

ModuleData\FITS\FITSLowerRange=<whatever1>
ModuleData\FITS\FITSUpperRange=<whatever2>

where <whatever1> and <whatever2> are the strange values that you entered on the FITS Format Preferences dialog. Change these values so that the two lines are exactly as follows:

ModuleData\FITS\FITSLowerRange=0
ModuleData\FITS\FITSUpperRange=65535

6. Save the file.

7. Launch PixInsight. Now your FITS files will load OK.


let me know how it goes.
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Juan Conejero on 2010 February 10 03:09:57
Hi Nikolay,

Quote
If the installation succeeds, try to see on your screen the fractional part. For example, try to see a pixel with a value of 32768.5
I see instead 32768.5 only 32768 or 0.500015258789063

In PixInsight, floating point real-valued pixels are internally represented in the [0,1] range, where 0 means "no signal" (usually black) and 1 means "full signal" (usually white). There are several reasons, ranging from performance reasons to elegance and simplicity reasons.

The [0,1] range is what we call normalized real range. Strict normalization of real and complex pixels is a strong design principle in PixInsight that pervades it at all levels; it must not be seen as a problem or a deficiency, but as an identity sign and a powerful feature. Working with normalized real pixel readouts has several important advantages:

- Normalization leads to an abstract representation of pixels. Real and complex pixel data are treated in purely mathematical terms, not tied to the particular range of any physical device (as the 16-bit range of a CCD camera). Any image processing algorithm can be described and implemented in a much more natural and accurate way thanks to this abstract representation of the data.

- The [0,1] range is much easier to understand and manipulate than other ranges associated to physical devices. A normalized value of 0.5 is clearly in the middle of the available dynamic range in any context. A value of 32767 is in the middle of the numerical range in the context of a 16-bit CCD camera, but not at all, for example, if we are working with a 32-bit integer image.

- Normalization allows us to work independently of the actual numerical range of the data. In PixInsight, you get normalized readouts for 8-bit, 16-bit, 32-bit integer pixels, as well as for 32-bit and 64-bit real and complex pixels. For example, a PixelMath expression and a JavaScript script work exactly in the same way, irrespective of their target image's data type.

- Normalization provides us with a unique reference framework to express pixel sample values beyond images. For example, in the HistogramTransformation interface you work in the [0,1] range to define histogram clipping points and midtones balance values. This is an abstract representation that works equally for all data types supported (and for all data types that will be supported in the future).

So you can't get 32767.5 as a pixel readout because real pixel readouts are normalized to [0,1] in PixInsight. You can get integer pixel readouts in the 16-bit range, where 32767 does make sense, but not 32767.5 because it is not an integer.
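As a tiny worked example of that mapping (assuming the usual linear scaling of the 16-bit range by 65535):

ADU_MAX = 65535

def to_normalized(adu):
    # Map a 16-bit ADU value onto the normalized [0,1] real range.
    return adu / ADU_MAX

def to_adu(x):
    # Map a normalized readout back onto the 16-bit range.
    return x * ADU_MAX

print(to_normalized(32767))   # ~0.4999924, the middle of a 16-bit camera's range
print(to_normalized(65535))   # 1.0, full signal
print(to_adu(0.5))            # 32767.5, which is not an integer ADU value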
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 10 03:40:59
Quote
Let's solve the problem in a truly dirty way

Now THAT is what I wanted to hear - if I can get this to work, I think that my problem should disappear.

However, this means that I will have to press the PixInsight 'Exit' button - and I haven't had to do that since early January :'(

I might even have difficulty FINDING the button ;D

And, I am going to lose huge numbers of WorkSpace Layouts (seven, if I remember correctly - each with many images and many ProcessIcons). I will have to take a decision now as to whether I will need beer, whisky, wine or coffee to keep me calm whilst PI is not running. In fact, my wife actually believes that I have changed operating systems from Windows Vista to PixInsight, as that is all she ever sees on my monitor  :angel:

I will try this at lunchtime, remotely - using TeamViewer.

Thanks again for your patience and understanding Juan.

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Juan Conejero on 2010 February 10 04:27:06
Quote
However, this means that I will have to press the PixInsight 'Exit' button - and I haven't had to do that since early January

Quite a stability test, indeed.  8)

You actually don't need to exit PixInsight :) Open a second instance of the PixInsight Core application and define your FITS input range. Exit the second instance and follow steps 3 to 6, but make the changes only for the second PixInsight instance.

All PixInsight instances share a unique PixInsight.ini configuration file. Within PixInsight.ini, each instance has its own block of configuration settings, starting with [000], [001], [002], and so on. So you just have to search for "[001]" to locate the settings block corresponding to the second instance, then proceed as explained. As you are not modifying any settings for the first (and running) instance, there will be no problems when you save the modified PixInsight.ini. Your second instance of PixInsight will load your FITS files correctly, then you can save them as 16-bit integer FITS files and open them in the first instance. Nice, isn't it?  O0
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Niall Saunders on 2010 February 10 13:46:49
Hi Juan,

Yes, your 'dirty workaround' does work.

My Bias Frames, which originally have a nice Gaussian distribution curve, centred around a typical Mean of about 3970, with a StDev of about 40 and a Minimum of around 3780 and a Maximum of around 4170, now appear as they should when I open them - with the 'peak' way down at the bottom end of the [0,1] range.

Obviously, if I now apply the ReScale process, or if I use the Histo [ClipLow/ClipHigh] process, then I get what I was seeing before - the same very nice Gaussian distribution, but it is now more or less centred on 0.500, and with a Min/Max of 0.000 and 1.000 respectively. Absolutely NON-typical of a Bias frame - and therefore useless for further processing as such.

However, I still cannot use the ImageIntegration process - there is obviously an internal rescale() call that is being applied. I cannot tell whether you call this prior to processing each image, or whether you rescale the final integrated image after 'combining'. In any case, because the rescale call IS made, once again the resulting image has a (now 'very' nice) Gaussian curve - but back to being centred around 0.500

Am I missing some critical point here? Why can the image data not just be left 'as is' after the combination step?

NONE of the four possible combination methods (Average, Median, Maximum, Minimum) are mathemagically capable of generating an output image with 'out of range' values - providing the original images were all within range themselves. OK, sure, at SOME POINT during the 'Average Combine' process, the 'working image' might contain values that can be up to 'n-times' out of range - but as soon as that 'summed' image is then divided by 'n' again (to provide the 'averaged' result) the image MUST return back to 'in range' again.

I just don't see the need for a hard-coded rescale() call. You could, if needed, provide it as an option in exactly the same way as is done in PixelMath.

Which is a case in point - I can use PixMath to 'add' all the images together, and 'divide-by-n', and I get EXACTLY the result I am after (obviously without the Winsorized Clipping that I get from ImageIntegration). But, if I enable the ReScale option in PixMath, I lose the correct 'position' of the Gaussian curve - the Mean of the rescaled image ends up back at 0.500 again.
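A small numpy sketch of the arithmetic being described (the rescale() behaviour shown is my assumption of a simple linear [min, max] to [0, 1] stretch):

import numpy as np

rng = np.random.default_rng(0)

# Simulated bias-like frames, already normalized to [0,1]: Gaussian centred on
# 3970/65535 with a sigma of about 40/65535 (the values quoted above).
n = 16
frames = rng.normal(3970 / 65535, 40 / 65535, size=(n, 512, 512))

average = frames.mean(axis=0)        # stays in range if the inputs were in range
print(average.min(), average.max())  # ~0.06, nowhere near 0.0 or 1.0

# Assumed behaviour of an automatic rescale(): stretch [min, max] onto [0, 1].
rescaled = (average - average.min()) / (average.max() - average.min())
print(rescaled.mean())               # ends up near 0.5, as observed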

Don't get me wrong - there IS a possibility that I will NEED to have the image 'rescaled' for it to be useful as a calibration frame. I haven't got that far - my brain seems to be 'hung up' on these 'entry-level' issues.

But, I really do feel that - if you ARE automatically implementing a rescale() call - then I (for one) would like to be able to play with ImageIntegration with at least the OPTION of having the call being made, or not.

I hope I have been able to explain myself clearly - I think my brain is on the brink of shut-down. Perhaps I need to run a defrag on it. Tequila should do the trick :cheesy:

Cheers,
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 15 13:34:49
Wow...this is a deep thread... >:D
Just wanted to say that I FINALLY got around to doing an RGB (so far) stack with "winsorized clipping"...
It seemed to filter some crud out, and not a bad result..
I think it does a better job than DSS, but it's a bit fiddly.
Here is M1 (again..!) (RGB)
http://www.flickr.com/photos/daveh56/4360610524/sizes/l/

cheers all

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 16 05:48:50
Still talking to myself..?
Here is an EXCELLENT example of the power of this tool. I am loving it more and more, especially the ability to look at the rejected data.
It is the best Pix tool since DBE...!

I tried to stack 2 sets of exposures, in different sky conditions, and got THIS nightmare;
http://www.flickr.com/photos/daveh56/4360801543/
But after "winsorization"... (!!) I got this;
http://www.flickr.com/photos/daveh56/4361077951/
(well, this is the end result/RGB; the first was just the R signal.)
(The frames were shot at roughly 180 degrees opposite...)

DSS could not "clean/match" the frames... despite playing with the main settings. :'(

A Pix victory !! 8)
I would LOVE to hear more about the "kappa" settings, etc.

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Jack Harvey on 2010 February 16 06:01:46
M 97 does look nicely integrated
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: dhalliday on 2010 February 18 07:22:09
What I want to know is...how do we interpret the high/low rejection maps..??

I mean, it's OK to hear Harry say "winsor 5 high and low works for me"... (sorry Harry  >:D)
But it depends on the data.. no?
I mean, is a LOWER number rejecting MORE data, etc etc..?
Lots of questions remain for me on this tool...I would prefer a wee bit more insight from the Jedii...

Dave
Title: Re: Image Integration, Winsorized Clipping, Default
Post by: Harry page on 2010 February 18 09:52:33
Hi Dave

Yes, you are correct: when you stack more images PI can find the outliers more easily, so a higher sigma number can be used (the higher the sigma number, the fewer pixels are rejected).
I start my sigma setting at about 4 (high and low), stack away and output the rejection maps  O:)

I then inspect the maps, which should only really show the outliers, i.e. hot/cold pixels, sat/plane trails, cosmic ray hits and other one-off events.

Unless you have really bad subs (then you should exclude these), there should not be much in the way of background pixels included in the maps  ;D

Then you can decide to lower the sigma (reject more, if there are still outliers in your result image) or raise it if too many pixels are being rejected  :laugh:

Remember you want to reject as few as possible or you could be throwing away good info  :'(

I tend to set my low sigma the same as the high, seems to work most of the time
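To put a rough number on "higher sigma = fewer rejections", here is a toy Python experiment (plain sigma clipping on simulated subs, just for illustration; not the exact Winsorized algorithm PI uses):

import numpy as np

rng = np.random.default_rng(4)

# 20 subs of a 200x200 patch: Gaussian noise plus a sprinkling of hot outliers.
subs = rng.normal(0.10, 0.01, size=(20, 200, 200))
subs[rng.random(subs.shape) < 0.0005] += 0.5

med = np.median(subs, axis=0)
sig = subs.std(axis=0)

for k in (3, 4, 5, 6):
    rejected = np.abs(subs - med) > k * sig
    print(k, round(rejected.mean() * 100, 3), "% of source pixels rejected")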


Harry