PixInsight Forum (historical)
PixInsight => Release Information => Topic started by: Juan Conejero on 2010 March 22 17:55:03
-
Here is a first screenshot of the new ImageCalibration tool:
(http://forum-images.pixinsight.com/legacy/1.6-preview/ImageCalibration-1-tn.jpg)
<Click here to enlarge (http://forum-images.pixinsight.com/legacy/1.6-preview/ImageCalibration-1.jpg)>
The interface still needs some polishing (and a master dark-flat frame option will be added), but right now it's working very well. So far all tests have shown that the wavelet-based dark frame optimization routine works remarkably well, even better than expected. The optimization routine finds the dark scaling factor that minimizes dark-induced noise to within 0.001 fractional accuracy. More testing is necessary though, and I'm still trying to reduce execution times.
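To give a rough idea of what the optimization does (this is only a sketch of the concept, not the actual implementation; the noise estimator below is a crude stand-in for the wavelet-based routine):

import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise(image):
    # Crude small-scale noise estimate: robust sigma (MAD) of the residual
    # left after subtracting a slightly smoothed copy of the image.
    residual = image - uniform_filter(image, size=3)
    return 1.4826 * np.median(np.abs(residual - np.median(residual)))

def optimize_dark_scale(calibrated_light, master_dark, lo=0.0, hi=2.0, tol=1e-3):
    # Search for the dark scaling factor k that minimizes the noise of
    # (light - k*dark), down to the stated 0.001 accuracy (ternary search,
    # assuming the noise curve has a single minimum).
    while hi - lo > tol:
        k1 = lo + (hi - lo) / 3.0
        k2 = hi - (hi - lo) / 3.0
        if estimate_noise(calibrated_light - k1 * master_dark) < \
           estimate_noise(calibrated_light - k2 * master_dark):
            hi = k2
        else:
            lo = k1
    return 0.5 * (lo + hi)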
Remember that this is just a first version of the calibration tool!
More news very soon...
-
Looks great.
The optimization and the overscan support look interesting.
It would be nice to know how the masters are generated.
Is this a separate function?
It would be nice if the output of calibration could be linked to registration and integration in the process container.
Will we have the ability to do this starting with 1.6.0?
Max
-
oooo... me want :)
Now the question... will it handle Canon raw files "properly", or will the files still have to be converted to grey TIFFs first and then debayered later?
-
Now the question... will it handle Canon raw files "properly", or will the files still have to be converted to grey TIFFs first and then debayered later?
Yes, it will handle them "properly". :)
V.
-
Hi
Will it have the ability to create an OSC flat?
Harry
-
will it handle Canon raw files "properly"
There should be no problem (well, almost no problem; I explain the meaning of "almost" below) calibrating Bayerized images with the new ImageCalibration tool.
In RAW settings (Format Explorer > select the DSLR RAW format > click Edit Preferences) select the following to load a pure RAW Bayer image each time PixInsight opens a DSLR RAW image file:
- White Balance: both options disabled.
- Create super-pixels: disabled.
- Create RAW Bayer picture: enabled.
- No black point correction: enabled.
In this way what PixInsight loads is the true raw image stored in the .cr2 or .nef file (or the equivalent for other cameras/formats), with each color plane stored as an individual RGB channel.
The ImageCalibration tool will calibrate each color channel as if it were an independent image. An individual channel of a Bayer image has "black holes" (nonexistent pixels in the Bayer pattern for the corresponding color), but black (and white) pixel values are ignored in statistical terms, and the bias, dark and flat corrections work pixel by pixel (x/0 and 0/0 divisions caused by Bayer holes are prevented), so the calibration procedure should work without problems. There is no need to preconvert the images to grayscale before calibration.
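To make the pixel-by-pixel idea concrete, here is a minimal sketch of calibrating one Bayer color channel (illustrative only, not the tool's actual code; zero "holes" in the flat are simply left untouched):

import numpy as np

def calibrate_channel(raw, master_bias, master_dark, master_flat, dark_scale=1.0):
    # Subtract bias and (optionally scaled) dark, then divide by the flat.
    cal = raw - master_bias - dark_scale * master_dark
    # Normalize the flat using only its nonzero pixels, so Bayer holes are
    # ignored in statistical terms.
    flat = master_flat / np.mean(master_flat[master_flat > 0])
    # Divide only where the flat is nonzero; holes pass through unchanged,
    # so no x/0 or 0/0 divisions can occur.
    safe_flat = np.where(flat > 0, flat, 1.0)
    return np.where(flat > 0, cal / safe_flat, cal)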
Now the "almost". The only problem with Bayer patterns can be dark optimization. Right now I am unsure about the performance of our algorithm when it has to deal with Bayer patterns. The holes will be interpreted as small-scale noise and this will fool the wavelet-based noise estimation routine. I haven't tested this yet, but that's what will happen, I think. The solution to this problem is not very difficiult though; I'll work on it as soon as possible.
Will it have the ability to create an OSC flat?
As noted above the ImageCalibration tool can work with multichannel images. Each channel is calibrated as an independent frame. The flat is no exception. You can integrate your individual flat frames with ImageIntegration to generate a master flat, which can then be used with ImageCalibration.
-
the question repeated...
It would be nice if the output of calibration could be linked to registration and integration in the process container.
Will we have the ability to do this starting with 1.6.0?
Max
-
Hi Max,
With the 'scriptable' capabilities of PI, I see that the whole process of 'image reduction' could be contained in a 'single-click' process - but would you REALLY want that?
Personally, it is far more important to me that I KNOW how each of the processes has worked. Have my Darks introduced any artifacts? What about my Flats and their FlatDarks, did everything work there? What about when I finally calibrate my Lights - did I introduce any 'nasties' during those steps?
If by this stage everything has seemed to be OK, THEN (and only then) might I consider deBayering (if needed) - making sure that this next stage works as well.
Finally, I am ready to align - and would want to inspect the results of this stage before (at last) moving on to Image Integration.
So, yes, I could see a very simple process container that would have all the required stages identified, simply requiring initial pointers to the source data files but, unless an APOD popped out at the end of the process, I think I would want to know that nothing went out of control during the process.
We'll have to wait and see what the implementation looks like.
Cheers,
-
"It would be nice when output of calibration can be linked to registration and integration in the process container" (MMirot)
"With the 'scriptable' capabilities of PI, I see that the whole process of 'image reduction' could be contained in a 'single-click' process - but would you REALLY want that? Personally, it is far more important to me that I KNOW how each of the processes has worked (N.Saunders)
I think that both matters are compatible, and necessary:
1) We can test the quality of the outpout of each process, isolated, as Nials want.
2) And we need handle a lot of lights, darks, flats and bias, for the pre-proccesing task, first to calibrate, and then to register and stack, for finally to get our single image, that can be processed individually with all the tools of PI. The pre-proccessing task isn`t an artistic task for the user (not for Juan), but mainly a rutinary task for them (sorry for the words, but they express well the idea). So, the lot of work necessary for pre-processing is very well suited for a computer and their software, in PI with the Process Container, that can launch together calibration, register and integration processes (previously controlled in their performance). Really pre-processing is a task very well suited for Process Container, probably the most.
Francisco
-
"It would be nice when output of calibration can be linked to registration and integration in the process container" (MMirot)
"With the 'scriptable' capabilities of PI, I see that the whole process of 'image reduction' could be contained in a 'single-click' process - but would you REALLY want that? Personally, it is far more important to me that I KNOW how each of the processes has worked (N.Saunders)
I think that both matters are compatible, and necessary:
1) We can test the quality of the outpout of each process, isolated, as Nials want.
2) And we need handle a lot of lights, darks, flats and bias, for the pre-proccesing task, first to calibrate, and then to register and stack, for finally to get our single image, that can be processed individually with all the tools of PI. The pre-proccessing task isn`t an artistic task for the user (not for Juan), but mainly a rutinary task for them (sorry for the words, but they express well the idea). So, the lot of work necessary for pre-processing is very well suited for a computer and their software, in PI with the Process Container, that can launch together calibration, register and integration processes (previously controlled in their performance). Really pre-processing is a task very well suited for Process Container, probably the most.
Francisco
I agree with Francisco.
Max
-
One possible "solution" for handling the bayer "holes" is to create a "mask" to ignore the holes (if that makes any sense).
-
Hi,
as long as the whole process to arrive at the final stacked image is not much more complex than with DeepSkyStacker, I am going to be happy :). However, if it needs a lot more mouse clicks and interaction, then :'(
Georg
-
Hi Juan
Can you confirm a couple of things about OSC flats for this dumb person ???
You say that I will not need to grayscale my flat. Does this mean:
1) I will not need to debayer my flat, or do I?
2) Is the colour of the flat therefore irrelevant?
Regards Harry
-
Hi Juan, it really looks great. With the addition of this module I will be able to process my images 100% in PixInsight ;)
-
Hi Juan,
All these new features in version 1.6.0 are really useful and will make all of us much happier with PI; they probably all deserve a major version number change! I think it would be useful to have the option of scaling the Master Dark for calibration. Sometimes I use darks with a different exposure time than my light frames, so I scale them with a coefficient that matches the equivalent exposure time. The scaling could be done with PixelMath prior to calibration, subtracting the Master Bias, then multiplying by the scaling coefficient and then adding the Master Bias again, but it would be smarter to do it all automatically and to make it compatible with the Master Dark optimization.
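For reference, the PixelMath recipe described above boils down to something like this (a sketch with illustrative names, not a built-in function):

def scale_master_dark(master_dark, master_bias, light_exposure, dark_exposure):
    # Remove the bias, scale the thermal signal by the exposure-time ratio,
    # then add the bias back.
    return (master_dark - master_bias) * (light_exposure / dark_exposure) + master_bias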
I shoot with a DSLR camera, so I'm curious to know how you plan to overcome the problem of optimizing a Bayerized Master Dark.
Sergi
-
Hi Sergi,
I think it would be useful to have the option of scaling the Master Dark for calibration.
It is already implemented, and as far as we've tested it, it is working really well. It is the "dark optimization" feature. In fact, dark optimization is enabled by default. From our experience, dark optimization is important when it comes to getting the most out of the data during post-processing, even in temperature-controlled scenarios.
I'm curious to know how you plan to overcome the problem of optimizing a Bayerized Master Dark.
We have two possibilities:
- When the source image is a DSLR RAW image decoded through dcraw (which the standard DSLR RAW module is based on), the optimization routine already knows the Bayer pattern of the image, so it's a matter of fixing the black holes on each color plane.
- When the source image is a Bayer FITS image (OSC CCDs, etc.), we have no (standard) way to know what the Bayer pattern is. For these cases I'll implement a heuristic method to detect the Bayer pattern automatically. Actually, the heuristics are rather simple: start looking for zero pixel samples at regular, repeating intervals: 1001001001... After a few rows and columns you know for sure what the Bayer pattern looks like.
The problem is how to fix the Bayer "holes" in a way that lets our multiscale noise estimation algorithm work reliably. This is not very easy; I am still working out a solution. One possibility is to perform a super-pixel deBayering on the fly. The super-pixel image would be used for noise evaluation and then discarded. As the super-pixel method is fast and does not interpolate (except to average the two green pixels, which I think isn't a problem here), it seems quite suitable. One key factor here is computational cost, as the dark optimization routine tends to be a bottleneck.
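For illustration, a super-pixel deBayering of the kind mentioned above could look roughly like this (a sketch assuming a known 2x2 pattern such as RGGB; not PixInsight code):

import numpy as np

def superpixel_debayer(bayer, pattern="RGGB"):
    # Collapse each 2x2 Bayer cell into one RGB pixel: take R and B directly
    # and average the two G samples. No interpolation, half resolution.
    h, w = bayer.shape
    h -= h % 2
    w -= w % 2
    cells = bayer[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    pos = [(i // 2, i % 2) for i in range(4)]
    r = next(cells[:, :, y, x] for (y, x), c in zip(pos, pattern) if c == "R")
    b = next(cells[:, :, y, x] for (y, x), c in zip(pos, pattern) if c == "B")
    greens = [cells[:, :, y, x] for (y, x), c in zip(pos, pattern) if c == "G"]
    g = (greens[0] + greens[1]) / 2.0
    return np.stack([r, g, b], axis=-1)  # suitable for a quick noise estimate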
-
Hi Harry,
1) I will not need to debayer my flat, or do I?
No, you won't. The ImageCalibration tool applies the flat correction pixel by pixel, so as long as the Bayer pattern is the same for all images, there should be no problem at all (divisions by zero flat pixels are detected and avoided).
2) Is the colour of the flat therefore irrelevant?
ImageCalibration treats each color plane as an independent image. It is flexible enough to work with grayscale or RGB calibration frames, including flats. This means that you can use either a grayscale or an RGB master flat frame to calibrate an RGB image. So from ImageCalibration's perspective, the question simply doesn't matter :)
-
Hi Max,
It would be nice if the output of calibration could be linked to registration and integration in the process container.
Will we have the ability to do this starting with 1.6.0?
As long as you keep track of the output and input directories and images for each process, I see no problem creating a ProcessContainer to implement the whole task.
For this particular task, however, a script would be much easier to maintain and adapt to your specific requirements. Such a script is extremely easy to write; I'll post some examples. It will basically be a matter of copying and pasting a few lines of code.
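To sketch the idea (the step functions below are hypothetical placeholders standing in for the corresponding processes, not PixInsight's actual scripting API): each stage simply writes its results to a directory that becomes the input of the next stage.

import glob
import os

# Hypothetical placeholders for the calibration, registration and integration
# steps; in a real script each would drive the corresponding process instance.
def calibrate(files, out_dir): ...
def register(files, out_dir): ...
def integrate(files, out_path): ...

def preprocess(light_dir, work_dir):
    # Chain calibration -> registration -> integration by tracking directories.
    cal_dir = os.path.join(work_dir, "calibrated")
    reg_dir = os.path.join(work_dir, "registered")
    for d in (cal_dir, reg_dir):
        os.makedirs(d, exist_ok=True)
    lights = sorted(glob.glob(os.path.join(light_dir, "*.fit*")))
    calibrate(lights, cal_dir)
    register(sorted(glob.glob(os.path.join(cal_dir, "*"))), reg_dir)
    integrate(sorted(glob.glob(os.path.join(reg_dir, "*"))),
              os.path.join(work_dir, "master_light.fit"))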
-
I assume that the DSLR raw processing will allow for the production of master darks and flats as well.
A suggestion for FITS files that require debayering: simply have a setting that tells the tool what the pattern is.
-
Great Juan.
A script would be fine.
Perhaps you can add an option not to write the intermediate files.
I am not sure I need to fill up my hard drive with extra images.
Max
-
Wow - I'm currently on my trial licence getting to know this beast.
Image calibration was a big concern for me, but wasn't likely to be a show stopper. I was actually planning to try out PixelMath + ImageContainer sometime soon to see if that was workable.
I've only just found these threads relating to 1.6.0 - very reassuring to see so much effort going into the next version on very useful functions. I can feel my wallet itching..... :)
-
Hi Rob,
Well, my experience with a FULL 'manual' calibration, alignment and pixel-rejecting stack procedure has been VERY encouraging. I am not using the world's most sophisticated CCD (Meade DSI-IIPro), and I am NOT imaging from a mountain-top in the middle of a desert (coastal NE Scotland !!), but I did end up with one of my 'cleanest' images ever - from only 30-odd sub-five-minute subs.
Even the 'manual' process is NOT that difficult - you just have to be 'organised', methodical, and meticulous in your attention to detail.
And, what you 'learn' from the discrete steps involved will make the subsequent transition to the 'automated' process much easier.
That said, I won't be waiting for 1.6.0 to calibrate further data-gathering sessions - I doubt whether I will EVER now turn to any other software to perform this task for me - they just are NOT as good (can't comment on Maxim or CCDStack - for the same reasons that I can't comment on driving a Lamborghini ::))
HTH
Cheers,
-
I always do my calibration manually. But one thing you cannot do manually is dark optimization (well... you can do that, but only for 1-2 images >:D). This feature will be of great help, especially with cameras that have weak cooling. I have found that dark optimization is a great improvement for cooled cameras working at high temperatures (-10 ºC CCD temp). Dark noise must be rescaled even for small temperature oscillations (±0.1 ºC).
Vicent.
-
Hi Vicent,
When I ran my 'manual' calibration procedure recently, I was dealing with DSI-IIPro images from a NON-cooled imager. All I had to work with was the 'recorded' CCD temperature on an image-by-image basis (the Meade Envisage software records this in the FITS header of every image) - however, there is NO guarantee that the recorded temperature is not just the incidental temperature downloaded from the camera just as the exposure download completed (in other words, the temperature is NOT 'averaged' over the duration of the exposure).
So, I usually expect to see a 'range' of temperatures in my imaging sessions - because I can't 'control' temperatures. What I do try to do thereafter, when it comes to calibration time, is to at least 'match' imaging temperatures during the calibration stage (i.e. Lights and Darks, or Flats and FlatDarks). But, even then, I only have a 0.5 degree C resolution on the DSI cameras.
However, when I started to use PixInsight for my calibration process, I realised that very careful control of the Winsorised 'clipping' parameter sliders meant that I could reliably combine data with temperature ranges of as much as +/- 2.5 C.
Obviously, this may NOT reflect what would happen with a significantly more sophisticated, TEC-cooled (and temperature-regulated) CCD - but the very fact that closed-loop regulation of the CCD WOULD be possible is likely to mean that the 'Sigma Clipping' routines I have been using could be applied with even LESS 'aggression', and would therefore be even MORE effective.
For the record, I was aiming to 'clip' around 0.5% of total pixels from each of the upper and lower ranges, for a 'total clip' of only 1% of image data from each 30-subframe data group (be this Lights, Darks, Flats or FlatDarks) and, as I said, the end result was - to me - VERY impressive when compared with ALL previous calibration attempts, irrespective of the software package being used.
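For what it's worth, the rejection idea boils down to something like the following per-pixel clipping across the stack (a plain sigma-clip sketch, not PI's Winsorised implementation; the clip fractions mentioned above come from tuning the thresholds):

import numpy as np

def sigma_clip_stack(stack, k_low=3.0, k_high=3.0, iterations=3):
    # stack: (n_frames, height, width). Iteratively reject pixels that fall
    # outside [mean - k_low*sigma, mean + k_high*sigma] at each position,
    # then average the survivors.
    data = stack.astype(np.float64)
    keep = np.ones_like(data, dtype=bool)
    for _ in range(iterations):
        masked = np.where(keep, data, np.nan)
        mean = np.nanmean(masked, axis=0)
        sigma = np.nanstd(masked, axis=0)
        keep = (data >= mean - k_low * sigma) & (data <= mean + k_high * sigma)
    return np.nanmean(np.where(keep, data, np.nan), axis=0)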
So, I will be looking closely at the new Calibration routine(s) and will be seeing how to implement them (automagically) to most closely resemble the 'manual' method that I am currently working with.
Cheers,
-
Today is (hopefully) the day.
V.
-
:) :) 8)
-
Is it April Fools' Day? >:D
No? Oh well, it must be right then ;D
Harry
-
And it comes with a new article. :D
V.
-
V.,
I have been waiting for this for months, and now you make the last 24 hours unbearable... ;)
Georg
-
Hi
And it comes with a new article.
Now I know you are joking ;D
Harry
-
Well,
If the v1.6 'beta' is anything to go by - we are all in for a 'treat'
Cheers,