Author Topic: PixInsight 1.5: Prerelease Information  (Read 58643 times)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
PixInsight 1.5: Prerelease Information
« Reply #15 on: 2009 April 17 04:53:12 »
Hi Bitli,

Quote
Fantastic. I can't wait. Well, in fact I can wait; please take your time to make it solid.


Thanks! Be sure it will be rock-solid ;)

Quote
The only annoyance I see is the disappearance of the informal textual information for the processes. In the absence of a comprehensive documentation, it was sometime the only conveniently reachable information, even if lacking in some cases.


It hasn't disappeared at all, actually :) Oops! I just forgot to include a screenshot showing how to access process descriptions. Here is one:

http://forum-images.pixinsight.com/legacy/1.5-preview/ProcessDescription-1.jpg

Processes that provide non-empty descriptions generate a special description tag link in Process Explorer. When you double-click the link, a window pops up showing the information text, as before. The only thing that has changed is the way to access the information, but it's still there :) If only I had enough time to sit down and write good descriptions for all processes...

Note also that all new tools, and many old ones, now have a lot of tooltip text that should help you understand most process parameters.

Quote
Having a configurable target for help could allow other people to contribute to this effort, hopefully without too much coding effort.


Of course, this is an excellent idea. I am open to suggestions on how I can facilitate that kind of contribution (a wiki would be really great, IMO); such contributions are extremely welcome :) I'll think of a way to implement what you suggest.

By the way, I forgot to thank you for your list of keyboard shortcuts. I should have published it (:oops:), but you beat me to it. Well done, and thanks! ;)

Quote
Hope skies are cloudy, so you can work on PI :-)


Oh, actually, you don't have to worry about that: I work on PI even during the clearest new-moon nights, I must confess... It has been years since my last imaging night. But I'm very, very happy every time I see your beautiful images ;)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
PixInsight 1.5: Prerelease Information
« Reply #16 on: 2009 April 17 04:54:28 »
Quote from: "catalinfus"
And second of all... I think plenty of astroimagers are using modded DSLRs like Canon/Nikon and find color correction a PIA (at least I'm hoping I'm not the only one who stumbles on that regularly :-)))). Is PixInsight's color correction tool intended to be used for that with success as well?


Hello,

Yes, without doubt ColorCalibration will do the same work for DSLR images as it did in the NGC7331 image. In fact, for the same object, there is no reason to think that a CCD image will be calibrated any better than a DSLR one. I will upload an example with one of my DSLR images...


Regards,
Vicent.

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: PixInsight 1.5: Prerelease Information
« Reply #17 on: 2009 April 17 04:57:56 »
Hey David,

Quote
Glad to see the script is useful 8). But I have one question: I bet that aggregated image is pure black in at least half its area. Doesn't that affect BackgroundNeutralization's work?


It is indeed *very* useful as a gatherer of subimages :)

Black pixels are automatically rejected (ignored, actually) by both BackgroundNeutralization and ColorCalibration, so there's no problem at all with them ;)
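
For the curious, the idea can be sketched in a few lines (my own illustration only, with an invented helper name; the actual sampling performed by BackgroundNeutralization and ColorCalibration is more sophisticated):

Code:
import numpy as np

def background_estimate(channel):
    """Median background of one channel, ignoring pure black (zero) pixels.

    Illustrative sketch only: it just shows why black borders or empty
    mosaic areas do not bias the background estimate.
    """
    samples = channel[channel > 0.0]          # drop black pixels
    return float(np.median(samples))

# An image whose right half is pure black still yields the background
# level of the populated half.
rng = np.random.default_rng(0)
img = np.zeros((100, 200))
img[:, :100] = 0.05 + 0.01 * rng.standard_normal((100, 100))
print(background_estimate(img))               # ~0.05, unaffected by the black half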

Quote
I'll wait for 1.5 to be released and will explore some other features, namely the possibility of accessing the desktop background and icons that Juan anticipated


Keep your fingers warm; you'll have these things ready for testing in a few days ;)
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Harry page

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1458
    • http://www.harrysastroshed.com
PixInsight 1.5: Prerelease Information
« Reply #18 on: 2009 April 17 06:47:14 »
Hi Juan

Good to see some much-needed tools; I look forward to being able to use them!


When you have finished version 1.5, I am sure we can come up with a list for 1.6.


Regards Harry
Harry Page

Offline ManoloL

  • PixInsight Addict
  • ***
  • Posts: 220
PixInsight 1.5: Prerelease Information
« Reply #19 on: 2009 April 17 10:27:49 »
Quote from: "vicent_peris"


Yes, without doubt ColorCalibration will do the same work for DSLR images as it did in the NGC7331 image. In fact, for the same object, there is no reason to think that a CCD image will be calibrated any better than a DSLR one. I will upload an example with one of my DSLR images...


Regards,
Vicent.


Hi Vicent,

Could the issue I describe in
  http://pixinsight.com/forum/viewtopic.php?t=1085
be a problem? In essence, images stacked with DSS can have their maximum values around 0.996, with slightly different values in each channel, rather than at 1.00000.

Regards,

Manolo L.

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
PixInsight 1.5: Prerelease Information
« Reply #20 on: 2009 April 17 12:57:02 »
Hi Manolo,

Quote
Could the issue I describe in http://pixinsight.com/forum/viewtopic.php?t=1085 be a problem?


Not at all (and by the way, sorry for having missed that message of yours).

The fact that DSS produces maximum values different from one is irrelevant here. In fact, it is rather a good sign, indicating that no pixel in your image is saturated. That your raw frames were initially clipped at 1 is not surprising either. Keep in mind that after averaging a set of n images, each pixel is the average of n original pixels, so a final pixel can only be equal to 1 if, by chance, all of the original pixels at the same coordinates are equal to 1, which is unlikely.
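
Just to illustrate the arithmetic with a toy example (unrelated to DSS itself): even if every individual frame is clipped at 1, the per-pixel average only reaches 1 where all frames happen to be saturated at the same coordinates, which is very unlikely.

Code:
import numpy as np

rng = np.random.default_rng(1)
n = 16                                                   # number of frames averaged
# Simulated frames clipped to [0, 1], each containing some saturated pixels.
frames = np.clip(rng.normal(0.7, 0.25, size=(n, 256, 256)), 0.0, 1.0)

mean_image = frames.mean(axis=0)
print(frames.max())       # 1.0: individual frames are clipped at 1
print(mean_image.max())   # < 1.0: no pixel saturates in all n frames at once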

The maximum values, or the relationship between them, will not affect the algorithms used in ColorCalibration. These algorithms are extremely robust, and they sample the data independently for each color channel.

That said, what is not advisable is to use any of the background "calibration" or "equalization" options available in DSS if you are going to use our BackgroundNeutralization afterwards. There are two main reasons. One is that doing so duplicates tasks, which can only degrade the data. The other is that in DSS these operations apply nonlinear transformations, if I am not mistaken. In PixInsight, both BackgroundNeutralization and ColorCalibration apply strictly linear transformations.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline catalinfus

  • Newcomer
  • Posts: 18
    • http://www.catalinfus.ro
PixInsight 1.5: Prerelease Information
« Reply #21 on: 2009 April 17 12:57:37 »
Hi Juan and Vicent,

Hehehe... I just realized that I only skimmed the post this morning :-)
Indeed, the problem that you now solve in two steps, with BackgroundNeutralization + ColorCalibration, was the hardest to fix with a modded DSLR. The NGC 7331 example is REALLY what I needed...
The problem I faced was that when I applied DBE without equalizing the color channels, the background was color corrected very well, but the nebulae/globulars/galaxies outside the background-corrected area kept a red color shift...
I just can't wait to get 1.5 and reprocess all my images :-)!

All my best!!!

P.S. I'm still waiting for the sun.... :-))))))

Offline ManoloL

  • PixInsight Addict
  • ***
  • Posts: 220
PixInsight 1.5: Prerelease Information
« Reply #22 on: 2009 April 18 00:23:13 »
Hello again Juan,

Couldn't you give us an early look at a few modules, such as BackgroundNeutralization and ColorCalibration, to keep us busy during these rainy days and let us go over the images we have parked while waiting for these utilities?
I suspect that if you haven't done so, it's because they don't work with V1.4.

Regards,

Manolo L.

Offline ManoloL

  • PixInsight Addict
  • ***
  • Posts: 220
PixInsight 1.5: Prerelease Information
« Reply #23 on: 2009 April 18 01:48:00 »
Quote from: "Juan Conejero"


That said, what is not advisable is to use any of the background "calibration" or "equalization" options available in DSS if you are going to use our BackgroundNeutralization afterwards. There are two main reasons. One is that doing so duplicates tasks, which can only degrade the data. The other is that in DSS these operations apply nonlinear transformations, if I am not mistaken. In PixInsight, both BackgroundNeutralization and ColorCalibration apply strictly linear transformations.


Hello again Juan,
Since for now I calibrate, register and stack with DSS, I am clear on that as far as the RGB channels adjustment goes; but as I usually stack with sigma clipping (it is almost a miracle when a plane doesn't sneak into every single one of my frames), I have been advised to use Per Channel Background Calibration.

The program's (DSS) documentation says:

- With the Per Channel Background Calibration option, the background of each channel is adjusted separately to match the background of the reference frame.

- With RGB Channels Calibration, the three red, green and blue channels of each light frame are normalized to the same background value, which is the minimum of the three median values (one for each channel) computed from the reference frame.
Besides creating compatible (stacking-friendly) images, this option also creates a neutral gray background. A side effect is that the overall saturation of the stacked image is relatively low (grayscale look).

It is important to select one of these options when using the Kappa-Sigma Clipping or Kappa-Sigma Clipping Median methods, to ensure that the images being stacked all have the same background value.

I ruled out RGB Channels Calibration, since it was clear that its effects would hurt the chances of adjusting the colors afterwards with Pixi. But in view of your reply, I now wonder whether the option I do use also affects the subsequent processing of the image in PixInsight.
If that is the case, the only way out for me to keep using DSS for calibration, given that it is an all-or-nothing program that does not let you obtain just the calibrated images, would be to run the stacking while saving the intermediate registered images. I would then set the stacked image aside and could use the intermediate ones, in FIT format, to stack them with the new ImageIntegration tool.
Regards,

Manolo L.

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
PixInsight 1.5: Prerelease Information
« Reply #24 on: 2009 April 18 03:16:08 »

Hi Manolo,

There are several different things here. I was referring to the adjustment of the RGB channels to achieve a neutral background.

A very different thing is the normalization of the images during integration. The latter is not merely advisable; it is absolutely essential. What our ImageIntegration calls normalization is what DSS calls background calibration. Well, it is not exactly the same thing, because we do more than just make the image backgrounds compatible, but in terms of the function it performs, it is the same concept.

I am copying the relevant information from the English DSS documentation (I have seen some important inaccuracies in the Spanish translation):

Quote
Background Calibration

The Background Calibration consists in normalizing the background value of each picture before stacking it.

The background value is defined as the median value of all the pixels of the picture.

Two options are available.

* With the Per Channel Background Calibration option the background for each channel is adjusted separately to match the background of the reference frame.
   
* With the RGB Channels Calibration the three red, green and blue channels of each light frame are normalized to the same background value, which is the minimum of the three median values (one for each channel) computed from the reference frame. On top of creating compatible images (stacking wise), this option also creates a neutral gray background. A side effect is that the overall saturation of the stacked image is quite low (grayscale look).

It is important to check one of these options when using Kappa-Sigma Clipping or Kappa-Sigma Clipping Median methods to ensure that the pictures being stacked have all the same background value.
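
Read literally, the two options could be sketched as follows (my own paraphrase in code, with invented function names and a simple additive offset per channel; DSS's actual implementation may well scale rather than offset):

Code:
import numpy as np

def per_channel_calibration(frame, reference):
    """Match the median background of each channel to the reference frame.

    frame and reference are (height, width, 3) arrays. Illustrative only.
    """
    offsets = np.median(reference, axis=(0, 1)) - np.median(frame, axis=(0, 1))
    return frame + offsets

def rgb_channels_calibration(frame, reference):
    """Normalize all three channels to the minimum of the reference medians.

    This is the option that also produces a neutral gray background
    (and the grayscale look mentioned above).
    """
    target = np.min(np.median(reference, axis=(0, 1)))
    offsets = target - np.median(frame, axis=(0, 1))
    return frame + offsets

# Toy usage: a light frame with a slightly different sky level per channel.
rng = np.random.default_rng(0)
reference = rng.uniform(0.08, 0.12, size=(64, 64, 3))
light = rng.uniform(0.10, 0.16, size=(64, 64, 3))
print(np.median(per_channel_calibration(light, reference), axis=(0, 1)))
# -> each channel now matches the corresponding reference median
print(np.median(rgb_channels_calibration(light, reference), axis=(0, 1)))
# -> all three channels share the minimum reference median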


The last paragraph of that quote is crucial. When we perform the integration (stacking) of a set of images using a pixel rejection algorithm such as sigma clipping, what we want is to exclude (reject) the pixel values that are identified as outliers, that is, values that are not (statistically) representative of the true value of a pixel in the image. Typical examples are a plane or satellite trail, or a cosmic ray hit.

But for the pixel rejection algorithm to work correctly, all the images must have mutually compatible values. In the case of sigma clipping, we want the histogram curve formed with a stack of pixels from the set of images being integrated to show a single peak (that is, to be unimodal) and to have the smallest possible dispersion of values. Less graphically, the goal is to minimize the mean distance between all the pixel values at the same coordinates. Otherwise the standard deviation is no longer a representative estimate of the actual dispersion, the median is no longer a good estimate of the central value, and sigma clipping works much worse than a simple average of the images without rejection. Normalization (or background calibration in DSS) exists precisely to make the images compatible in this sense.
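
To make the rejection step concrete, here is a minimal sketch (a single rejection pass on an already-normalized, single-channel stack, with an invented helper name and kappa value; ImageIntegration's actual algorithms are considerably more elaborate): once the backgrounds are compatible, an outlier such as a plane trail falls far from the per-pixel median and is excluded before averaging.

Code:
import numpy as np

def sigma_clip_mean(frames, kappa=2.5):
    """Mean combination with one pass of kappa-sigma pixel rejection.

    frames: already-normalized stack of shape (n_frames, height, width).
    """
    median = np.median(frames, axis=0)                  # per-pixel central value
    sigma = frames.std(axis=0)                          # per-pixel dispersion
    keep = np.abs(frames - median) <= kappa * sigma     # flag non-outliers
    keep_count = np.maximum(keep.sum(axis=0), 1)        # avoid division by zero
    return (frames * keep).sum(axis=0) / keep_count

# Toy stack: 8 normalized frames, one of them crossed by a bright trail.
rng = np.random.default_rng(2)
frames = 0.1 + 0.02 * rng.standard_normal((8, 64, 64))
frames[5, 32, :] = 0.9                                  # plane/satellite trail

result = sigma_clip_mean(frames)
print(result[32, :].max())     # close to the 0.1 sky level: the trail is rejected

Without the prior normalization, differing sky levels would inflate the per-pixel dispersion and the same trail could easily survive the clipping.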

A different matter is how the normalization is carried out. We apply strictly linear transformations, which, in our opinion, is how this job has to be done. The normalization applied for pixel rejection does not necessarily have to be applied afterwards during the integration itself (for example, when computing the mean of the pixels that survive rejection); at least in our case this is optional. When we publish 1.5 we will make new video tutorials explaining how ImageIntegration works and how the normalization should be done.

Regarding DSS, my recommendation in principle is that you select the first option, that is, "Per Channel Background Calibration", and that you neutralize the background afterwards in PI with BackgroundNeutralization (once 1.5 is published, of course :) ).

=========================================================

Summary in English -- Manolo asked about background calibration procedures applied in DeepSkyStacker, and their repercussion for subsequent processing in PixInsight, specifically with the new BackgroundNeutralization and ColorCalibration tools available in the upcoming 1.5 version. My recommendation is to apply the "Per Channel Background Calibration" option when integrating images with sigma clipping rejection. Instead of using the "RGB Channels Calibration" option, my advice is to use the new BackgroundNeutralization tool available in PI 1.5.

When we integrate (stack) a set of images using a pixel rejection algorithm, such as sigma clipping for example, we pursue to exclude (reject) those pixel values that are identified as outliers, or values that are not (statistically) representative of the true value of a given pixel in the image. Typical examples are planes, satellites and cosmic rays.

But, for a pixel rejection algorithm to work correctly, all images being integrated must have compatible pixel values. In the case of sigma clipping for example, we want the histogram curve formed with a stack of pixels from the set of integrated images to show a unique peak (unimodal distribution) with the minimum possible dispersion. Less graphically, we want to minimize the mean distance between all pixel values at the same image coordinates. If this condition is not met, the standard deviation can't be a good estimate of the actual dispersion of values, the median can't be a good estimate of the actual central value, and then sigma clipping works much worse than a simple average of images without rejection. Normalization (or background calibration in DSS) is intended precisely to achieve compatibility between images, in the sense that we have just explained.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
PixInsight 1.5: Prerelease Information
« Reply #25 on: 2009 April 18 15:44:27 »
Juan,

These will all be fantastic additions to an already awesome program.  I especially look forward to using the BackgroundNeutralization and ColorCalibration tools.  As someone who is slightly red/green color blind, I'll put them to good use. :D

I also look forward to using ImageIntegration.  I've always had a question about the best way to combine all the images once their outlying pixels have been rejected.  It appears we have several choices as mentioned in the quote below:

 
Quote
Mean, median, minimum and maximum image combination operations.


When working with light frames, which method is best?  I would think one would want to use all available frames.  This would obviously rule out median, minimum, and maximum, since a single pixel would be chosen from the appropriate frame to generate the new image.  As a result, mean would likely be a better candidate for light frames, right?  Not being a mathematical genius, I'm not sure whether mean truly takes advantage of all the frames either.  Perhaps you could enlighten me?  With what little knowledge I possess of image integration, I would think the best solution is some form of sum.  Are there any plans to offer Sum as a choice in PixInsight?  I would think this would be the best way to ensure you are taking advantage of all the available data; however, it does have the disadvantage of increasing noise, especially in the background.  Does anybody else have an opinion on the best way to combine light frames?  I just don't want to be using a method that doesn't take advantage of my hard-earned data.

Thanks,

Wade

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
PixInsight 1.5: Prerelease Information
« Reply #26 on: 2009 April 18 18:20:24 »
Hi Wade,

Thank you!

Quote
Does anybody else have any opinion on the best way to combine light frames? I just don't want to be using a method that doesn't take advantage of my hard earned data.


Mean combination (averaging pixels) provides the highest SNR improvement. The median, minimum and maximum operations can be used in special cases, but as a general rule, always use mean combination to integrate light frames.

Quote
Are there any plans to have a Sum as a choice in PixInsight?


A straight sum of pixels has two problems:

- The sum can easily yield values outside the available numeric range. When that happens, we have to choose between (1) truncating out-of-range values, which means a loss of data, or (2) rescaling the resulting image, which means a loss of dynamic range. Both options are very ugly.

- Even if the result doesn't overflow, summing pixels gives no SNR advantage over averaging them: the sum is just the mean multiplied by the number of frames, so signal and noise are scaled by the same constant and the signal-to-noise ratio is exactly the same as with mean combination.

Mean combination improves the SNR because the noise, being random and uncorrelated between frames, tends to average out, while the signal stays essentially invariant since it is the same in all images.
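
A quick numerical check of both points, with synthetic data (a toy sketch, not tied to any particular tool): the noise of the mean of n frames falls roughly as 1/sqrt(n), while the straight sum is just that mean scaled by n, carrying the same SNR but pushing the values far outside a [0,1] range.

Code:
import numpy as np

rng = np.random.default_rng(3)
n = 25
signal = 0.6                                   # constant "true" pixel value
frames = signal + 0.05 * rng.standard_normal((n, 256, 256))

mean_img = frames.mean(axis=0)
sum_img = frames.sum(axis=0)

print(frames[0].std())                      # ~0.05   noise of a single frame
print(mean_img.std())                       # ~0.01   0.05 / sqrt(25): SNR x5
print(np.allclose(sum_img, n * mean_img))   # True: the sum is just a scaled mean
print(sum_img.max())                        # ~15: far outside a [0, 1] range
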
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline twade

  • PTeam Member
  • PixInsight Old Hand
  • ****
  • Posts: 445
    • http://www.northwest-landscapes.com
PixInsight 1.5: Prerelease Information
« Reply #27 on: 2009 April 18 18:31:16 »
Juan,

Thanks for the excellent explanation.

I do have one more question.  When using the mean, how does the signal from the target build up as you combine more images?  For example, do you generally build up the same amount of signal taking twenty 5-minute exposures as you do taking ten 10-minute exposures?  If so, I'm having a hard time visualizing how that can be, since each of the twenty 5-minute exposures will have half the signal of each 10-minute exposure.  I just don't see how you can build up to the 10-minute exposure signal by using a mean value.  It seems to me the 10-minute exposures will capture more photons in the extremely faint regions of the target, whereas the 5-minute exposures may never capture such photons at all.  Is this logic incorrect?  Have I missed something?

Thanks,

Wade

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
Sum vs Mean
« Reply #28 on: 2009 April 19 10:56:13 »
Although we always use mean, one of the SSRO members continues the discussion:

Quote
  ah...now i see the pixi point -- yes, mean combine DOES do what they say, but they don't talk about any actions AFTER the mean combine...if you stretch AFTER MEAN combining, you wind up with the same data as if you summed


Any comments?
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
PixInsight 1.5: Prerelease Information
« Reply #29 on: 2009 April 19 11:05:34 »
If you do a linear stretch then you could end up with the same data, yes. Given that we work with 32-bit floats between 0 and 1, it makes little sense to do linear stretches.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity