Show Posts



Topics - vicent_peris

76
We are going to join these two images of the comet Holmes, taken by my friend José Luis Lamadrid and me:

30x15”:



26x1':




The RGB channels of these two DSLR images have been rescaled to have the proper color balance, with a proportion of 1:2.22:2.86.

As you can see, the one minute exposure has the inner coma completely saturated, and we are going to use the 15 second exposure to recover information lost by the camera's limitations.

The first step is to calculate the fitting factor between the two images. To do this, we need to know the illumination of three regions of the images: two differently illuminated zones of the comet, and the sky background level.

In the one minute image, we create two small previews that will serve as the illumination references for the comet. It's important to avoid very highly illuminated pixels (due to possible non-linearity of the sensor, especially with ABG-enabled ones) and saturated stars. For a better view of the image, we can adjust the ScreenTransferFunction, as pictured below:



We must take care to place the low-illumination preview in an area with sufficient signal, because this preview will have a much larger amount of noise in the short exposure image. In this case, the mean values for the previews in this image are:

_1min_high preview:

R: 0.2489940927
G: 0.3614411545
B: 0.4342339171

_1min_low preview:

R: 0.1453883686
G: 0.2159542766
B: 0.2605371721

Now, we must define one preview over a sky background region. To see this region well, we are going to apply a rather aggressive STF:



OK, now we have defined the three regions we need, but we must compare them with the information in the fifteen second exposure. Just drag and drop the preview selector (the vertical tab with the preview identifier) onto the view selector tray of the other image to duplicate the previews:



Convert these previews into independent images by dragging them over the background of the application. We can rename the identifiers of the new images, as seen below, and iconize them, because we won't need to look at these images anymore:



Now for the fun part: the maths. We will directly scale the one minute exposure to fit it to the fifteen second one. Obviously, we will use the PixelMath module. The equation we have to write, with the identifiers we're using, is below:

((_1min-Med(_1min_bg))*((Avg(_15sec_high)-Med(_15sec_bg))-(Avg(_15sec_low)-Med(_15sec_bg)))/((Avg(_1min_high)-Med(_1min_bg))-(Avg(_1min_low)-Med(_1min_bg))))+0.05

This equation will multiply the one minute image by the fitting factor. Some notes on the equation:

For the comet regions, we calculate the average (Avg function) pixel value, because we want to know the total amount of light the camera is detecting. But for the background region, we calculate the median (Med function) value, to prevent measurement errors due to noise and stars in the area.
In the equation, we apply the fitting factor to the background-subtracted image, and afterwards we add a small pedestal (here 0.05) to preserve all the information in the faintest areas of the image.
Of course, we must deactivate the “Rescale result” option!
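
If it helps to see the same arithmetic outside PixelMath, here is a minimal numpy sketch of the rescaling; the stats dictionary and its keys are illustrative placeholders, not PixInsight API:

import numpy as np

def rescale_long_exposure(img_1min, stats, pedestal=0.05):
    """Fit the 1 min image to the 15 s one, as in the PixelMath
    expression above. Each entry of `stats` is a per-channel value:
    Avg of the high/low comet previews, Med of the background preview."""
    f = (((stats['15sec_high'] - stats['15sec_bg'])
          - (stats['15sec_low'] - stats['15sec_bg']))
         / ((stats['1min_high'] - stats['1min_bg'])
            - (stats['1min_low'] - stats['1min_bg'])))
    # Subtract the sky background, apply the fitting factor, add the pedestal.
    return (img_1min - stats['1min_bg']) * f + pedestal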


We will send the result to a new image, named _1min_rescale:



The resulting image, below, is very dark, as we are multiplying it by roughly 0.25, and has the median values of its RGB channels at 0.05:



At this point, we are ready to join the two exposures. Or are we? To cover the saturated area of the longer exposure with the information from the short one, we can do a maximum operation. But doing this on the whole image is a very bad idea, because the fifteen second exposure has much more noise in the less illuminated areas than the one minute exposure. So we need a mask!

We only need to recover the information in the areas where at least one of the three RGB channels is saturated. The first step in making the mask is to calculate a black and white image where each pixel is the maximum of its RGB values. The equation in PixelMath is rather simple:

Max($target[0],$target[1],$target[2])

The output of the PixelMath instance will be a grayscale image, with the “HDR_Mask” identifier. We must apply this calculation to the original one minute exposure:



This is the resulting image:



Once we have the desired B/W image, we must decide where the illumination limit lies, above which we will superpose the short exposure image. This can be accomplished with a curve transform. In this case, the limit will be at a pixel value of 0.7, with a transition zone of ±0.05. This transition is important to mitigate any small error in the fitting factor. Due to the threshold nature of this mask, I think it's better to make the curve with linear interpolation:
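
In code terms, this curve is just a piecewise-linear remapping. A minimal numpy sketch, assuming the mask is built from the per-pixel maximum of the RGB channels as described above:

import numpy as np

def threshold_mask(rgb, limit=0.7, transition=0.05):
    """Max($target[0],$target[1],$target[2]), then a linear-interpolation
    curve: 0 below limit - transition, 1 above limit + transition."""
    hdr_mask = rgb.max(axis=2)                 # grayscale HDR_Mask image
    return np.interp(hdr_mask,
                     [limit - transition, limit + transition],
                     [0.0, 1.0])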



After applying the curve transform, we have this image:



It's convenient to make the mask a bit smoother. This is easily done with the À Trous Wavelet tool, disabling the first layers:
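
The smoothing can be pictured as an à trous decomposition in which the finest layers are simply left out of the reconstruction. A rough sketch of that idea (a simplified B3-spline à trous, not the actual PixInsight tool):

import numpy as np
from scipy.ndimage import convolve

def drop_fine_layers(img, n_remove=2, n_layers=4):
    """À trous decomposition with a dilated B3-spline kernel; rebuild
    the image from the residual plus the coarser wavelet layers only."""
    k1d = np.array([1., 4., 6., 4., 1.]) / 16.0
    smooth = img.astype(float)
    kept = np.zeros_like(smooth)
    for j in range(n_layers):
        step = 2 ** j                  # insert 2**j - 1 zeros between taps
        k = np.zeros(4 * step + 1)
        k[::step] = k1d
        s = convolve(smooth, np.outer(k, k), mode='nearest')
        if j >= n_remove:              # keep only the coarser layers
            kept += smooth - s
        smooth = s
    return kept + smooth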



This is our final mask:



Finally, we can activate the mask on the rescaled one minute exposure and superpose the fifteen second image over it. To do this, we simply substitute the one minute exposure with the fifteen second one; it's important to subtract the background level of the fifteen second image and add the same pedestal (0.05), to fit it to the other image:
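
Pixel-wise, applying PixelMath through the active mask amounts to a masked blend. A minimal numpy sketch, with argument names that simply follow the identifiers used in this post:

import numpy as np

def hdr_blend(long_rescaled, short, short_bg_region, mask, pedestal=0.05):
    """Replace the masked (saturated) areas of the rescaled 1 min image
    with the background-subtracted, pedestal-matched 15 s image."""
    bg = np.median(short_bg_region, axis=(0, 1))   # per-channel background
    short_fit = short - bg + pedestal
    m = mask[..., None]                            # broadcast over channels
    return m * short_fit + (1.0 - m) * long_rescaled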




This is our final result:



If we raise the midtones of the image, we can see the whole dynamic range better:


77
Hi all,


here I am going to explain, in a simple way, my technique for photographing objects with a dynamic range too large to be accommodated by the camera's dynamic range.

In the field, the techniques are the usual ones: we have to make various exposures of different lengths to cover the entire dynamic range of the object, from the brightest parts to the faintest ones. Of course, the deeper we want to go, the larger the dynamic range to capture.

But where this technique differs from the others is in the way the high dynamic range image is composed. We are going to synthesize an image similar to the one produced by an ideal camera. This camera would have pixels with a gigantic electron well (on the order of millions of electrons), and thus virtually no saturation point. We would be able to make long exposures of the scene to capture the faintest parts without any saturation of the brightest ones, and the output image would use a data range larger than 16 bits. Through this technique, I've obtained images with a real dynamic range as high as 24 bits of gray levels per channel.

As you may be guessing, we are going to compose the HDR image in a completely linear way. Imagine the object being photographed as a pyramid. The height of the pyramid represents the dynamic range of the object, and we truncate it horizontally into various sections, because our camera is not able to pick up the entire pyramid at once. We know the total height of the pyramid, but we don't know, a priori, the height of each section. This is the parameter we are going to determine, and it is the key to properly reconstructing our pyramid.

This is the basic principle. Since our camera doesn't have a sufficiently large dynamic range, we are going to superpose the shorter exposures over the longer ones, but only where the longer exposure is saturated. As each exposure is a different section of the pyramid, we have to rescale the longer image to fit it to the shorter one.

Imagine we have a light source with a given intensity, and imagine we make two photographs of this source with different durations, E1 and E2, using a one pixel camera. If the signal recorded during E1 is S1, then the signal recorded during E2 is given by:

S2 = S1*(E2/E1)

In more practical terms:

We take two exposures of the light source: one of 1 second (E1) and another of 3 seconds (E2). If during the first exposure the light produces a signal of 100 electrons, the signal for the second exposure will be 300 electrons:

S2 = 100*(3/1) = 300 e-

To have two identical one pixel images, we must therefore divide S2, once digitized, by a factor of three, so that we have the same numerical value in the pixel of both images. This proportion, as we have seen above, is the key to joining both images.

To fit both exposures together, we need a reference point. As always, in daylight photography we are in an almost ideal scenario: our reference point is simply the exposure length. If we make 1/60s and 1/30s exposures, once image calibration has been done (especially the subtraction of the bias image), we just divide the 1/30s image by a factor of two.

But, again, there are several factors that make this operation useless in astrophotography. Atmospheric extinction, sky transparency and sky background illumination can vary between exposures. Therefore, we need to calculate the proportions between exposures directly from the objects photographed.

The idea comes from a technique used to measure the linearity of an image sensor. Adapted to the problem of HDR images, the idea is to measure the difference in illumination between two regions of the same image, and compare this difference to the difference observed in the same regions of the other exposure. This gives us the proportion between the illuminations of the two images. The equation that gives us the fitting factor, F, is:

F = (Image1_region1 - Image1_region2) / (Image2_region1 - Image2_region2)

This will correct for atmospheric extinction and sky transparency, but it will not correct for different sky background levels. So we have to modify this equation to take the sky brightness into account. We must thus subtract the sky background level from each image area we measure:

F = ((Image1_region1 - Image1_bg) - (Image1_region2 - Image1_bg)) / ((Image2_region1 - Image2_bg) - (Image2_region2 - Image2_bg))

This last equation provides correction for all the factors that differentiate our astrophotos from a daylight picture (excluding, of course, random noise). As it's necessary to view the problem from a practical point of view, we will continue very soon in another post.
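
As a quick sanity check, the equation can be written as a tiny Python function; the numbers below are made up, chosen so the long exposure (Image2) collects about four times the signal of the short one (Image1):

def fitting_factor(i1_r1, i1_r2, i1_bg, i2_r1, i2_r2, i2_bg):
    """F from the background-subtracted illumination differences
    of the same two regions in both images."""
    return ((i1_r1 - i1_bg) - (i1_r2 - i1_bg)) / ((i2_r1 - i2_bg) - (i2_r2 - i2_bg))

# F is the factor that scales the long exposure down to the short one.
print(fitting_factor(0.10, 0.06, 0.02, 0.36, 0.20, 0.04))   # -> 0.25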

78
Hi all,


I'm going to explain, in a simple way, my technique for photographing objects with a dynamic range too large to be captured by the camera's dynamic range.

In the field, the techniques are the usual ones: we have to make several exposures of different lengths to cover the entire dynamic range of the object, from the brightest parts to the faintest ones. Of course, the deeper we want to go, the larger the dynamic range to capture.

However, what differentiates this technique from the others is the way the high dynamic range image is composed. We are going to synthesize an image similar to the one that would be produced by an ideal camera. This camera would have pixels with a gigantic electron well (on the order of millions of electrons), so it would have virtually no saturation point. We would then be able to make long exposures to capture the faintest parts without saturating the brightest ones, and the output image would use a data range larger than 16 bits. With this technique, I have obtained images with a real dynamic range as high as 24 bits of gray levels per channel.

As you may be guessing, we are going to compose the HDR image in a completely linear way. Imagine the object we are going to photograph as a pyramid. The height of our pyramid represents the dynamic range of the object being photographed, and we truncate it horizontally, since the pyramid is too large for our camera. We know the total height of the pyramid, but we don't know, a priori, the height of each section. This is the parameter we are going to determine, and it is the key to perfectly reconstructing our pyramid.

This is the basic principle. Since our camera doesn't have enough dynamic range, we are going to superpose the shorter exposures over the longer ones, but only where the longer exposures are saturated. As each exposure is a different section of the pyramid, we have to rescale the longer image to fit it to the shorter one.

Imagine we have a light source of a given intensity. And imagine we take two photographs of that source with different durations (E1 and E2), with a one pixel camera. If the signal recorded during E1 is S1, the signal during E2 will be:

S2 = S1*(E2/E1)

In more practical terms:

We take two exposures of that light source: one of one second (E1) and another of three seconds (E2). If during the first exposure the light produces a signal of 100 electrons, the signal in the second exposure will be 300 electrons:

S2 = 100*(3/1) = 300 e-

To have two identical one pixel images, we must therefore divide S2, once digitized, by a factor of three, so that we have the same numerical value in the pixel of both images. This proportion, as we have seen above, is the key to joining both images.

To fit the two exposures together, we need a reference point. As always, in daylight photography we are in an almost ideal scenario: our reference point is simply the exposure length. If we make two exposures of 1/60s and 1/30s, once calibration has been done (especially the subtraction of the bias image), we only have to divide the 1/30s image by a factor of two.

But, again, several factors will make this operation useless in astrophotography. Atmospheric extinction, sky transparency and sky background brightness can vary from one exposure to the next. Therefore, we need to calculate the proportions between exposures directly from the photographed objects.

The idea comes from a technique used to measure the linearity of an image sensor. This idea, adapted to the problem of high dynamic range images, consists of measuring the difference in illumination between two regions of the same image, and comparing this difference with the difference observed in the same regions of the other exposure. This will give us the illumination proportion between the two images. The equation that gives us the fitting factor, F, is:

F = (Image1_region1 - Image1_region2) / (Image2_region1 - Image2_region2)

This will correct for the effects of atmospheric extinction and sky transparency, but it will not correct for changes in the sky background level. We therefore have to modify this equation, taking the sky background brightness into account. We must subtract the background from each image area to be measured:

F = ((Image1_region1 - Image1_bg) - (Image1_region2 - Image1_bg)) / ((Image2_region1 - Image2_bg) - (Image2_region2 - Image2_bg))

This last equation gives us the correction for all the factors that differentiate our astrophotos from a daylight picture (excluding, of course, random noise). We will continue in an upcoming post, since it is necessary to look at this problem from a practical point of view.

79
Gallery / DSLR Holmes, coma + tail
« on: 2007 November 04 22:34:33 »
Hello all,

after some months without doing any astrophotography, my friend José Luis Lamadrid and I return with a photo of the much photographed comet Holmes.

It's a DSLR photo (Canon 350D) with a Tak Epsilon 180ED. We made exposures of different lengths: 4x10', 26x1' and 30x15", for a total exposure of 73'30". The comet was photographed at our usual observing site in the Javalambre mountains, in Teruel (Spain), at 2020 meters above sea level.

The processing has been very hard, so this is a first version... but we've decided to publish it because of its timeliness. As a bonus, it is today's APOD (http://antwrp.gsfc.nasa.gov/apod/astropix.html).

Calibration was made with DeepSkyStacker and all the processing with PixInsight.

The images:

800 pixels wide:


Full resolution:
http://datastore.astrofoto.es/holmesmax.jpg

We're very happy with the overall result... the "bubble effect" of the two comas is clearly visible, and the image shows the dynamic range from the nucleus to the faint ion tail.


Regards,
Vicent.

80
PCL and PJSR Development / Seeing Measurement
« on: 2007 September 19 05:01:36 »
Hi all,

this week we are having a meeting, Thursday and Friday, between several Spanish universities, to continue working on the PAU project (Physics of the Accelerating Universe). It's a 2.5 meter telescope with a 500 megapixel camera, to be installed at Javalambre (Teruel) within a few years.

Well, the thing is that José Luis and I are going to begin the site testing for the exact location of the observatory. We will spend a few months taking measurements of seeing, atmospheric extinction and sky background brightness.

But, taking advantage of this week's meeting, it occurred to me to improvise a seeing monitor with the Mewlon 210. Basically, it means placing a mask with two 2 cm holes in front of the telescope aperture and, using it as a Hartmann mask, leaving the image defocused so that two images of the same star appear, separated by 40 - 50 pixels. Since it will be a very bright star (possibly Vega), we will have enough signal to take 20 - 30 images per second at f/120, and the two spots will still be relatively small.

So, why all this fuss? Because the point is to measure the variation of the distance between the two spots. And what better way to do it than with a PI script??? :roll:

I've talked with Juan, and he told me it would be very easy to do. It would mean taking each image and splitting it in two, leaving one spot in each half-image. Then, with the DynamicAlignment algorithm, the two centroids are computed and the distance between them is found. Then, after doing the calculations over the whole set of images, the variance of the distance in pixels is obtained. I should clarify that the video taken with the webcam would previously be converted to BMPs.
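
This is not DynamicAlignment itself, but a minimal numpy sketch of the measurement the script would make, assuming the BMP frames are already loaded as 2-D arrays with one Hartmann spot in each half:

import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid of a single defocused spot."""
    img = np.clip(img - np.median(img), 0.0, None)   # crude background cut
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def separation_variance(frames):
    """Variance (pixels^2) of the distance between the two spots."""
    distances = []
    for f in frames:
        half = f.shape[1] // 2                       # split each frame in two
        y1, x1 = spot_centroid(f[:, :half])
        y2, x2 = spot_centroid(f[:, half:])
        distances.append(np.hypot(y2 - y1, (x2 + half) - x1))
    return np.var(distances)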

And here goes the burning question... Is it impossible to ask for that script to be ready by Friday (that is, the day after tomorrow)?! :lol: I say this because I think we would make quite an impression at the meeting. :wink: Everybody would be surprised that we already had seeing measurements, since they are going to buy a monitor from Astelco, with its software and everything...


Well, this is my proposal... Even if it weren't ready by Friday, I think a module like this would be interesting, wouldn't it?
Regards,
Vicent.

81
Gallery / ALHAMBRA Survey
« on: 2006 December 18 15:09:42 »
Hi all,

Here is my first professional work as the astrophotographer of the Astronomical Observatory of the University of Valencia (OAUV - Spain). Today we have published the press release of the ALHAMBRA project. This survey is being done with the 3.5 meter telescope at the Spanish observatory of Calar Alto; the objective is to obtain very precise statistical data on the young Universe, as we will have about 650,000 galaxies, down to magnitude 26, over the complete image fields.

This survey is very special because the fields are being photographed with 20 adjacent filters, each one with a 31 nm bandpass, so we will have very precise information about the distribution of light from the near UV to the near IR for each object.

The image I present below is from one of the four 4Kx4K CCDs of the LAICA camera, with an image scale of 0.225" per pixel. The processing work has basically been the reduction of 14 of the 20 filters (between 396 and 799 nm) to the three primary colors, to emulate the color in which we would see the field with our eyes (light between 700 and 800 nm has been taken as pure red). All the processing has been done with PixInsight Standard.

You can see two small crops of the original image here:

http://c300d.pleiades-astrophoto.com/alhambra/Recorte1.jpg

http://c300d.pleiades-astrophoto.com/alhambra/Recorte2.jpg

And you can download the complete image and an illustration at 50% scale here:

http://c300d.pleiades-astrophoto.com/alhambra/AlhambraColor_100.jpg

http://c300d.pleiades-astrophoto.com/alhambra/Poster_050.jpg

For more information about the project, you can visit the webpage of the OAUV:

http://www.uv.es/obsast/alhambra/pressrelease.html

Or the Calar Alto webpage:

www.caha.es


All the images can be used, always with the correct credits, of course.


Regards,
Vicent.

82
Gallery / The Moon and Messier 31
« on: 2006 September 11 14:59:34 »
Hi all,
 
this August, José Luis and I have made two new photos: the Moon and Messier 31.

For the first, we made 114 individual exposures, beginning with 4 second ones and ending with 1/200 second ones.

Messier 31 was photographed with a Vixen Newton and a cheap second hand Konus Newton ($125 OTA <g>). Seeing was excellent during the two nights, but the Vixen coma corrector is not too sharp... and it shows some lateral color on the stars. Definitely, we will go for another coma corrector...

The two photos were assembled linearly, putting the shorter exposures over the saturated areas of the longer ones. This yields a high dynamic range, linear image. The dynamic range in these images is so great that we have processed them with 64 bits per channel precision. In fact, the Moon image needs more than 3 million gray levels to be represented in a linear way.

Once the images were combined, no layering techniques were applied to show the information over the whole dynamic range; this process was done entirely with our own wavelet techniques. All the processing was done in PixInsight.

For the moment, you can access the gallery through this address:

http://c300d.pleiades-astrophoto.com/Gallery/Gallery.html

We are going to change the DNS configuration of Astrofoto.es, so the domain will be unavailable for some days. The address above works perfectly.
 
 
 
Well, we hope you will enjoy it.
Regards,
Vicent.

83
Gallery / M13, 12.7 h., DSLR
« on: 2006 June 09 16:39:46 »
Hi all,


this is our latest work, a 12.7 hour exposure of M13. You can view the image at this address:

www.pleiades-astrophoto.com/M13/en.html

I've uploaded the image in JPEG, JPEG2000 and BMP formats. The problem is that JPEG is unable to preserve the color saturation of the nucleus, even if I save the image at 100% quality. So don't hesitate to download the other versions.

The image is also available at two resolutions: 1100 pixels wide and full resolution. In the latter, you can examine the nucleus better and view all the background galaxies.

On the webpage you can read the technical details. Briefly, these are: FS102 at f/8 and Canon 20D at ISO 800, 45" subexposures for the nucleus, and 10', 20' and 30' for the outer areas. All the exposures were made with an IDAS filter. All the processing was done in PixInsight.



Well, hope you will like it!
Regards,
José Luís and Vicent.

84
Gallery / NGC 2392
« on: 2006 March 16 01:22:58 »
Hi all,


two days ago, my friend José Luís Lamadrid and I made some first images with the Mewlon 210 and the modified 300D. This is a short exposure photo, but the overall result is nice... Only 20 x 1 minute exposures, at ISO 400, at prime focus (f/11.5).

http://c300d.pleiades-astrophoto.com/NGC2392.jpg

The preprocessing was done in Iris, and the processing in PixInsight STD (wavelets, 20 iterations of regularized Richardson-Lucy deconvolution, and noise reduction through SGBNR). The image is at its original resolution, so each pixel corresponds to 0.6 arcseconds.


Regards,
Vicent.
