PixInsight Forum (historical)
PixInsight => General => Topic started by: Jack Harvey on 2007 December 16 13:53:06
-
One of the problems with combining an Ha frame with, say, a red frame is that the FWHMs are different, and you often get that bullseye appearance, with the smaller Ha star inside the red star. Is there a way to quickly calculate how much to deconvolve the red frame to get a better fit with the smaller FWHM of the Ha? Thanks.
-
Hi again Jack,
Seems like we're both stuck on this (although I wish I had your excellent images to play around with :wink: ). I know I'm no expert here, so I'll just quote Vicent:
For the star size problem, we can combine both images with a Maximum operator (this operator will choose the maximum of each pair of values of both images). So as a result, we will have an image with bigger stars. BUT a part of the noise from the red image will be transferred to the information of the H-alpha nebula.
The easiest method I can think of now is simply multiplying the H-alpha image by a factor of >1. As the stars are much dimmer in the H-alpha image, by multiplying this image you will end up with a picture in which the stars are of the desired size, but with the immaculate H-alpha information from the nebula.
I hope this is what you had in mind.
Larry
-
I missed that section. Thanks, I will play with that some.
-
I tried that trick and did not get the Ha star sizes to increase noticeably, even when using 6x? So maybe there is a step I am missing?
-
I tried that trick and did not get the Ha star sizes to increase noticeably, even when using 6x? So maybe there is a step I am missing?
Hi Jack,
I was talking about PixelMath, not Morphological Transform. :wink:
Good luck,
V.
PS: On that message, I simply gave some ideas, as I'm not an expert in narrowband imaging.
-
Hi Larry and Jack,
I think what Vicent was referring to is something like this applied with PixelMath:
Max( halpha, 0.7*red )
where halpha and red are your Ha and red frames, respectively. The 0.7 multiplying factor is just an example; you have to find the appropriate value by trial and error.
The expression above should be applied to the halpha image, to form the new combined red channel image. You can easily try it out on a preview.
The idea behind this expression is to replace Ha pixels where red is stronger than Ha. This happens basically around stars, since these are larger on the red image. However, as Vicent pointed out, one problem is that by doing this some noise from the red frame is transferred to the Ha frame, which is not good at all. Hence the multiplying factor in the expression above.
Another way to avoid noise transfer is to use a star mask when applying the expression above. Such a mask would protect the background and relatively dim regions, where we want no red pixels at all. The mask should be active for the halpha image, which is the target of PixelMath in this case.
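If it helps to see what PixelMath is doing under the hood, here is a rough JavaScript sketch of the same operation, written pixel by pixel. It is only illustrative (the function name and the literal view identifiers "halpha" and "red" are just assumptions for this example, and it expects two registered grayscale views of equal dimensions); in practice you would simply type the expression in PixelMath:

#include <pjsr/UndoFlag.jsh>

// Illustrative only: replace the Ha image, over the whole frame, by the
// maximum of the Ha pixel and a scaled red pixel, i.e. Max( halpha, k*red ).
function maxCombine( haId, redId, k )
{
   var haView  = View.viewById( haId );
   var redView = View.viewById( redId );
   if ( haView.isNull || redView.isNull )
      throw new Error( "View not found" );

   var ha  = haView.image;   // target: the Ha frame
   var red = redView.image;  // source: the broadband red frame

   haView.beginProcess( UndoFlag_NoSwapFile );
   for ( var y = 0; y < ha.height; ++y )
      for ( var x = 0; x < ha.width; ++x )
      {
         var r = k*red.sample( x, y );   // scaled red pixel
         if ( r > ha.sample( x, y ) )    // keep the brighter of the two
            ha.setSample( r, x, y );
      }
   haView.endProcess();
}

maxCombine( "halpha", "red", 0.7 );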
Vicent, please correct this if I'm wrong.
Hope this helps
-
Another idea: perhaps you could simply eliminate the stars in the Ha image by subtracting an image containing only the small scales, made with A Trous Wavelets. You really don't need the star data from the Ha image in the first place, I would imagine. The only danger here would be eliminating small-scale structures you want to keep.
Just another 2c :)
Larry
-
hum hummmm....
I think there is a big problem in this topic... As Jack pointed out, the stars are usually bigger in the broadband images. If you compare an H-alpha image to a red image, of course you will have more nebula data in the narrowband image... BUT those tiny stars are so wonderful in this image... :wink:
I think the perfect HaRGB image would be the one with all the H-alpha data (without any noise degradation), AND with the tiny stars from the narrowband image.
Could you guys upload some test data? Honestly, I'm not able to build a workflow on this topic with my imagination alone... I have some ideas (basically shrinking the broadband stars to the size of the narrowband ones, and rescaling them by a factor to adjust their brightness to the narrowband image). I think it would be very nice for this topic to have some sample images. Perhaps a small crop of a raw image in each filter.
Regards,
Vicent.
-
I have some nice master frames of NGC 2074 which I am happy to share. I do not seem to be able to figure out how to send this stuff to the ftp.
-
Thanks to Juan, who got me hooked up with http://pteam.pixinsight.com/. I have uploaded the master frames for NGC 2074. The broadband frames are registered to the Ha frame. Also, I got two sets of red data (R1 and R2), with the idea of using one set to combine with the Ha and the other to use for the RGB. It is under the folder jack.harvey on the public http://pteam.pixinsight.com/.
So here we go Vicent, LD et al. - now we can play with this data<G>
BTW, please credit SSRO/PROMPT if you post this data anywhere, i.e. "this image is based on data acquired by SSRO/PROMPT 2007".
-
Ok, thank you very much, Jack!
I'm playing a bit with the data. For the moment, I have a Red image without H-alpha emission:
(http://datastore.astrofoto.es/Forums/PixInsight/N2074_withoutHa.jpg)
It can be very useful for making possible masks... I will continue working...
Regards,
Vicent.
-
Vicent, glad you found the data. BTW, I did correct the link to the ftp in the post above, so others can now access the data. Thanks again, Juan!
I will be watching for developments along this front (as will LD, I am sure<G>)
-
Ok, I have it! :wink:
I post my results here, and I will write up the method ASAP. This method is easily scriptable, so I think we will have a narrowband image color combination script.
This is the original combined RGB data:
(http://datastore.astrofoto.es/Forums/PixInsight/RGB.jpg)
And this is the h-alpha enhanced image:
(http://datastore.astrofoto.es/Forums/PixInsight/RGB2.jpg)
For a better comparison with the original RGB image, I've adjusted the midtones of the last image to decrease the brightness of the nebula to match the original RGB. You will see that the stars seem to disappear. :wink:
(http://datastore.astrofoto.es/Forums/PixInsight/RGB3.jpg)
You can control the H-alpha emission enhancement. This example is a bit extreme (I've multiplied the H-alpha signal by x12), but a lower factor like x5 works very well too.
Hope you like it.
Vicent.
-
That is looking very good. I assume you will have more on how to do this in the future. Maybe in the far future an Ha LRGB or Ha RGB combine function like the LRGB one would be a possibility. You could plug in the Ha, R, G, B frames and adjust how much Ha and how much multiplication to dial in<G>.
Thanks for all the work, Vicent - and quick too!
-
I tried Vicent's method and used an HaR that I multiplied by 10, and then combined that result with an RGB. It needed some histogram black point adjustment, slight color saturation and curves. Finally, a touch of GREYCstoration for smoothing. It is in my folder on the PixInsight file server, since I do not know how to put an image on this forum.
The good news is I think Vicent has solved the star problem!
-
Ok, let's go.
The basic idea is to decrease the brightness ratio between stars and nebula. If you look at both the R and H-alpha images, you will see that the FWHM is very similar. The only difference is that in the H-alpha image the stars are very much fainter. This is why, after raising the midtones in both images, the stars appear much bigger in the red image.
So the key is combining the red image with a multiplied version of the H-alpha image.
==========
The first step is to get the background level under control. We will force a background level of 0.05 with a simple formula in PixelMath. We will extract a small preview that is hopefully representative of the sky background (in this image, I have selected a small rectangle near the top left corner). Suppose we are working on the H-alpha image:
Ha - Med(Ha_background)+0.05
Ha = the H-alpha image
Ha_background = the sky background area in the H-alpha image
We subtract the median value of the sky background from the H-alpha image and then add a small pedestal of 0.05. Remember to disable the Rescale checkbox.
We will apply the same operation to the other images (the R, G and B components), so that all four images have a sky background level of 0.05.
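For example, with made-up numbers: if Med(Ha_background) is 0.093, the expression adds 0.05 - 0.093 = -0.043 to every pixel, so after the operation the median of the background preview is exactly 0.05. The same quick check applies to the R, G and B images with their own background previews.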
==========
We have to make two RGB images. The first is a normal combination of our broadband images. In this case, I have simply assigned a proportion of 1:1:1.
The second RGB image will be the H-alpha enhanced one. Let's see how we are going to make this improvement...
We will multiply the H-alpha by a factor X, through this simple formula:
(Ha * X) - (0.05 * (X - 1))
The second part of the formula is simply to maintain the sky background level at 0.05. Remember to disable the Rescale checkbox.
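(A quick check that the formula does what we want: a pure background pixel at 0.05 becomes 0.05*X - 0.05*(X - 1) = 0.05, while a nebula pixel at 0.05 + s becomes 0.05 + X*s. So only the signal above the pedestal is multiplied by X, and the background stays at 0.05.)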
After this operation, we will mix the H-alpha image with the broadband R component, through a maximum operation:
Max (Ha, R)
This will be our new R channel for the second color combination.
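If you prefer, the two steps above can be written as a single PixelMath expression for the new R channel (this should be equivalent to doing them separately; here with X = 5 as an example, and again with Rescale disabled):
Max( (Ha*5) - (0.05*(5 - 1)), R )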
==========
This is the H-alpha enhanced image when we raise the midtones level:
(http://datastore.astrofoto.es/Forums/PixInsight/HaRGB.jpg)
It looks really horrible, but don't panic. We are going to recover the correct chrominance from the first RGB image.
We will apply the same histogram adjustment to the first RGB image, and we'll extract the a and b channels of the Lab color space with the ChannelExtraction tool.
Then we will insert these components in our H-alpha enhanced color image, with the ChannelCombination tool. You can see the result in my last message.
Of course, this image is not linear at all. But you can return the image to a linear state with a second midtones adjustment of one minus the first one.
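(If you wonder why "one minus the first one" undoes the adjustment: the midtones transfer function is mtf(m, x) = (m - 1)*x / ((2*m - 1)*x - m), and if you work through the algebra you get mtf(1 - m, mtf(m, x)) = x, so the second adjustment exactly inverts the first.)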
That's all!
Regards,
Vicent.
-
I forgot to say that you should change the RGB working space (with the RGBWorkingSpace tool) to a Gamma value of 1.0; otherwise you will get the classic pink and unsaturated H-alpha enhanced color image. Try doing that with Photoshop. :wink:
Regards,
Vicent.
-
One more note... I have been making some refinements to this technique, resulting in much more accurate color representation. We will implement these refinements in the script... For the moment, the final composite:
(http://datastore.astrofoto.es/Forums/PixInsight/RGBfinal.jpg)
Vicent.
-
I have gotten all the way to the combination of the two RGBs (one without Ha and the other with). Extracting the a and b channels from the RGB without Ha and combining those a and b channels into the RGB with Ha does not give me a good image. I get blue stars, etc. So either I screwed up somewhere in the math (I don't think so) or there is more to combining the RGB with the Ha_RGB.
Or do I wait for the tool to do this?
-
Don't worry, Jack, I'm writing a small tutorial; I hope to have it tonight. I know the instructions I gave yesterday are demanding some screenshots. By the way, I'm refining the method...
See this new image... It's the result of applying an H-alpha gain of x20. Of course, the stars are somewhat bizarre, because you don't have sufficient luminance support for them, but the image is IMHO really spectacular:
(http://datastore.astrofoto.es/Forums/PixInsight/RGB4.jpg)
We are going to write a module for PixInsight in the next few weeks, immediately after the launch of PixInsight. This module will not only integrate H-alpha with RGB, but other narrowband filters too, like O-III, H-beta or S-II. Be prepared. :wink:
Vicent.
-
You mean just when I'm beginning to get a grasp on the complexities, you're going to make it easy? :wink:
This continues to be an interesting topic with astounding results, and once again shows the great possibilities of PixInsight.
Regards,
Larry
-
A module to integrate narrow band into broadband would be unique in the world of processing software.
-
In the end it seems that the processing required is harder than I thought at first, so please be patient... I think we will release two separate tools: one for generating a map of broadband emitting objects in the image, and another one to integrate broadband and narrowband images.
Vicent.
-
I am still playing with this also. BTW, in the first step where you control the background using a small part of the sky which you select as a rectangle, could you not just use ABE and then
Ha-Med(Ha_ABE)+0.05
-
Vicent, I posted my final image using this technique on the PixInsight ftp, in the folder jack.harvey. This is a promising technique.
-
Hi Jack,
your ABE idea looks very good, and it could be better for making everything automatic. Thank you!
My problem now is noise transfer from the R image to the H-alpha one. In your image, the H-alpha signal in the R image is nearly as good as it is in the H-alpha image. But if you add some noise to the R image (PixInsight has a module for adding noise), the H-alpha will be degraded... For the moment, I don't have a good solution. :cry:
Vicent.
-
Hmmmm Not sure I have a quick idea on this one? Buena Suerte!
-
At least, if you have good enough red data, this method gives you a way to raise the nebulas over the stars.
Vicent.
-
Actually, I do have new red data that I got last night. In fact, it was a great night and I got all new data. I can leave the new data for you on the ftp under a folder named New 7024 if you want it. It will be there.
-
Ok Jack, this would be great. One more thing... could you upload an R image with a bad S/N ratio? I need it for experimenting.
Now, I think I have the solution... This is the RGB with noise added to the R channel:
(http://datastore.astrofoto.es/Forums/PixInsight/RGBnuevo.jpg)
And this is the new HaRGB:
(http://datastore.astrofoto.es/Forums/PixInsight/HaRGBnuevo.jpg)
As you can see, now there isn't any noise transfer from the R to the H-alpha signal. :wink:
I hope this will work with other images... But right now it's a pretty complicated technique, so I think it may be better to wait for the module...
Thank you a lot, Jack.
Regards,
Vicent.
-
I uploaded a Mean R1, which is from the original data set and should have the noise?
-
Man, you are so lucky... :lol: You have an incredibly dark sky at CTIO, so your H-alpha data in the R1 image is almost as good as in the H-alpha image. :lol: Could you upload a combination of perhaps two 900-second red images? I want noise, I need noise. :lol:
BTW, your image is far better now, I think. I would do some star shaping, and perhaps it would be possible to raise the fainter parts of the nebula a bit.
I think I now have the method to combine H-alpha with RGB. The next step, I think, must be to test the method with H-alpha, O-III, S-II and RGB images together.
Thank you,
Vicent.
-
OK, I selected 3 red frames and uploaded the raw uncalibrated frames, BUT also uploaded a calibrated and registered mean of the 3. If you need more, let me know.
On the star shaping - I was actually thinking of starting a new thread on this and hope you will comment.
-
Ok, I'm going to give it a try with all the new data! If you want, I can upload the final HaRGB result with all the R data and the enhanced H-alpha emission, ready for processing.
Once this experiment is finished, I would propose another one... I want to try combining RGB data with more narrowband filters. Do you have any images with H-alpha, O-III and perhaps S-II?
Best regards,
Vicent.
-
Please do upload it so I can work on it also<G>. Unfortunately, I do not have SII or OIII data. We share the telescope at CTIO with some real astronomers (scientists) and they have cluttered up the filter wheel with things like g', u', B, U, etc.<G> So we only had room for LRGBHa.
-
Using the background extractions (ABE or one of the others), can you set it to automatically perform the operation you first gave us,
i.e. Ha - Med(Ha_background) + 0.05
I assume you only need to find a way to have ABE subtract the background and then add the 0.05. Or I guess just let ABE subtract the background and then use PixelMath to do Ha + 0.05?
-
Hi Jack,
I'm working with your new data. It's perfect, because the H-alpha image is a lot deeper than the R one. I think we're going to have a super-tool for narrowband imaging; thank you once again!
But there is a problem with the last image combination. You've done a simple average, so the images are full of silly pixels. :lol: Could you make a sigma-clipped image combination? You also have a better image integration algorithm in PixInsight; it's one of the last scripts Juan released, and it's explained in one of the latest tutorials (the one on the NA Nebula). If you have problems, I can combine the data for you and send back the four channels.
Vicent.
-
If you're going to apply the script from the wavelets tutorial, please use this improved version:
/**
 * A simple script to perform average image integration with asymmetric k-sigma
 * pixel rejection and generation of rejection map images.
 *
 * From the processing example:
 * The Region Around NGC 7000 and IC 5070: ATrousWaveletTransform and
 * HDRWaveletTransform in PixInsight, by J. Conejero.
 */
#include <pjsr/UndoFlag.jsh>

// Base identifier of the images to integrate
// The script will try to integrate images such as ngc7000_1, ngc7000_2, ...
#define BASE_ID ngc7000

// Identifier of a preview to integrate. Leave it empty to integrate the
// whole images.
#define PREVIEW_ID /*Preview01*/

// Number of images to integrate.
#define IMAGE_COUNT 3

// Kappa value for rejection of bright pixels.
#define KAPPA_BRT 0.12

// Kappa value for rejection of dark pixels.
#define KAPPA_DRK 0.25

/**
 * Script entry point.
 */
function main()
{
   // Gather images and working dimensions.
   var width = 0;
   var height = 0;
   var images = new Array;
   for ( var i = 0; i < IMAGE_COUNT; ++i )
   {
      var id = #BASE_ID + '_' + (i + 1).toString();
      if ( (#PREVIEW_ID).length )
         id += "->" + #PREVIEW_ID;
      var view = View.viewById( id );
      if ( view.isNull )
         throw new Error( "No such image: " + id );
      images[i] = view.image;
      if ( i == 0 )
      {
         width = images[i].width;
         height = images[i].height;
      }
      else
      {
         // All images must have the same dimensions
         if ( images[i].width != width || images[i].height != height )
            throw new Error( "Incompatible image dimensions: " + id );
      }
   }

   // Integrated image in 32-bit integer sample format.
   var intWindow = new ImageWindow( width, height, 1, 32,
                                    false, false, #BASE_ID+"_integration" );
   var intView = intWindow.mainView;
   intView.beginProcess( UndoFlag_NoSwapFile );
#define integration intView.image

   // Pixel rejection map image - bright pixels.
   var brtMapWindow = new ImageWindow( width, height, 1, 8,
                                       false, false, #BASE_ID+"_rejection_map_bright" );
   var brtMapView = brtMapWindow.mainView;
   brtMapView.beginProcess( UndoFlag_NoSwapFile );
#define brtMap brtMapView.image

   // Pixel rejection map image - dark pixels.
   var drkMapWindow = new ImageWindow( width, height, 1, 8,
                                       false, false, #BASE_ID+"_rejection_map_dark" );
   var drkMapView = drkMapWindow.mainView;
   drkMapView.beginProcess( UndoFlag_NoSwapFile );
#define drkMap drkMapView.image

   // Enable and initialize status monitoring.
   integration.statusEnabled = true;
   integration.initializeStatus( "Image integration", width*height );

   // Allow our users to abort the operation.
   console.abortEnabled = true;

   try
   {
      // For each row
      for ( var y = 0; y < height; ++y )
      {
         // For each column
         for ( var x = 0; x < width; ++x )
         {
            // Gather the stack of source pixels for the current coordinates.
            var srcPixels = new Array;
            for ( var i = 0; i < IMAGE_COUNT; ++i )
               srcPixels[i] = images[i].sample( x, y );

            // Calculate the median of the source stack. This is an excellent
            // estimate of the central peak's position on the histogram.
            var m0 = Math.median( srcPixels );

            // Perform pixel rejection
            if ( m0 != 0 )
            {
               // Prepare to gather unclipped pixels.
               var pixels = new Array;
               var brtCount = 0;
               var drkCount = 0;

               // For each source pixel
               for ( var i = 0; i < IMAGE_COUNT; ++i )
               {
                  // Relative distance of this pixel to the central peak
                  var d = (srcPixels[i] - m0)/m0;

                  // Reject or accept this pixel
                  if ( d < 0 )
                  {
                     // This is a dark pixel
                     if ( d > -KAPPA_DRK )
                        pixels.push( srcPixels[i] );
                     else
                        ++drkCount;
                  }
                  else
                  {
                     // This is a bright pixel
                     if ( d < KAPPA_BRT )
                        pixels.push( srcPixels[i] );
                     else
                        ++brtCount;
                  }
               }

               // Update rejection maps.
               brtMap.setSample( brtCount/IMAGE_COUNT, x, y );
               drkMap.setSample( drkCount/IMAGE_COUNT, x, y );

               // Integrate the set of surviving (unclipped) pixels, or use
               // the median of all source pixels if all pixels were clipped.
               integration.setSample(
                  pixels.length ? Math.avg( pixels ) : m0, x, y );
            }
            else
            {
               // In case we have an all-zeros pixel stack
               brtMap.setSample( 1, x, y );
               drkMap.setSample( 1, x, y );
               integration.setSample( 0, x, y );
            }
         }

         // Invoke the garbage collector after processing a whole row of pixels.
         // A good idea if we are integrating big images.
         gc();

         // Update the status monitor.
         integration.advanceStatus( width );
      }

      // Done with target views.
      drkMapView.endProcess();
      brtMapView.endProcess();
      intView.endProcess();

      // Show them.
      intWindow.show();
      brtMapWindow.show();
      drkMapWindow.show();
   }
   catch ( x )
   {
      // Kill them.
      drkMapView.cancelProcess();
      brtMapView.cancelProcess();
      intView.cancelProcess();
      delete intWindow;
      delete brtMapWindow;
      delete drkMapWindow;
      throw x;
   }
}

main();
This version fixes a small problem that the original script had with an even number of integrated images. I still have to update the script on the tutorial, too...
-
A mean that has been sigma clipped of the top 2% of pixels is in the jack.harvey folder. It is a mean of the same 3 frames.
-
I usually use CCDStack, in that I have used it for a few years and it works quickly for me because of my familiarity. It quickly takes me from calibration to registration, normalization, data rejection and then combine. I use it only because I am used to it.
I do not know much about running scripts, so I will have to figure out what to do with the script Juan has posted here and then how to run it.
-
Hi Jack,
Of course this little script is not intended to replace a full-fledged (and excellent) preprocessing application such as CCDStack!
This script is useful to combine a small number of images, say no more than eight or ten. It has a nice feature: it generates two rejection map images. These maps are nonzero for pixels that have been rejected by the k-sigma clipping procedure. Two maps are generated: one for bright clipped pixels and another for dark clipped pixels. These maps are useful because they allow us to evaluate how good the clipping parameters are.
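In terms of the script above: for each pixel stack the median m0 is computed, and a source value p is rejected as bright when (p - m0)/m0 >= KAPPA_BRT, and as dark when (m0 - p)/m0 >= KAPPA_DRK; the surviving values are then averaged.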
In the original tutorial (http://pixinsight.com/examples/wavelets/NGC7000/en.html) we used the rejection maps to increase the signal-to-noise ratio of the combined image. In this way we achieved a combined image that has all the strong points of both combination methods: the good pixel rejection of median combination and the high SNR of mean combination. The results of this method are usually better than the results of most sigma-clipping implementations, but only for small sets of images.
Scripts are extremely powerful in PixInsight. To run this script, do the following:
- Copy the source code.
- Open the Script Editor window in PI
- Select New > JavaScript Source File
- Paste the source code.
- Open your images. They must be registered and have all the same dimensions. Change their identifiers so they have a common prefix and a running postfix, starting from "_1". For example: NGC1234_1, NGC1234_2, NGC1234_3, ... and so on.
- In the script, change the value of BASE_ID from its original value ("ngc7000") to your prefix ("NGC1234" in the example above).
- Set IMAGE_COUNT to the number of images. Must be larger than 3.
- The KAPPA_BRT and KAPPA_DRK values are the pixel rejection points for bright and dark pixels, respectively. They must be fine-tuned by trial and error. A lower kappa means more rejected pixels. You must find the largest values able to clip out all hot/cold pixels, gamma rays, plane trails, etc.
- To run the script, select Execute > Compile & Run from the Script Editor (or press F9). When you're asked to save the script, use any location and name you like (e.g. "kappa-sigma" is a good choice).
Have fun! :) If you want to speed up the script to find KAPPA_BRT and KAPPA_DRK, you can use a small preview defined in all images (warning: it must have the same position and dimensions in all images). To use this option, change this line in the script:
#define PREVIEW_ID Preview01
assuming that the preview's id is Preview01.
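So, following the hypothetical NGC1234 names above, with five registered images and a preview named Preview01, the defines near the top of the script would read:

#define BASE_ID NGC1234
#define PREVIEW_ID Preview01
#define IMAGE_COUNT 5
#define KAPPA_BRT 0.12
#define KAPPA_DRK 0.25

(The two kappa values here are just the script's defaults; adjust them as described above.)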
It seems tricky and complex, but believe me, once you're accustomed to it, it's quite easy to use.
-
Thanks for taking time out to educate me!
-
My pleasure Jack
David Serrano wrote another kappa-sigma integration script, including a graphical user interface:
http://pixinsight.com/forum/viewtopic.php?p=920#920
David? :D
-
Hi Jack,
could you upload the H-alpha image combined with sigma clipping??
Thanks,
Vicent.
-
Go to jack.harvey NEW 2074 or to NGC 2074 for the Ha work; both of these master frames were sigma clipped prior to the median combine.
-
Hello all,
well, almost all the work is done. Now I'm going to write the article describing the method. These are the results:
- The RGB image, with histogram adjustment, HDRWT (6 layers, 1 iteration and the luminance mask option active), and a mild color saturation boost:
(http://datastore.astrofoto.es/Forums/PixInsight/NGC2074_RGB.jpg)
- The HaRGB image with a x12 gain in H-alpha. The processing steps are the same as in the above image:
(http://datastore.astrofoto.es/Forums/PixInsight/NGC2074_HaRGB.jpg)
Jack, I've uploaded the combined FITS file to your directory for your work. Give me some days to write the article, ok?
After this article, I will investigate applying this method to more narrowband filters... I hope this will work too!
Best regards,
Vicent.
-
Good result, and I am looking forward to the tutorial and any process tools that come from this. I really like the color of the Ha image. I used that same technique with a redo of another image, and extracting the Lab a and b and inserting the Lab a and b from the regular RGB is a cool trick!
-
Good result, and I am looking forward to the tutorial and any process tools that come from this. I really like the color of the Ha image. I used that same technique with a redo of another image, and extracting the Lab a and b and inserting the Lab a and b from the regular RGB is a cool trick!
Yes, it's a good trick. :lol: But it will contaminate the H-alpha signal with noise from the R image, so you must have really good R data to do this.
The article will have two parts. The first will have a bit of theory and will explain the basic principles of the algorithm, and the second will cover the practical application of the method. By tomorrow I will have written 50% of the first part.
Regards,
Vicent.