In Harry's excellent video on using the HDR Transformation Tool, the example data is from a one-shot color camera. I'm working on my M95 image, taken with an ST10XME mono camera using a color filter wheel, and at this point the image is only an RGB image; the luminance data has yet to be applied. My gut tells me that I should not be using HDR until the luminance data is combined.
So far the master red, green, and blue images have been made, combined into an RGB image using MaxIm, and saved as an IEEE floating point image. I have a FITS plug-in for PS that can read 32-bit color FITS files, but I usually convert to 16 bit since PS has functions such as curves that only work at 16-bit or lower bit depths. My usual routine is to process the RGB data separately in Photoshop CS4 using curves, levels and sometimes saturation, then process the luminance data with levels, curves and selective high-pass sharpening, and finally combine the two layers into one, making final adjustments to the combined image.
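Just so it's clear what I mean by that first step, here is a rough sketch in Python with numpy and astropy (a stand-in for illustration only, not the actual MaxIm or PS operations, and the file names are placeholders): stack the three masters into a 32-bit float RGB cube, then rescale to 16-bit for the tools that can't handle 32-bit data.

```python
# Sketch only: stack master R/G/B frames into a 32-bit float RGB cube,
# then rescale to 16-bit integers for tools limited to lower bit depths.
# red.fit / green.fit / blue.fit are placeholder file names.
import numpy as np
from astropy.io import fits

channels = []
for name in ("red.fit", "green.fit", "blue.fit"):
    data = fits.getdata(name).astype(np.float32)   # master frame as 32-bit float
    channels.append(data)

rgb32 = np.stack(channels, axis=-1)                # shape (rows, cols, 3), float32

# Linear rescale to the full 16-bit range before handing off to 16-bit-only tools.
lo, hi = rgb32.min(), rgb32.max()
rgb16 = np.round((rgb32 - lo) / (hi - lo) * 65535.0).astype(np.uint16)

fits.writeto("rgb_float32.fits", rgb32, overwrite=True)
fits.writeto("rgb_uint16.fits", rgb16, overwrite=True)
```

The rescale is just a linear min/max stretch so that curves and levels have the full 16-bit range to work with.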
So in PI, I've got the raw 32-bit RGB image. Should I be combining the red, green, and blue 32-bit images in PI? Should the luminance be added at this point? My thinking is that tools such as HDR are probably better applied to the stronger luminance data rather than the weaker color data, to suppress image noise. I've been taught over the past 12 years that the color data is for colorization of the image and the details are all in the luminance, so detail processing is done on the luminance layer only. In fact, I've been using a Gaussian blur to dampen the color noise prior to adding the luminance layer. Now it seems that is about to change, or not? Should I be combining the LRGB in the first steps, and if so, does PI align the images when the color is binned 2x2 and the luminance is binned 1x1, or do I need to resize the RGB data first?
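To put the resize question in concrete terms, here is another rough sketch (again Python with numpy, scipy and astropy as a stand-in, not what PI actually does internally; file names are placeholders): upsample the 2x2-binned RGB by a factor of two so its pixel grid matches the unbinned luminance, with the optional Gaussian blur of the color data included.

```python
# Sketch only: match a 2x2-binned RGB frame to an unbinned (1x1) luminance grid,
# and optionally soften the color to suppress chrominance noise before any
# LRGB combination. File names are placeholders.
import numpy as np
from astropy.io import fits
from scipy.ndimage import zoom, gaussian_filter

rgb = fits.getdata("rgb_float32.fits").astype(np.float32)   # (rows, cols, 3), binned 2x2
lum = fits.getdata("luminance.fits").astype(np.float32)     # unbinned, twice the scale

# Upsample only the two spatial axes; leave the 3 color channels untouched.
rgb_up = zoom(rgb, (2, 2, 1), order=3)                      # cubic-spline interpolation

# Optional: Gaussian blur of the color data (the "dampen color noise" step),
# applied per spatial axis only.
rgb_smooth = gaussian_filter(rgb_up, sigma=(1.5, 1.5, 0))

print(rgb_up.shape, lum.shape)   # the two grids should now be on the same scale
```

That is the step I'm asking about: whether PI handles this scaling for me during alignment, or whether I should resample the RGB myself first.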
This will be a good start, but more questions will follow based on these answers.
Thanks,