Author Topic: Luminance data from imaging vs Luminance data from Pixinsight L extraction?  (Read 632 times)

Offline Rudy Pohl

  • Newcomer
  • Posts: 46
Hi there,

I am still fairly new to astrophotography and use a Skywatcher HEQ5 Pro mount and PHD2 auto-guiding. For the foreseeable future I will be limited to imaging with my Nikon DSLR camera, which has not been astro-modified. Even so, I have been able to produce some half-decent, fairly pleasing images. I am now looking for ways to enhance the overall appearance of these images, given that my data is not of the quality of the L, Ha, R, G, B data that could be shot with a CCD or CMOS astro-camera.

So I was wondering: could I extract the Luminance data from my DSLR RGB images in PixInsight using the Extract CIE L command, and then use that extracted L image as if I had created it by actually imaging my target with a Luminance filter in the field? I realize that it could not possibly come close to a real Luminance file, but could at least some improvement be achieved? If not, why not?

What are the main differences between these two types of Luminance files, one having been extracted from the RGB file and one having been imaged in the field?

I have searched high and low on the web for an answer, but so far have come up empty.

Thanks very much for your time,
Rudy



Offline pfile

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 4729
you can do this, though i think in the past juan has warned that L extraction from an RGB image is by default a "perceptual" L, meaning the channels have been weighted before summing into an L image to match the perception of the human eye. that is, green is over-weighted.

if i remember correctly, you need to set the RGBWS (RGB working space) luminance coefficients on the RGB image to 1,1,1 and set the gamma to 1.0 using the RGBWorkingSpace process before extracting L*. you should then undo the RGBWorkingSpace process on the RGB image and proceed from there (or just make a clone of the RGB, modify the working space on the clone, and extract L* from that.)
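to see what that weighting change does numerically, here's a rough numpy sketch (not PixInsight code; the "perceptual" coefficients below are the standard sRGB/Rec.709 ones, which are close to, but not exactly, PI's defaults):

```python
# toy sketch of luminance extraction outside PixInsight.
# coeffs (1,1,1) = the equal-weight L suggested above;
# (0.2126, 0.7152, 0.0722) = typical "perceptual" weighting (green dominates).
import numpy as np

def extract_L(rgb, coeffs=(1.0, 1.0, 1.0), gamma=1.0):
    """rgb: float array (H, W, 3) in [0, 1]; gamma=1.0 treats the data as linear."""
    w = np.asarray(coeffs, dtype=np.float64)
    w /= w.sum()                          # normalize so L stays in [0, 1]
    lin = rgb ** gamma                    # with gamma=1.0 this is a no-op
    return np.tensordot(lin, w, axes=([-1], [0]))

rgb = np.random.rand(4, 4, 3)             # stand-in for a debayered image
L_equal = extract_L(rgb)                                   # 1,1,1 weights
L_perceptual = extract_L(rgb, (0.2126, 0.7152, 0.0722))    # green over-weighted
print(np.abs(L_equal - L_perceptual).max())
```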

maybe others remember better. there might be a subtle difference between CIE L and L* here that i have forgotten.
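for reference, i think the subtle difference is this: CIE Y is the linear weighted sum of the channels, while CIE L* is a nonlinear "lightness" remapping of Y (the CIE 1976 definition). a quick sketch:

```python
# CIE 1976: L* as a nonlinear function of relative luminance Y.
def Lstar_from_Y(Y):
    """Y in [0, 1] -> L* in [0, 100]."""
    delta = 6.0 / 29.0
    f = Y ** (1.0 / 3.0) if Y > delta ** 3 else Y / (3.0 * delta ** 2) + 4.0 / 29.0
    return 116.0 * f - 16.0

for Y in (0.05, 0.18, 0.5, 1.0):
    print(Y, round(Lstar_from_Y(Y), 1))
# 18% grey lands near L* = 50, so the mapping is far from linear.
```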

rob

Offline sharpie78

  • Newcomer
  • Posts: 30
Hi Rudy,

I would go ahead and extract the Luminance if I were you. I use an OSC ZWO CMOS camera and have noticed an improvement in my images since I started doing it, although as rob says... it is "perceptual". The data is not a true Luminance image, as it has been processed to some degree and lacks the full resolution and detail of a mono Luminance image.

The main differences between an extracted Luminance image and a Luminance image captured with a mono chip are the processing involved, the filtering of certain wavelengths of light, and the Bayer matrix on our OSC chips.

All imaging chips are mono, but OSC chips have a grid of tiny colour filters, one over each pixel, arranged in 2x2 squares (the Bayer matrix): one red, two green and one blue. The squares can be laid out RGGB, GRBG etc., so make sure you have the correct pattern in your settings when capturing/processing; if you don't, the image will come out with incorrect colours.

In daytime photography with your DSLR, the camera does the colour processing automatically and usually outputs a nice, colour-accurate JPEG. Essentially the camera/software does some clever stuff and determines what colour each pixel should be from the amount of signal the chip received through the tiny filters (a basic explanation). If you look at the raw, unprocessed files we capture for astro images, they are B/W rather than colour, because they have not been debayered (assigned colour) yet.
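To picture what the chip actually records, here is a toy numpy sketch (purely illustrative, not any camera's actual pipeline): each pixel keeps only one channel of the scene, so the raw frame is a single-channel mosaic until it is debayered.

```python
# toy RGGB sensor: the raw frame keeps one colour sample per pixel.
import numpy as np

scene = np.random.rand(4, 4, 3)          # "true" colour scene (H, W, 3)

raw = np.zeros((4, 4))                   # single-channel raw frame
raw[0::2, 0::2] = scene[0::2, 0::2, 0]   # R at even rows, even cols
raw[0::2, 1::2] = scene[0::2, 1::2, 1]   # G at even rows, odd cols
raw[1::2, 0::2] = scene[1::2, 0::2, 1]   # G at odd rows, even cols
raw[1::2, 1::2] = scene[1::2, 1::2, 2]   # B at odd rows, odd cols
# 'raw' looks like a mono image; debayering must estimate the two missing
# channels at every pixel. Assume the wrong layout (GRBG instead of RGGB)
# and the reconstructed colours come out wrong.
```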

A mono chip obviously does not have the Bayer matrix covering its pixels, which is why mono imagers have to use either RGB or narrowband filters to give their images colour. When they capture their Luminance images they collect the "true" Luminance signal, as they do not have any tiny filters impeding the light path to the imaging chip. That is why I said they get the full-resolution Luminance... someone once explained to me that a 20 Megapixel OSC camera is effectively a 5 Megapixel camera (each 2x2 block of four filtered pixels yields one full-colour sample: 20/4 = 5), thus reducing resolution. That's not a technical explanation, more a figurative way to understand it, btw. See the sketch below.
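That 20/4 = 5 intuition maps directly onto the "superpixel" style of debayering, sketched here in numpy (again just an illustration, not a specific program's algorithm): each 2x2 RGGB cell collapses into one colour pixel, halving each dimension and quartering the pixel count.

```python
# superpixel debayer: one colour pixel per 2x2 RGGB cell.
import numpy as np

def superpixel_debayer(raw):
    """raw: (H, W) RGGB mosaic, H and W even -> (H/2, W/2, 3) colour image."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average the two greens
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

raw = np.random.rand(8, 8)               # stand-in 8x8 RGGB frame
print(superpixel_debayer(raw).shape)     # (4, 4, 3): a quarter of the pixels
```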

Your DSLR also has a couple of other filters in front of the chip: a UV/IR-cut filter, which also attenuates much of the deep red end of the spectrum, and an anti-aliasing filter, which slightly softens the image. When you have a DSLR astro-modded, the UV/IR-cut filter is removed or replaced, allowing wavelengths to be detected that are undesirable in daytime photography but desirable in astrophotography, most notably (but not limited to) Ha.
With these filters in place your chip will not detect the blocked wavelengths, and the result is less detail. Hydrogen-alpha (around 656 nm) is one of the most common emission wavelengths in nebulae, so it is definitely advantageous for your camera to be able to detect its photons.

A mono chip will detect photons from ALL wavelengths of light (unless blocked with filters), which results in Luminance images captured with mono chips containing more signal and therefore more detail.

In an extracted Luminance image you are never going to get all the detail of a true mono Luminance image, as explained above. Using an extracted Luminance is still a good idea in my opinion, though. As I said, I have noticed an improvement in my images since I started doing it, and the only way to know if it will help yours is to try it and experiment. If you need a workflow for it, just let me know.


Jack