Author Topic: Yet another HaRGB question  (Read 13368 times)

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: Yet another HaRGB question
« Reply #15 on: 2010 July 08 15:03:29 »
your formula assumes that broadband R intensity is evenly distributed across the entire range, correct?

Yes, it's the only thing you can do. You must assume that you will have some residuals. But hey, judging from the images above, it seems to work.  :P

Once you have estimated the actual Ha signal you're making the image more red in those areas that have Ha.

Yes, but think of it for what it is: I'm multiplying the H-alpha emission intensity. I'm not doing a curves adjustment with the cleaned H-alpha as a mask, or anything like that. I simply add the clean H-alpha image to the R one while they are linear.
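
For illustration only, a minimal numpy sketch of that addition step, assuming both frames are registered, background-subtracted and still linear. The function name and the weight k are my own placeholders, not anything specified in this thread:

```python
import numpy as np

def add_ha_to_red(r_linear: np.ndarray, ha_clean_linear: np.ndarray,
                  k: float = 1.0) -> np.ndarray:
    """Add the continuum-subtracted H-alpha signal into the linear R channel."""
    enhanced = r_linear + k * ha_clean_linear
    # Keep the result inside [0, 1] so the later non-linear stretch behaves.
    return np.clip(enhanced, 0.0, 1.0)
```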

It sounds like Chris is trying to do the same thing but not doing a very good job explaining it. As far as I can tell there is no normalization for the R subtraction based on pass bands of R and Ha filters. Perhaps it was in the talk but this is so essential that it should have been in the slides as well.

Not at all. My formula is not Ha_clean = Ha - R*k, because the R image has all the H-alpha emission embedded. Thus a simple subtraction doesn't solve the problem completely.
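
To make that point concrete (this is my own toy reconstruction, not the formula Vicent actually uses, which he doesn't give here): if we assume the linear R frame is continuum plus the same H-alpha emission, and the narrowband frame leaks a fraction q of that continuum (roughly the ratio of the filter bandwidths), then the embedded H-alpha can be solved for instead of just subtracting R*k:

```python
import numpy as np

def estimate_ha_clean(nb: np.ndarray, r: np.ndarray, q: float) -> np.ndarray:
    """Estimate the pure H-alpha emission from linear NB and R frames.

    Assumed toy model (illustrative only):
        r  = continuum + ha          # broadband R contains the H-alpha too
        nb = ha + q * continuum      # NB leaks a fraction q of the continuum
    Solving the two equations gives ha = (nb - q * r) / (1 - q).
    """
    ha = (nb - q * r) / (1.0 - q)
    return np.clip(ha, 0.0, None)    # negative pixels are residuals/noise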

OTOH, I really suspect that Chris is doing the whole process with non-linear images. After photographing M82, I can tell you that the images are not linear. I think he's applying a different histogram transformation to the R and H-alpha images. In other words, he is giving the same brightness to the galaxy body in both images, and then subtracting the R from the H-alpha image (at least, that's what my eyes are telling me after looking at the slides). This is *very* risky, because you can inadequately create or modify emission structures when you subtract the R image. This non-linear processing would disagree with your equation:

Ha - (Ha + other red data) = - other red data

because the images are not linear. So you can make:

Ha - (Ha + other red data) = Ha_cleaned_image
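
A tiny numeric illustration of why the linearity matters here (made-up values, my own example, not from the thread): with linear data the subtraction leaves exactly "- other red data", but once the two frames receive different non-linear stretches that identity breaks down:

```python
# Made-up linear pixel values: pure emission plus "other red data" (continuum).
ha, other = 0.10, 0.05
r = ha + other                      # broadband R holds both components

# Linear case: the identity above holds exactly.
print(ha - r)                       # -0.05, i.e. minus the other red data

# Non-linear case: give each frame a different stretch (different gammas),
# as the slides' images appear to have received.
print(ha ** 0.4 - r ** 0.6)         # ~0.08, no longer a clean separation
```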


Best regards,
Vicent.

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: Yet another HaRGB question
« Reply #16 on: 2010 July 09 04:55:46 »
Hi again,

there's another problem with Chris' technique. He is putting some of the H-alpha data in the G and B channels to emulate H-beta and O-III emission. Actually, this makes no sense at all. O-III emission has nothing to do with H-alpha emission: it can have a completely different distribution and structures. And, furthermore, H-beta also has nothing to do with the H-alpha emission. See this example of the Owl Nebula we did last year at CAHA:

[Image: H-alpha and H-beta images of the Owl Nebula]
As you can see, the H-beta and H-alpha images are completely different. H-beta is a higher-energy transition than H-alpha, so the H-beta emission tends to be in the inner part of the nebula, where the UV radiation from the central star is stronger. By just scaling the H-alpha image by 0.2x to "emulate" the H-beta emission, we would be completely faking the image.

IMHO, this cannot be done. If you have H-alpha data, show just what you have. Don't make up things that are supposed to be there.


Regards,
Vicent.

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Re: Yet another HaRGB question
« Reply #17 on: 2010 July 09 07:07:40 »
I totally agree, Vicent.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: Yet another HaRGB question
« Reply #18 on: 2010 July 09 08:31:19 »
You are correct, Chris is using non-linear images that are stretched.

Max

Offline Ioannis Ioannou

  • PixInsight Addict
  • ***
  • Posts: 202
Re: Yet another HaRGB question
« Reply #19 on: 2010 July 10 19:50:09 »
Whoops, I just got back from a trip and found that my question has triggered a long thread!
I think Vicent clarified things enough so that even a beginner like me can understand.

What I would like to comment is why I tried this:

I also agree that, if you can only get reasonably 'clean' data through a single (Ha) NB filter, then why not just present that as a mono image? Why try and incorporate poorer-quality RGB data into the blend, and then have to 'fight' to build a 'nice' image afterwards?

Because mono Ha photos tend to have a faded-out and poor continuum. The "cheating" I was talking about is: take the structure with Ha, then add the continuum WITH color from many, many RGB subs, each taken at the limit of light pollution. You are not trying to get the nebula with RGB, just the stars and their colors. A (cheap, I must admit) trick to make something out of a light-polluted site; indeed an 'aesthetic' image.

Even three-filter NB imaging is sometimes not enough from where my house is: O-III (at least at 8nm bandwidth) is plagued by light pollution, while S-II most of the time is too faint. It all comes down to what you can shoot under these conditions while waiting for the next chance to visit a dark site: emission nebulae (red subjects) that you "know" are supposed to be red. As I said, just cheating, nothing more. OK, I can do mono Ha (I did) and I can try 3-color NB (I'm new at this), but "color" images just "look" better  ;)

Clear Skies
John (Ioannis)

FSQ106N+Robofocus+QHY-22+SX USB wheel+Baader filters
SX OAG+DSI Pro guiding a NEQ6
PI for the rest :)