Hello,
I'm currently trying to do some DSLR processing using a synthetic L created from ImageIntegration.
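(For context, here is roughly what I mean by a synthetic L, as a sketch. The function name and the inverse-variance weights are just illustrative assumptions; ImageIntegration uses its own noise-evaluation weighting, this is only the general idea of averaging the channels by noise.)

```python
import numpy as np

def synthetic_luminance(r, g, b):
    """Illustrative synthetic L: a noise-weighted average of the
    R, G, B masters, with inverse-variance weights as a crude
    stand-in for ImageIntegration's noise-based weighting."""
    channels = [r, g, b]
    weights = np.array([1.0 / np.var(c) for c in channels])
    weights /= weights.sum()          # normalize so weights sum to 1
    return sum(w * c for w, c in zip(weights, channels))
```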
But I'm a little bit lost about what to do after deconvolution and linear noise reduction on the Lsyn data.
When should I combine my RGB data back with my Lsyn data?
Initially I thought I would do some non-linear processing (to bring out faint structures, plus HDR/local contrast) on my Lsyn data before the ChannelCombination, but then I realized that if I did that, I would no longer be able to apply RepairedHSV before a MaskedStretch.
Or should I do a ChannelCombination with my linear Lsyn (after deconvolution and noise reduction, for instance), then apply RepairedHSV/MaskedStretch to bootstrap the non-linear processing of the RGB data, while continuing the non-linear processing of my Lsyn in parallel (and later do another ChannelCombination once both the RGB and Lsyn non-linear processing are done)?
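(In case it helps make the question concrete, this is what I understand the recombination step to amount to, as a sketch. The function name is made up, and the real tool works in a proper color space rather than with this crude mean-as-luminance proxy; the point is just that the new L is transferred onto the RGB while the chrominance ratios are kept.)

```python
import numpy as np

def replace_luminance(rgb, l_new, eps=1e-6):
    """Illustrative L/RGB recombination: scale each channel by the
    ratio of new to old luminance, preserving channel ratios.
    Uses the per-pixel channel mean as a crude luminance proxy."""
    l_old = rgb.mean(axis=-1, keepdims=True)
    return rgb * (l_new[..., None] / (l_old + eps))
```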
In other words, I'm quite confused about how long I should keep my Lsyn and RGB processing separate, and at what point I should really combine them back. Please help? :-)
PS: I initially went down the Lsyn road because it seemed a nice way to minimize noise, and also because I wanted to bring out some faint details and it seemed to make more sense to do that on the L. Correct me if I'm on the wrong path.