Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Carlos Milovic

1
Gallery / Re: 2019 Solar eclipse from Punta Colorada, Chile
« on: 2019 July 11 11:36:34 »
Wouter,
Make sure to enter the coordinates of the center of the moon in the X-center and Y-center parameters. Play with the radial and angular increments to get the desired result (in terms of the scales affected by the filter... I used zero for the radial increment, and 0.5 or 1 for the angular one). Then modulate it with the amount parameter. The deringing algorithm is pretty naive, but it helps to prevent dark artifacts.
To protect the moon, just extract the luminance and use it as a mask. You may fine-tune it using curves, etc. :)
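In case you want to see what the filter actually computes, here is a minimal numpy sketch of one common form of the Larson-Sekanina rotational gradient. This is not PixInsight's implementation; the amount modulation and the deringing step are omitted, and the function name is just for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def larson_sekanina(img, xc, yc, dr=0.0, dtheta=0.5):
    """Rotational gradient filter around (xc, yc).

    dr     : radial increment in pixels (I used zero)
    dtheta : angular increment in degrees (I used 0.5 or 1)
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    r = np.hypot(x - xc, y - yc)
    theta = np.arctan2(y - yc, x - xc)
    dt = np.deg2rad(dtheta)

    def sample(rr, tt):
        # Bilinear sampling of the image at polar offsets (rr, tt)
        xs = xc + rr * np.cos(tt)
        ys = yc + rr * np.sin(tt)
        return map_coordinates(img, [ys, xs], order=1, mode='nearest')

    # 2*I(r, t) - I(r - dr, t - dt) - I(r - dr, t + dt)
    return 2.0 * img - sample(r - dr, theta - dt) - sample(r - dr, theta + dt)
```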


Rob,
I think that a gradient inside the moon will decrease those bright artifacts, because you will be transforming a sharp edge into a steep but continuous gradient. Worth a try. ;)

2
Gallery / Re: 2019 Solar eclipse from Punta Colorada, Chile
« on: 2019 July 10 14:32:38 »
Hey Wouter, you got a very nice result there, especially for the moon's surface. You may want to try the Larson-Sekanina filter to bring out a bit more of the detail in the corona.


Rob, there are also a few other problems I didn't mention. For example, I have a very noticeable ringing-like artifact on the surface of the moon. I believe it might be an interference pattern coming from the actual PSF given by the limited aperture of the lens. That is one of the reasons I kept the moon so dark. :)

Yes, the scale separation stuff is very important (and powerful!). You may take a look at a paper we wrote (https://doi.org/10.1179/1743131X15Y.0000000028) where we compared a few HDR alternatives; those based on scale separation performed very well. And we have been doing starless+stars decompositions for processing for a long time now, with great success.
Druckmüller's results are amazing. I followed some pages and found a paper where one of his methods is described. I will try to read it soon, but I'm afraid I don't have much time to implement something along these lines.

A trick I found while processing the 2010 eclipse data was to replace the moon with a "bright extrapolation" of the corona: I used DBE to extrapolate the brightness profile, and replaced the moon with it. This way HDR algorithms work much better, without external ringing or other artifacts. I haven't tried this trick on the new data, though. Now I'm a bit overloaded again with work at the university, so I will disappear soon. ;)
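For the curious, here is a crude sketch of the idea. It is heavily simplified: DBE fits a 2-D surface from user-placed samples, while this toy version just extrapolates a 1-D radial profile inward; xc, yc and r_moon are hypothetical parameters, and a float image is assumed:

```python
import numpy as np

def fill_moon(img, xc, yc, r_moon, width=50.0):
    """Replace the dark lunar disk with a smooth radial extrapolation
    of the surrounding corona brightness."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - xc, y - yc)

    # Median corona brightness in thin annuli just outside the moon
    radii = np.arange(r_moon, r_moon + width, 2.0)
    profile = [np.median(img[(r >= a) & (r < a + 2.0)]) for a in radii]

    # Low-order polynomial fit, extrapolated inward over the disk
    coeffs = np.polyfit(radii, profile, deg=2)
    out = img.copy()
    inside = r < r_moon
    out[inside] = np.polyval(coeffs, r[inside])
    return out
```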

3
Gallery / Re: 2019 Solar eclipse from Punta Colorada, Chile
« on: 2019 July 10 07:54:04 »
Wouter,

I don't mind. ;) Yes, everything was done in PixInsight.
For this one, I followed an unorthodox pipeline. Raw files were loaded with the in-camera white balance and no black point correction. I subtracted the bias from the frames, and then rescaled each channel by its maximum value (calculated on the longest frame; the same factors were used for all frames). To avoid SNR issues, I applied a mild TGVDenoise process to the linear data. Then the frames were merged manually with PixelMath (I was getting some artifacts with the HDRComposition tool). The equation I used was: $T*0.17*(1-$T)+$T*im2. The 0.17 factor comes from the exposure-time ratio between two consecutive frames (I used more decimals). I worked on the longest exposure, converted to 64-bit floating point to avoid losing any data to rounding. The im2 image was updated each time, to add the consecutively shorter frames one at a time. At the end, I divided by the scaling factors to recover the white balance.
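In case it helps to see the arithmetic, here is a small numpy transcription of that PixelMath step. It is only a sketch: the synthetic frames, the merge_pair name, and the fixed 0.17 ratio are placeholders for illustration:

```python
import numpy as np

def merge_pair(long_f, short_f, ratio=0.17):
    """One step of the manual merge: $T*0.17*(1-$T) + $T*im2.

    long_f  : running HDR result ($T), normalized to [0,1]
    short_f : the next shorter exposure (im2)
    ratio   : exposure-time ratio between consecutive frames
    """
    t = np.clip(long_f, 0.0, 1.0)
    # Dim pixels (t small): keep the long frame, rescaled to the shorter
    # frame's intensity scale. Near-saturated pixels (t close to 1): the
    # image itself acts as the mask and blends in the shorter frame.
    return ratio * t * (1.0 - t) + t * short_f

# Toy usage: three synthetic frames, longest exposure first.
rng = np.random.default_rng(0)
frames = [np.clip(rng.random((64, 64)) * s, 0.0, 1.0)
          for s in (1.0, 0.17, 0.17 ** 2)]
hdr = frames[0].astype(np.float64)   # 64-bit float working image
for shorter in frames[1:]:
    hdr = merge_pair(hdr, shorter)
```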
Regarding the alignment of the frames, I didn't do anything. :) Please see my reply to Rob below for more details.

For the HDR compression, I also used an unorthodox method. ;) I combined the results of a scale separation approach and of the multiscale gradient domain compression tool I wrote a few years ago (the latter with a much lower weight).
The scale separation approach follows the same rationale as the homomorphic filter. In general, images are composed of illumination and reflection components, which combine multiplicatively. By taking the logarithm they become additive terms, easier to isolate. Whereas the homomorphic filter models illumination as the low-frequency components, a multiscale approach uses spatial filters. Instead of wavelets, I used ACDNR to produce a blurred version, with a protected moon-corona boundary. By taking the difference with the log-image I obtained both the large-scale (the blurred version) and small-scale components. I processed each separately, to compress the range of values in the corona/sky background and to enhance the details (flares, corona, etc.). Here wavelets and the Larson-Sekanina filter played a major part.
After recombining the large- and small-scale components, I returned to the "linear" range by exponentiating, and then used histogram and curves transforms to fine-tune the appearance. I also increased the color saturation for the solar flares only. Finally, since I got a better result for the appearance of the moon with the gradient domain compression tool, I used a mask to give that result a stronger contribution in the final image.
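If anyone wants to experiment, here is a minimal numpy/scipy sketch of that scale separation scheme. It is not exactly what I did: a plain Gaussian blur stands in for ACDNR (no edge protection), and the compress/boost factors stand in for the separate processing of each component:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

EPS = 1e-6

def split_scales(img, sigma=30.0):
    """Homomorphic-style separation: multiplicative components become
    additive in log space, then a spatial blur isolates the large scales."""
    log_img = np.log(img + EPS)
    large = gaussian_filter(log_img, sigma)   # illumination / large scales
    small = log_img - large                   # detail / small scales
    return large, small

def recombine(large, small, compress=0.5, boost=1.5):
    """Compress the large-scale range, boost the detail, and
    return to the linear range by exponentiating."""
    return np.exp(compress * large + boost * small) - EPS
```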


Rob,
There were two dim stars. Unfortunately, they were also very soft, and not visible without heavy stretching (and only in the longest exposure). I couldn't use them for PSF estimation either, as the PSF also differed between frames.
If I had been merging frames from different sets, I would have used them for alignment, but that was not the case here. Thanks to a decent polar alignment, I had no need to align these particular frames. Due to the wind, and maybe the periodic error, I had some movement in other sets; FFTRegistration did a fair job with those, although the moon moves a little bit. The Canon 80D can shoot 7 bracketed frames automatically, so the movement of the moon is minimized a lot.

I was getting some posterization effects in my earliest attempts. I figured that the reason was a slight mismatch between the linear factors calculated by HDRComposition and the theoretical ones, plus the fact that it uses a monochrome mask. To minimize these problems, I did it manually, as described to Wouter. Instead of a strong mask, I preferred to use the same data to mask the incorporation of the shorter frames. Since they had already been denoised, this did not degrade the data in a significant way.
To deal with the "pink stars" effect I introduced the scaling factors into the pipeline. But the problem was not only that: the saturated region varied in size between the channels, and a single monochrome mask cannot do the job there.



Thank you both for your compliments!



4
Gallery / 2019 Solar eclipse from Punta Colorada, Chile
« on: 2019 July 09 15:54:43 »
Dear all,

This is the final version of the 2019 eclipse with my data: an HDR composition of 7 bracketed frames (1/8000 s to 4 s) with a Sigma 150-600mm lens at 400mm and a Canon 80D camera, tracking with a CG5 mount. Hope you enjoy it!

5
Using the mask is not the same as removing the object as if it weren't there. Let me explain further: if you have a very strong light source (e.g. a star) and no edge protection, it will be blurred and the surroundings will end up with larger intensities. If you use a mask to protect the star, you'll blur the image and then put the old star data back, so you'll still end up with the same increase in the brightness of the background.
So, what you need to do is fine-tune the edge protection, to avoid such artifacts. Using the mask as Local support would also be beneficial, since it acts internally to avoid the data "leakage". It is different from a standard mask.
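A tiny numpy illustration of the point, with hypothetical toy data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.zeros((51, 51))
img[25, 25] = 1.0                 # a bright "star" on a dark background
star_mask = img > 0.5

blurred = gaussian_filter(img, sigma=3.0)

# Standard mask: blur everything, then put the old star data back.
masked = np.where(star_mask, img, blurred)

# The background near the star is still brightened by the star's light:
print(masked[25, 20], "vs. original", img[25, 20])
```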

6
General / Re: Additive Stacking?
« on: 2018 February 26 10:15:08 »
Actually, this is not true. When you add two images, the noise also increases. Always. When two Gaussian-distributed signals are added, their variances add as well. Since the signal is doubled, the SNR becomes 2*signal / sqrt(2*var), and here comes the famous sqrt(2) improvement. Averaging and adding are equivalent in SNR terms, because both the signal and the noise (standard deviation) are scaled by the same factor.
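A quick numerical check of the claim, as a toy simulation with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma, n = 100.0, 10.0, 1_000_000
a = signal + rng.normal(0.0, sigma, n)   # two frames of the same signal
b = signal + rng.normal(0.0, sigma, n)   # with independent Gaussian noise

snr_single = signal / a.std()
snr_sum = (2.0 * signal) / (a + b).std()    # variances add on summing
snr_mean = signal / ((a + b) / 2.0).std()   # same ratio after rescaling

print(snr_sum / snr_single)    # ~ sqrt(2) = 1.414...
print(snr_mean / snr_single)   # identical: averaging == adding, SNR-wise
```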

7
Announcements / Re: New RAW Module - Ready for Testing
« on: 2018 February 09 10:40:53 »
My bad. I was sure I saw the option to save as dual pixel somewhere in the options. :(

8
Announcements / Re: New RAW Module - Ready for Testing
« on: 2018 February 07 09:45:53 »
Great!
Indeed, this demosaicing algorithm looks pretty good! :) I will try the new functionality.

I recently bought a Canon 80D (it is not modified, and will not be). It also has the DP sensor, and it intrigues me a lot how they use it to micro-adjust the focus. I will try to take a deeper look into this, now that we can actually read it. :D


BTW, I'm due to defend my thesis in mid-April. Right now I am finishing the writing of my 3rd paper, which will be the last chapter of my thesis. I hope to have more free time soon.


9
Announcements / Re: New RAW Module - Ready for Testing
« on: 2018 February 06 12:42:51 »
Hi Juan!

Does the new implementation support Canon's dual pixel raw frames? Have you seen that technology?

10
Hi David

I would say both. The algorithm is not designed for 1D data (although it would be feasible if we rewrote the equations with that in mind). Since discrete differences in both dimensions are involved, somewhere it crashed because it did not find data (hence the bug... it should check that the image is more than 1 px in both directions).

Could you replicate the data a couple of times, so the image is at least 3 pixels wide? Then you could extract the central line, which would be the desired result.
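Something like this (a sketch of the workaround, with placeholder data):

```python
import numpy as np

row = np.linspace(0.0, 1.0, 256)    # the original 1D data
img = np.tile(row, (3, 1))          # replicate it into a 3-pixel-tall image

# ... run the 2-D process on img here ...

result = img[1, :]                  # extract the central line afterwards
```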

11
General / Re: TGVInpaint for star removal?
« on: 2017 June 20 02:30:40 »
Hi guys,

I'm working on a replacement module that uses a different algorithm to solve the functional. In practice, this means that some of the parameters will change their behavior, while the whole algorithm will converge much faster to the solution (expect solutions 5 to 10 times faster). Also, the new module will have an option to use the classic TV regularization, which is also faster (approx. 3 times faster than TGV) and may help to find the initial parameters for TGV. TV will create piecewise constant images instead of piecewise smooth ones as in TGV... while this is usually not good, because of staircasing artifacts, the results may look sharper and the background flatter.

I have not tested an inpainting implementation with the new algorithm, but I can't see why it should not work.
drmikevt, inpainting is the name given to techniques that reconstruct missing data in images. You define a mask, and the algorithm completes the missing information using the prior knowledge embedded in it. In the case of TGV, that prior is that the image is piecewise smooth, so it will try to follow the image's gradients to fill the voids. Not the best solution for inpainting (it has no information about the actual texture), but it works well for relatively small areas, or when noise is low.
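To make the idea concrete, here is a toy inpainting sketch using simple harmonic (Laplace) diffusion instead of TGV. It is much cruder than the real module, but it shows how a smoothness prior fills the voids from the surrounding values:

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=2000):
    """Fill the masked pixels by repeatedly averaging their neighbors.

    mask : boolean array, True where data is missing.
    (Toy code: np.roll wraps at the borders, so keep voids away from edges.)
    """
    out = img.copy()
    out[mask] = out[~mask].mean()    # crude initialization
    for _ in range(n_iter):
        neighbors = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                     np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = neighbors[mask]  # update only the missing region
    return out
```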

12
Wish List / Re: Radon transform
« on: 2017 May 10 14:18:49 »
Well, it can be done without too much trouble (it needs a bit of coding).
The direct Radon transform is equivalent to the Hough transform (you may use the module I wrote, with a small enough step size for the angle and radius).

To calculate the inverse Radon transform, more work is needed. If you take the 1D Fourier transform along the radius axis (for each line that defines a different angle), the result equals a radial profile of the 2D Fourier transform of the image (the projection-slice theorem). You just have to map the 1D FFT of the Radon (Hough) transform into a Cartesian coordinate system and, finally, take the inverse FFT. This is quite fast, but some artifacts may arise from the mapping process.

Another way to solve this is to use filtered back-projection. This is fast enough and conceptually easy, but it is prone to artifacts and noise amplification.

Anyhow, the inverse Radon transform is an ill-posed problem; noise amplification and artifacts are to be expected. Iterative methods may yield much better results (for example, using TGV as a regularizer), but they are significantly slower.
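For reference, both the direct transform and a filtered back-projection are available in scikit-image, so you can get a feel for the artifacts without coding the Fourier mapping yourself (an assumption on my part: using skimage's built-ins rather than the Hough module mentioned above):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)                      # direct transform
fbp = iradon(sinogram, theta=theta, filter_name='ramp')   # filtered back-projection

print(np.abs(fbp - image).mean())   # residual error: the problem is ill-posed
```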


I would say that there are better ways to solve both of these problems than the Radon transform.

13
General / Re: optimized image subtraction
« on: 2017 May 10 08:07:25 »
It will multiply the signal and add a pedestal so that the RMSE between the two images is minimized.
If you have different PSFs due to changes in the atmospheric conditions, then there is no way to match them (other than degrading one image).
Also, due to Poisson statistics, I would say that an error map (difference) will still show greater variance inside/around the stars.
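The fit itself is just linear least squares; a minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def optimized_subtract(ref, target):
    """Find the scale a and pedestal b minimizing the RMSE of
    a*target + b - ref, then return the residual (error map)."""
    A = np.column_stack([target.ravel(), np.ones(target.size)])
    (a, b), *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
    return ref - (a * target + b)
```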

14
General / Re: optimized image subtraction
« on: 2017 May 09 11:56:32 »
Linear fit?
