As promised, here are the processing details (sorry, I'll post them in English only).
- Frame Registration
I used 8 different shots for this image, each of them scanned 4 times (to reduce the noise from the scanner). So I ended up with 32 frames, each of them nearly 125 MB at 16 bits.
I used the first of them as the reference and aligned all the images with DynamicAlignment. Each procedure used nearly 40 pairs of stars. To speed up the task, after setting all the reference/target stars for the first pair of images, I created a new process icon with this DA instance. For the next pair of images (using the same reference, changing just the target) I imported the icon, so the same star pairs were created and I just had to quickly review them to make sure the stars had been selected properly.
- Integration
Prior to the integration itself, I reduced the number of images by merging all the frames that corresponded to the same exposure (the multiscans). This was done with the median function in the PixelMath process, saving the results as 32-bit files. Because of the number and size of the resulting frames (8 frames, each 250 MB), I went back to an old release of the PixInsight Standard Alpha, which included the ImageIntegration process (not yet included in the current release). There I used the sigma clipping algorithm to integrate the images.
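If it helps to see the arithmetic, here is a rough numpy sketch of those two combination steps (a per-pixel median of the multiscans, then a sigma clipped mean across the merged exposures). The kappa and iteration values are just illustrative, not the actual ImageIntegration settings:
[code]
import numpy as np

def median_merge(scans):
    # Per-pixel median of the repeated scans of a single exposure
    return np.median(np.stack(scans), axis=0)

def sigma_clipped_mean(frames, kappa=2.5, iterations=3):
    # Average the merged exposures, rejecting pixels farther than kappa sigma
    data = np.stack(frames).astype(np.float64)
    kept = data.copy()
    for _ in range(iterations):
        center = np.nanmean(kept, axis=0)
        sigma = np.nanstd(kept, axis=0)
        kept[np.abs(data - center) > kappa * sigma] = np.nan   # reject outliers
    return np.nanmean(kept, axis=0)
[/code]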
- Fixing Chromatic Aberration
Back in the current release, I extracted the RGB channels and aligned the Red and Blue ones using the Green as the reference. More than 200 samples were used. After importing the results, I cropped the image to the intersection of the previously stacked frames.
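DynamicAlignment corrects local distortions from the star samples; as a much simpler illustration of the same idea (a single global sub-pixel shift of the Red and Blue channels onto the Green), something like this would do, assuming scikit-image is available:
[code]
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_channel(reference, moving):
    # Estimate a global sub-pixel translation of 'moving' onto 'reference' and apply it
    offset, _, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
    return nd_shift(moving, offset)

# r, g, b are the extracted channels as 2D arrays:
# r_aligned = align_channel(g, r)
# b_aligned = align_channel(g, b)
[/code]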
- Calibration
The next step was to flat-field the image. Since I did not take calibration frames, and there are very few pure background sky zones, I used the DynamicBackgroundExtraction process. I put a few sample boxes in the lower right quadrant, all of them using axial symmetries (7 or 8 corners).
This background model was applied with a non-linear division, using the Divide process (linearization mode).
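Conceptually the correction just divides the image by the normalized background model; a rough sketch (the normalization by the model median is my assumption, not necessarily what Divide does internally):
[code]
import numpy as np

def divide_by_background(image, background):
    # Flatten the field by dividing out the normalized background model
    model = background / np.median(background)   # keep the overall brightness roughly unchanged
    return image / np.maximum(model, 1e-6)       # guard against division by zero
[/code]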
- Dynamic Range Boundaries and Stretching.
Since I trust a statistical approach to this problem, I used the AutomaticHistograms process to set the dynamic range boundaries. Clipping factors of 0.015% for the shadows and 0.002% for the highlights were used.
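Reading those clipping factors as the fraction of pixels clipped at each end (my interpretation), the black and white points come straight from quantiles of the pixel distribution, per channel:
[code]
import numpy as np

def clip_points(channel, shadows=0.015 / 100, highlights=0.002 / 100):
    # Black/white points that clip the given fractions of pixels at each end
    black = np.quantile(channel, shadows)
    white = np.quantile(channel, 1.0 - highlights)
    return black, white
[/code]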
Later, I adjusted the midtones balance manually in HistogramTransform to balance the colors and stretch the image. This was done by inspecting the RealTimePreview and reading samples in free background sky sectors.
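For reference, the midtones slider of a histogram transformation is commonly described by the midtones transfer function; for a midtones balance m and a pixel value x in [0, 1]:
[code]
def mtf(x, m):
    # Midtones transfer function: mtf(0, m) = 0, mtf(1, m) = 1, mtf(m, m) = 0.5
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
[/code]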
- Image brightness/contrast/saturation.
Done with CurvesTransform. First, I fine-tuned the balance of each channel, adjusting slightly with S-shaped curves. Then I adjusted the brightness and contrast with the CIE L channel. Finally, I increased the color saturation by adjusting the CIE c channel. All adjustments were previewed at the same time in the RTP window.
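In case "S-shaped" is unclear: an S-curve steepens the midtones (more contrast) while flattening toward both ends. A generic, purely illustrative form (not the exact curves I drew):
[code]
import numpy as np

def s_curve(x, strength=6.0):
    # Logistic S-curve on [0, 1]; 'strength' controls the midtone steepness
    y = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))
    return (y - lo) / (hi - lo)   # rescale so that 0 -> 0 and 1 -> 1
[/code]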
- Further non-linear stretching.
The PIP function in ExponentialTransform is a good candidate for increasing the brightness of faint nebulosities while keeping the highlight features essentially constant. Also, a small amount of blurring helps to prevent noise amplification. After that, new curves adjustments were applied, this time only to the CIE L and c channels.
- Noise Reduction
The first process on the list was GREYCStoration. Masking with an inverted luminance, this process does an excellent job of removing bright noise patterns in the background and evening out the "surfaces". The mask not only protects the stars from being "flattened" or obscured, it also protects the high signal-to-noise ratio regions.
After GREYC, I applied a conservative luminance smoothing with ACDNR. The chrominance noise reduction was more aggressive, since GREYC is usually less efficient there. No mask was used.
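The masking is just a per-pixel blend; a rough numpy sketch of pushing any denoiser through an inverted-luminance mask (the median filter is a stand-in, not GREYCStoration or ACDNR, and it assumes image and luminance share the same shape and a [0, 1] range):
[code]
import numpy as np
from scipy.ndimage import median_filter

def masked_denoise(image, luminance, denoiser=lambda x: median_filter(x, size=3)):
    # Bright (high signal-to-noise) regions keep the original data;
    # the dark background receives most of the smoothing.
    mask = 1.0 - np.clip(luminance, 0.0, 1.0)        # inverted luminance mask
    return mask * denoiser(image) + (1.0 - mask) * image
[/code]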
- Sharpening
Some of the faint features had a blurred appearance, so, masking again with the luminance, I applied the Richardson-Lucy deconvolution algorithm. The result was sharper stars: their sizes were not increased, as happens with other sharpening approaches, but their flux was concentrated in a narrower peak. Most of the medium/large stars were protected and showed no changes. This procedure also improved some dark features and enhanced some boundaries, but only minimally.
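For reference, the core Richardson-Lucy iteration is short; a minimal sketch without regularization (the real deconvolution tool exposes more controls, and in my case it was applied through the luminance mask, like the noise reduction):
[code]
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=20):
    # Plain Richardson-Lucy deconvolution (no regularization, assumed 2D data)
    estimate = np.full_like(observed, 0.5, dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
[/code]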
- Histogram and Curves... again
Before going on with further improvements: after the noise reduction, some zones at the ends of the dynamic range were left unused. So new histogram and curves adjustments were applied, following the same rules as before.
- Wavelet processing
The first step in wavelet processing is to separate the stars and high-frequency features from the large-scale components. There is an excellent tutorial by Vicent Peris that shows how to do that, and recently Carlos Sonnenstein published a slide presentation following a similar approach. My approach used morphological filters instead of wavelets to delete the stars, but it is basically the same (although I believe morphological prefiltering yields better results than the pure wavelets approach).
The small-scale image was rescaled and modified with curves. I let the stars occupy a wider dynamic range than the dark features, so later they will be thinner. Also, a mild minimum filter was applied to get slightly smaller stars.
The large scale image was modified with the HighDynamicRangeWaveletsTransform process, and later with curves, to enhance the nebulosities and get uniform transitions over the Milky Way.
I forgot to mention that I worked with RGB images instead of the extracted luminance.
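Just to make the scale separation concrete, here is a rough sketch of splitting small scales from large scales with an à trous (starlet) decomposition; my actual prefiltering used morphological filters, so this is only the plain-wavelets variant, with an assumed 4-layer split:
[code]
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def atrous_smooth(image, level):
    # One separable B3-spline smoothing pass with 2**level - 1 zeros inserted
    kernel = np.zeros(4 * 2**level + 1)
    kernel[::2**level] = B3
    out = convolve1d(image, kernel, axis=0, mode="reflect")
    return convolve1d(out, kernel, axis=1, mode="reflect")

def split_scales(image, layers=4):
    # Return (small_scales, large_scales): the first wavelet planes and the residual
    current = image.astype(np.float64)
    small = np.zeros_like(current)
    for level in range(layers):
        smoothed = atrous_smooth(current, level)
        small += current - smoothed          # wavelet plane at this scale
        current = smoothed
    return small, current                    # the residual holds the large-scale structure
[/code]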
Now comes the tricky part of my procedure. I had open at the same time the image from before all of this wavelet processing, the small-scale image, and the large-scale image. I selected the small scales as a mask for the original image, inverted (so I'm protecting the stars), and using PixelMath I applied the following expression: "small+1.1*large".
Why apply this procedure masking the original image with the small scales, instead of directly adding both images and rescaling? Many times the resulting stars get an opaque look, so to prevent that, I use the original image as "background" information and keep the brightest stars intact. But, because the stars are a bit smaller in the mask and in the operating image (the small-scale image), the result shows considerably smaller/narrower stars, with the same central peak. So, as a result, we get a star-shaping routine for free. Of course, the mask lets the large scales pass through it without interfering, so we end up with considerably enhanced large-scale features.
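In numbers, applying that PixelMath expression through an inverted mask is the same as a per-pixel blend; a sketch of the whole step, assuming all three images are normalized to [0, 1] and the usual mask convention (white = full effect, black = protected):
[code]
import numpy as np

def masked_combine(original, small, large, gain=1.1):
    # Apply 'small + gain*large' through an inverted small-scale mask:
    # where the inverted mask is dark (on the stars) the original pixels
    # are kept, everywhere else the recombined scales take over.
    mask = 1.0 - np.clip(small, 0.0, 1.0)
    combined = np.clip(small + gain * large, 0.0, 1.0)
    return mask * combined + (1.0 - mask) * original
[/code]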
- New Curves adjustments
As before, the curves need to be slightly adjusted to set the brightness, contrast and saturation.
- Further star shaping
Even though we already have narrower stars, the central profile of the brightest ones remained quite flat. So I had to modify the small-scale image. Using HistogramTransform, I deleted all the non-stellar features and most of the small stars (also the faintest ones). Then, with the Closing operator in MorphologicalFilter, with a circular 19px kernel, I deleted all stars smaller than that. Because the result is flat, circular stars, I had to delete the scales smaller than 8px with ATrousWavelets to obtain Gaussian-shaped stars in my mask.
Now, back on the original image, I used a small kernel (square 3x3 or circular 5x5) with the Minimum operator, 8 iterations and an amount of 10%. As a result, the boundaries of the stars become softer and darker, and a more Gaussian-shaped profile is obtained. A final touch to the Curves' c channel enhances the color saturation of the stars.
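As I understand the amount parameter, each iteration blends the minimum-filtered (eroded) result back into its input by that fraction; a rough sketch of the idea (my interpretation of the settings, not the exact implementation):
[code]
import numpy as np
from scipy.ndimage import grey_erosion

def soft_minimum(image, size=3, iterations=8, amount=0.10):
    # Iterated minimum (erosion) filter, blended by 'amount' at every pass.
    # Small amounts over several passes shrink stellar halos gently, giving
    # softer, more Gaussian-looking profiles than a single hard erosion.
    result = image.astype(np.float64)
    for _ in range(iterations):
        eroded = grey_erosion(result, size=(size, size))
        result = amount * eroded + (1.0 - amount) * result
    return result
[/code]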
And that's all... counting processing time and trial and error, I spent more than 3 days from start to finish.