Author Topic: M101 HDR processing: StarTools vs PixInsight.  (Read 21628 times)

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #15 on: 2012 July 12 09:29:37 »
Quote
I would be very interested to learn more about that if you would like to explain what you mentioned above.

So here we go. The first icon is 'initial_crop_and_stretch'. This is a ProcessContainer with four processes:

- ImageIdentifier. This changes the image's name to 'M101', just to ease identification during the rest of the process.

- DynamicCrop. To crop black/noisy borders that might generate edge artifacts later. Now I see that I cropped the image too much at the top edge; as I said, this was just a quick and (rather) dirty test.

- SampleFormatConversion. To convert the original 16-bit image to 32-bit floating point format. This is always advisable when one has to apply strong histogram transformations and other heavy tortures to the data.

- HistogramTransformation. This is the initial nonlinear stretch, which is necessary to compress the dynamic range of the image (mainly because the HDRMultiscaleTransform tool does not work with linear data). I did nothing fancy here: just STF AutoStretch applied through the HistogramTransformation tool.
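
Regarding that last step: in case it helps to see what an STF-style autostretch computes, here is a rough Python/NumPy sketch of the idea (my own illustration, not the actual PixInsight code; the -2.8 sigma shadows clipping and 0.25 target background are, as far as I recall, the usual AutoStretch defaults):

import numpy as np

def mtf(m, x):
    # Midtones transfer function: maps 0 -> 0, 1 -> 1, and the midtones balance m -> 0.5.
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def autostretch(img16, shadows_sigma=-2.8, target_bg=0.25):
    x = img16.astype(np.float32) / 65535.0              # 16-bit -> 32-bit float in [0,1]
    med = float(np.median(x))
    mad = 1.4826 * float(np.median(np.abs(x - med)))    # robust dispersion estimate
    c0 = min(max(med + shadows_sigma * mad, 0.0), 1.0)  # shadows clipping point
    x = np.clip((x - c0) / (1.0 - c0), 0.0, 1.0)        # clip shadows, rescale to [0,1]
    m = mtf(target_bg, float(np.median(x)))             # balance that maps the median to target_bg
    return mtf(m, x)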

The second icon is a RangeSelection instance, which is used here to generate a simple mask that will allow us to recover saturated stars at the end of the process. Some of the stars in the image are completely saturated (obviously the brightest ones). This also includes the core of the galaxy nucleus, which we'll isolate as a 'star' thanks to HDR compression.

Saturated stars are very problematic because they are treated as high-dynamic range objects by the HDR wavelet transform algorithm, which tries to find a valid solution for them, as for the rest of the image. Obviously, there is no solution at all for a saturated star, and hence the resulting 'solution' is just an artifact. Non-saturated stars pose no problem, in general, with the HDRWT algorithm. In the case of this image we have an additional problem: some of the saturated stars are concave. This is a 3-D representation of one of the saturated stars in the original image (the bright one at 11 o'clock from the nucleus), generated with the 3DPlot script in PixInsight:

[image: 3D profile of the saturated star, rendered with the 3DPlot script]

Note that the saturated region doesn't have a flat profile, but a concave shape where the maximum brightness values are located on a ring around the center. The result of this object after nonlinear stretching and HDR compression with the HDRWT algorithm is as follows:

[image: the same star after nonlinear stretching and HDR compression]

As you can see, HDRWT has found a very good solution to this HDR problem. The result is not very appealing though, as it would be a better representation of a black hole than of a star ... :)

The best way to fix these problems is simple: don't let star saturation happen in your image. This can be achieved with short exposures integrated with the HDRComposition tool. The next best solution is protecting saturated stars during HDR compression, which can be done with a star mask (StarMask tool). In this case, however, I have opted for a simpler and faster solution: isolate all saturated pixels by thresholding, and use the resulting mask to repair the damaged stars after HDR processing.

RangeSelection is the tool of choice in these cases. I have set the lower limit parameter to 0.99 and the upper limit to its default value of 1.0. This effectively isolates all saturated regions. I have also set smoothness to 2.0 in order to soften the mask edges, which is necessary to prevent hard transitions when we use the mask.
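
In case you want to see the operation spelled out, this is roughly what that RangeSelection setup does, sketched with NumPy/SciPy (an approximation, not the real implementation):

import numpy as np
from scipy.ndimage import gaussian_filter

def range_mask(x, lower=0.99, upper=1.0, smoothness=2.0):
    # Select every pixel inside [lower, upper]: a binary mask of the saturated regions.
    m = ((x >= lower) & (x <= upper)).astype(np.float32)
    # Soften the edges with a Gaussian so the mask causes no hard transitions.
    return gaussian_filter(m, sigma=smoothness)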

The third icon is where the fun starts. It is a ProcessContainer instance with three HDRMultiscaleTransform instances and one CurvesTransformation instance. Instead of compressing the dynamic range in a single step, I have compressed it in three successive steps working at different dimensional scales. The three instances of HDRMT work with 8, 6 and 4 wavelet layers, respectively. To understand how these steps transform the image, take a look at the 8th, 6th and 4th layers of the wavelet decomposition of the stretched image (you can get these images with the ExtractWaveletLayers script):

[images: 8th, 6th and 4th wavelet layers of the stretched image]

The 8th layer supports large scale image structures, especially the spiral arms and the dark gaps between them. By compressing the dynamic range of the image up to 8 layers, we can preserve the representations of these structures in the final image. Without doing this (i.e., by applying HDRMT just to small-scale structures), the resulting image would be too 'flat', with small objects well represented but lacking an appropriate representation of the main subject of the image, which is the whole M101 galaxy.

The 6th layer supports what we can describe as medium-scale structures. It represents the largest structures within the spiral arms, such as HII regions and stellar clusters, and the nucleus of the galaxy. We also want to represent these structures appropriately in the final image, since they transport an essential part of the information that we want to communicate.

Finally, the 4th layer allows us to represent and enhance small-scale structures, such as high-contrast edges and substructures within HII regions and the spiral arms.

The divide and conquer paradigm is a basic algorithm design tool that gives rise to powerful data analysis and processing algorithms and techniques. By separating HDR compression into three steps as I have described above, we can solve the dynamic range problem posed by this image thoroughly, while representing the object correctly across a wide range of dimensional scales.
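
For those who want to experiment with these layers outside PixInsight: the decomposition behind them is essentially the à trous (starlet) wavelet transform with a B3-spline kernel. A minimal sketch (my own, not the ExtractWaveletLayers code; scale conventions may differ slightly between implementations):

import numpy as np
from scipy.ndimage import convolve1d

def wavelet_layers(x, n_layers=8):
    # À trous (starlet) decomposition: each detail layer supports structures at
    # roughly twice the scale of the previous one; the last element is the residual.
    k = np.array([1, 4, 6, 4, 1], dtype=np.float32) / 16.0  # B3-spline kernel
    layers = []
    c = x.astype(np.float32)
    for j in range(n_layers):
        kj = np.zeros(4 * 2**j + 1, dtype=np.float32)       # kernel dilated by 2^j
        kj[::2**j] = k
        s = convolve1d(convolve1d(c, kj, axis=0, mode='reflect'), kj, axis=1, mode='reflect')
        layers.append(c - s)                                # detail layer j+1
        c = s
    layers.append(c)                                        # large-scale residual
    return layers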

Other details of the HDRMT instances applied include:

- Two iterations of HDRMT (see the iterations parameter) have been applied with 8 and 6 wavelet layers. This has been done simply to increase the strength of the HDR compression operation. This is of course optional; one decides the amount of compression based on one's own experience and taste.

- HDRMT has been applied with the lightness mask option enabled. This applies dynamic range compression just where it is actually needed: on the brightest areas of the image. This option protects the background from excessive noise amplification and prevents most small-scale ringing problems.

- Large-scale deringing has been applied when working at 8 and 6 layers. When multiscale algorithms are applied at large dimensional scales, the risk of generating large-scale ringing problems must always be taken into account. Small-scale ringing artifacts are self-evident and easy to detect in general; for example, a black ring around a star is difficult to overlook. However, large-scale ringing problems can be difficult to discover, basically because they are mixed---sometimes in quite subtle ways---with true image structures. Ringing can be judged from a detailed comparison between the original and final images. You can also subtract them (or compute the absolute value of the difference) to perform a more quantitative analysis, as in the sketch below. IMO, in this example we have not introduced any significant ringing in the image, or in other words, we have not generated any ringing that can give rise to false structures.
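
As a concrete example of that quantitative check (a sketch; 'original' and 'processed' are placeholder names for the image before and after the HDR container):

import numpy as np

# original, processed: placeholder float arrays for the two versions of the image
d = np.abs(processed.astype(np.float64) - original.astype(np.float64))
print('mean |diff| =', d.mean(), 'max |diff| =', d.max())
# Inspect d itself (or its wavelet layers) to see where structures have changed.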

To evaluate the effects of HDR compression and ringing problems more objectively, let's compare the relevant scales of the original and processed images:

8th wavelet layer (scale of 256 pixels), original/processed:

[images: original / processed]

6th wavelet layer (scale of 64 pixels), original/processed:

[images: original / processed]

4th wavelet layer (scale of 16 pixels), original/processed:

[images: original / processed]

As you can see, the large-scale contents of the image have not changed much. Basically, only the gaps between the spiral arms have been slightly darkened, which increases the contrast of these structures. The core of the galaxy has also been reduced in brightness, since that is where most of the dynamic range compression has been applied (actually, the core is the only structure that poses a true HDR problem in this image). We have applied HDRMT with 8 layers mainly to protect these large-scale structures, in order to avoid a flat result.

The medium-scale structures (6th layer) have changed more, and the small-scale structures (4th layer) have been significantly enhanced, as you can see in the comparisons above.

The final process in the 'HDR' ProcessContainer is a CurvesTransformation instance. This is essentially an S-shaped contrast enhancement curve, with additional control points to protect the background and enhance the midtones. This curve is an important step that determines the general shape of our final processed image. It is perhaps the most creative component of the whole workflow, but also the one that requires the most experience to judge what an optimal result is in terms of the equilibrium among the different image structures.

The 'CLAHE' icon is a ProcessContainer with a single LocalHistogramEqualization instance, which has been applied masked with a duplicate of the image. The LocalHistogramEqualization tool has been written by Czech developer and PixInsight user Zbynek Vrastil, and is an excellent implementation of the Contrast-Limited Adaptive Histogram Equalization (CLAHE) algorithm. It is really good at controlling the local contrast of medium- and small-scale image structures. I have applied it masked to avoid noise intensification in low-SNR areas, such as the sky background and other dark structures. The result of this tool has been outstanding for this image, IMO.
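
If you want to play with the same idea outside PixInsight, scikit-image ships a CLAHE implementation that behaves similarly (an analogue, not the LocalHistogramEqualization code; the parameter values here are illustrative):

import numpy as np
from skimage import exposure

def masked_clahe(img, mask, kernel_size=64, clip_limit=0.01):
    # CLAHE boosts local contrast; the clip limit bounds noise amplification.
    eq = exposure.equalize_adapthist(img, kernel_size=kernel_size, clip_limit=clip_limit)
    # Blend through the mask so low-SNR areas (dark background) keep the original data.
    return mask * eq + (1.0 - mask) * img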

The 'star_repair' icon transports a PixelMath instance. This instance applies the following PixelMath expression:

range_mask + $T*(1 - range_mask)

where range_mask is the thresholded mask that we generated after the initial nonlinear stretch with the 'star_repair_mask' RangeSelection icon. The above expression replaces all saturated regions (saturated stars and the brightest part of the galaxy nucleus) with a Gaussian profile whose intensity is roughly proportional to the original brightness of each (unsaturated) structure. We know that we are working with Gaussian profiles because we convolved (smoothed) the mask with the RangeSelection tool (we applied a smoothness parameter of 2 sigma). The operation applied is just the standard masking operation:

x' = m*f(x) + (1 - m)*x

where x is the input image, m is the mask, f() is any transformation, and x' is the output image. This operation performs a proportional mix of the original and processed images: where the mask is white (m=1), the result is f(x); where the mask is black (m=0), the result is the original image x. Where the mask has an intermediate value 0 < m < 1, the result is a proportional mix of the transformed and input images. In our case, f(x) is the constant white value (f(x) = 1), so in the masked regions what we get is the RangeSelection mask itself.
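
As a tiny numerical illustration of that blend ('image' and 'mask' are placeholder names): with f(x) = 1, a pixel where the smoothed mask gives m = 0.6 and the image has x = 0.9 becomes 0.6*1 + 0.4*0.9 = 0.96, so the Gaussian falloff of the mask rebuilds a soft profile over each saturated core.

import numpy as np

def masked_blend(x, m, f):
    # x' = m*f(x) + (1 - m)*x : the standard masking operation.
    return m * f(x) + (1.0 - m) * x

# image, mask: placeholder arrays; f(x) = 1 (pure white), as in the star repair step.
repaired = masked_blend(image, mask, lambda x: np.ones_like(x))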

Finally, the 'final_stretch' icon applies a HistogramTransformation instance to cut a small part of the unused dynamic range at the shadows. I haven't cut it completely because too dark a background makes it more difficult to see dim objects, such as distant galaxies and background nebulae. I usually leave the mean background at a value of around 0.1.
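
In code terms, that shadows cut is just a linear rescale (a sketch; c is the shadows clipping point):

import numpy as np

def clip_shadows(x, c):
    # Discard the unused dynamic range below c and rescale the rest to [0,1].
    return np.clip((x - c) / (1.0 - c), 0.0, 1.0)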

Hope this helps.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline Ignacio

  • PixInsight Old Hand
  • ****
  • Posts: 375
    • PampaSkies
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #16 on: 2012 July 12 09:56:22 »
These discussions are always interesting, maybe because they are somewhat subjective.

To me, comparisons aside, the end result and one's enjoyment getting there are what count. A priori, I would not self-impose a strict ethics code against correcting hardware problems via software, as most of the time it is not about good will, but about $$$$. And even deconvolution could be viewed as correcting distortions introduced by the optical train, albeit applied globally.

For example, I find a Photoshop filter called "Lens Correction" quite useful when dealing with the chromatic aberration and coma introduced by telephoto lenses operating at low f-ratios.

cheers
Ignacio
« Last Edit: 2012 July 12 10:32:28 by idiaz »

Offline GrahamJC

  • Newcomer
  • Posts: 5
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #17 on: 2012 July 13 02:18:42 »
Juan - thanks for the very thorough explanation of the steps used in processing this image. For those of us starting our processing journey, this sort of information is (in my view) invaluable. It is all very well seeing the steps somebody has used, but if you do not understand why they were used, you cannot apply them intelligently.

Thanks again.

Graham

Offline IvoJager

  • Newcomer
  • Posts: 2
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #18 on: 2012 July 19 04:13:54 »
Hi all,

I just came across this thread and, as the creator of StarTools, I thought I'd say a few words.
First and foremost, I apologise if the comparison between the two programs offended anyone. This was not my intention.
PixInsight is arguably the most comprehensive astrophotography processing suite currently available and, as such, sets the benchmark for this type of software. The comparison should be seen in that light and nothing more. The particular image was chosen because it is the only image in the public domain (thank you Jim Misti!) that I could find that was 'rubber stamped' as 'PixInsight approved'. The comparison was not about bringing the data to the best standard possible; rather, it was about showing how to operate the software to reach a comparable standard in considerably less time.

PI and ST are two fundamentally different beasts, coming from diametrically opposed angles at every single step of the way - it appears we have very little to fear from each other.

StarTools is a labour of love and a not-for-profit endeavour on our part. Its sole purpose is to enthuse more people about our hobby/profession, to get more people interested in the (deep) night sky, to encourage people to take their compact camera, webcam, DSLR or CCD (fancy!  :P) and image the heavens. We're not in this to compete or to get rich; we're simply passionate about image processing and education. We hope the price point further reflects this. We charge a nominal fee to those who can bear it, to keep hosting going and to visit an imaging conference here and there, but that's it.
 
The project started out as a loose collection of command-line tools for personal use, born of a determination not to sink too much money into the hobby. I was stuck in the centre of light-polluted Melbourne AU, with eyepieces that exhibited more chromatic aberration than quality, a 15-year-old 8" Dob, a 6-year-old compact camera and a $7 webcam. Looking around for pre-existing software that would help me get the most out of my 'gear', I found nothing that addressed my needs or met my budget. I dealt with this the only way I knew how: writing software to, somehow, fix my woes.

Back then, while evaluating PixInsight, it was quite clear to me that we had fundamentally different philosophies when it came to software, user interfaces and, indeed, the application of algorithms and processing techniques.
As reiterated in this thread by the PixInsight Team, approximations and/or data-driven retouching are not within the realm of the PI team's consideration. This is a pity, as this 'no-retouching' stance precludes visualising otherwise hard-to-get-at objects like, for example, the Carina dwarf galaxy. More importantly, this stance effectively shuts out a large number of less well-equipped, less tech-savvy or less affluent users. How do I know this is the case? I was one of them! But instead of putting our wonderful hobby/profession in the 'too hard, too expensive' basket and giving up, I did something about it.

I must vehemently disagree with assertions in this thread that ST's modules are somehow black boxes or operate under a 'trust-me-it's-magic' policy. I'm sure we all read the same papers, visit the same conferences and seminars, and lie awake at night going over very similar code, having the same sort of eureka moments in the shower. :) And indeed, a lot of modules in ST are based on enhanced versions of the same algorithms that power PixInsight or Photoshop. However, ST's fundamental philosophy is one of 'meta previews': showing and controlling a chain of algorithms (akin to controlling a script or using a layer stack in Photoshop) specific to a higher-level concept, as opposed to performing operations one by one. Knowing in what context and order each individual algorithm is used, one can dispense with useless algorithm operations, parameters and settings (for example, parameters that would clip the signal), or useless/harmful combinations thereof. This makes for a cleaner and more beginner-friendly UI and gets more useful results, quicker. Again, it's a fundamental difference in philosophies: on the one hand we have an object-oriented UI (PI) with everything-and-the-kitchen-sink under your fingertips; on the other, a task-oriented UI (ST) with just the stuff you need for the task at hand.

Another key difference, it appears, is how algorithms are applied to the data. It appears PI is strongly focused on facilitating the operation (in an input->output fashion), where the user specifies each and every parameter, much like a mathematical formula. The ST approach is much more about facilitating the expression of a desired outcome, i.e. letting an algorithm (or stack of algorithms) converge on an acceptable solution based on more abstract, high-level settings, parameters and (sometimes conflicting!) user wishes. It is perhaps here that the 'magic box' misunderstanding comes from. If automatically assuming that 'signal clipping is bad' or that 'white stars should remain white' is magic, then call us The Black-Box Wizards if you must!

There is much I could say about the merits of the repro Juan did, but I won't. I will just say that, in my opinion, multi-scale processing techniques are blunt instruments that need to be wielded very carefully and don't lend themselves well to carefully formulated questions/tasks and straight-to-the-optimum problem solving, though inter-scale awareness can help somewhat. One thing is for sure: they are still useful and a hell of a lot of fun to experiment with in a 'season-to-taste' sort of fashion!

The perceived large-scale 'ringing' in the ST image is no such thing, however. The dynamic range optimization in StarTools is based on a Retinex and local histogram stretching hybrid algorithm (gen 5 is entering beta soon), which virtually does not ring, especially not at large scales. In the interest of local contrast, it is perfectly acceptable to dip below the mean sky background - the algorithm wouldn't be making full use of the dynamic range at its disposal in that part of the image if it didn't! All as long as it demonstrates real detail, which it does; the 'void' is a real feature in the disc and is readily visible in any image of M101. Of course, if such an aggressive approach is not desired, it can be dialled back at will. The user is still in full control at all times, always.

This has become a bit of an essay, but at the end of the day, my goal is to enthuse more people about imaging the skies. PixInsight played and continues to play a huge role in this, and for that I have to sincerely commend the whole team without reservation.

Clear skies!

Ivo Jager

Offline cs_pixinsight

  • PixInsight Addict
  • ***
  • Posts: 156
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #19 on: 2012 July 19 11:42:47 »
Carlos/Juan, thank you for the responses. Being primarily a terrestrial photographer, I purchased my equipment with that purpose in mind. Over the years, my interest has expanded to astrophotography and my equipment has been pushed into service in an area it was not optimized for. Astrophotography is ruthless in showing up optical flaws in any equipment, and alas mine is currently showing some coma in < 50mm images. I agree that fixing the problem at the hardware level is ultimately the best solution, but I don't currently have the funds to purchase DSLR lenses free of coma for the wide-field images I prefer to shoot.

I understand and respect your philosophy regarding the processing techniques you put into PI, but I'm not a scientist and care about the aesthetic part of the hobby much more than about keeping the data scientifically accurate. Having ways to fix the problems common to less expensive hardware, for those of us who can't afford the best equipment, is a godsend; it also opens up the hobby to a much larger audience because existing hardware can be repurposed.

Carlos, I look forward to seeing the new deconvolution routines and hope they will help with less-than-perfect tracking. I didn't mention it in my original post, but lens distortion correction would help in wide-field mosaic creation too. Coma and distortion are the two most difficult aspects to work around when generating mosaics from wide-angle lens frames.

Ivo, thank you for explaining your point of view in creating your software. Indeed, PI and ST are "diametrically opposed", but they are equally valid in helping astrophotographers produce their final images. Opening up this detail-oriented hobby to a wider audience is always welcome, and it helps all of us as the user base approaches a critical mass. More users = more equipment = more software = cheaper solutions.

Thank you all,
Craig

Offline Geoff

  • PixInsight Padawan
  • ****
  • Posts: 908
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #20 on: 2012 July 19 17:48:08 »
Quote
Another key difference, it appears, is how algorithms are applied to the data. It appears PI is strongly focused on facilitating the operation (in an input->output fashion), where the user specifies each and every parameter, much like a mathematical formula.
Well, not necessarily. The user is free to specify each and every parameter, but usually the default settings will produce a better result than the other commonly used software package. Presumably there are people who are happy to let the software do the work without too much user input. Tweaking the parameters enables you to go several steps better.
Geoff
Don't panic! (Douglas Adams)
Astrobin page at http://www.astrobin.com/users/Geoff/
Webpage (under construction) http://geoffsastro.smugmug.com/

Offline IvoJager

  • Newcomer
  • Posts: 2
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #21 on: 2012 July 19 19:34:36 »
Quote
Another key difference, it appears, is how algorithms are applied to the data. It appears PI is strongly focused on facilitating the operation (in an input->output fashion), where the user specifies each and every parameter, much like a mathematical formula.
Well, not necessarily. The user is free to specify each and every parameter, but usually the default settings will produce a better result than the other commonly used software package. Presumably there are people who are happy to let the software do the work without too much user input. Tweaking the parameters enables you to go several steps better.
Geoff
Hi Geoff,

Thanks for your reply. I think you misunderstood the point I was trying to make. It is not that PI (or other programs, for that matter) does not offer useful presets; it would be very inconvenient indeed if it didn't. The difference I was trying to point out is that the functionality is more granular in PI, requiring the user to chain multiple operations manually. In doing so, the functionality (because it HAS to be generic) has to offer the user the full array of settings and options, even when they are not appropriate in the context of the chain of operations performed to get to a desired outcome. As such, there is much scope to create undesirable results or to get a sequence wrong (for example, deconvolution after stretching).
It goes further than that, however; the results of settings further down the chain often depend on the results of settings further up the chain. A small change in a setting high up the chain can have big (potentially detrimental) consequences later on. This has a big impact on the predictability and repeatability of results. Back-propagation in such a scenario is impossible, as the algorithms used are not aware of each other or of their place in the sequence. For example, an algorithm that came first in the chain can't retrospectively be told to back off a little by the second algorithm, when the second algorithm in the chain finds it is clipping the signal as a result (using the first algorithm's output). The only way to correct the situation in traditional software is to hit the undo button and start again. Even software that allows you to script multiple operations has no way to let algorithms communicate back up the chain, because they are not aware of which algorithm precedes which, or of how to communicate with their parent algorithm.
As a human, being in control of all parameters at all times is simply no longer productive (or even desirable) when it comes to today's complex algorithms or chains of algorithms. This isn't the 90s anymore; we've moved beyond marveling at Unsharp Mask or simple multi-scale processing. Today's complex algorithms and possibilities require a level of abstraction away from controlling each and every parameter. Instead we have to guide the computer, nudging it in the right direction using meta parameters until the desired outcome is accomplished. The benefits are many: it's faster, more targeted, protects against 'overcooking' and accidental mistakes that only become apparent later on, and allows much tighter control of noise propagation.
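
To make this concrete, here is a deliberately simplified Python sketch (my own toy illustration, not StarTools code) of a chain where a later check can ask an earlier stage to back off:

import numpy as np

def run_chain(x, strength=1.0, max_passes=10):
    # Toy two-stage chain: a stretch followed by a clipping check.
    # If the check fails, the earlier stage is re-run with reduced strength,
    # i.e. the chain converges on an acceptable solution instead of just failing.
    y = x
    for _ in range(max_passes):
        y = x * (1.0 + strength)            # stage 1: naive stretch
        clipped = float(np.mean(y >= 1.0))  # stage 2: inspect stage 1's output
        if clipped < 0.001:                 # acceptable: almost no clipping
            break
        strength *= 0.7                     # ask stage 1 to back off and retry
    return np.clip(y, 0.0, 1.0)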

Cheers,

Ivo

Offline marekc

  • PixInsight Addict
  • ***
  • Posts: 177
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #22 on: 2012 July 20 08:53:39 »
I found Ivo's comments about algorithms communicating (or not communicating) with each other up and down the `chain' to be quite interesting. Perhaps I'm going rather far afield here, but Ivo reminded me of an idea that's been bouncing around my head for a little while now. Bear in mind, though, that since I'm not a programmer, I'm likely to come up with some rather crazy and impractical ideas...

I've sometimes dreamt of a software system that could handle the addition of more integration time without making the user re-do the processing steps. Here's how things normally work: We go out, and we shoot as much data as the weather, our schedules, and the Moon will allow. Then we take all of those lights and flats, along with our library of darks and biases, and we make an image. We have to make many, many decisions and perform many experiments, but we eventually get an image that we're more-or-less pleased with. In the back of our minds, however, we're always thinking about how much better the image would be if we could have acquired *more* hours of integration time.

Let's say the opportunity arises to do just that. We're able to go out and shoot more data. That's great, but now we have to re-do our processing. I realize this is almost certainly a `pipe dream', but sometimes I think ``wouldn't it be nice if I could just take the new lights and flats, and tell the computer to simply add them to what I've already done, and watch the image improve from the addition of the new data?''

I realize that's impractical, and it's a bit of a sin against processing - which is something we all enjoy for its own sake, I admit - but I guess I just get a bit tired sometimes. I sometimes find myself kind of exhausted after tweaking and iterating settings all day and night, and this fatigue acts as a bit of a *dis-*incentive against acquiring more data on my target, since the new data would mean starting all over again, at least to a large extent.

That's where a system that was a bit more `automated' or `black box' might come in handy, if such a system allowed me to dump additional new data into it, without having to start over again. I'm not saying that's the *only* way I'd want to process - don't get me wrong, I love PI, and it's a lot of fun - but it might make an interesting option.

- Marek

Offline astrodoc71

  • PixInsight Enthusiast
  • **
  • Posts: 93
Re: M101 HDR processing: StarTools vs PixInsight.
« Reply #23 on: 2012 August 02 02:30:57 »
Very interesting discussion!  I think when you start trying to correct your tracking errors with processing software, it's time to bag it, sell your rig and start downloading Hubble images to process! I'm new here, but looking forward to this learning process.