Author Topic: Opinion and Suggestion: What PI Needs . . .  (Read 11273 times)

Offline vicent_peris

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 988
    • http://www.astrofoto.es/
Re: Opinion and Suggestion: What PI Needs . . .
« Reply #30 on: 2016 May 26 16:26:48 »
Quote
This has nothing to do with philosophy. I totally agree with Vicent: the most important point is developing an eye for the picture and training a sense of what the next step is and how it could be done. It would be more effective if we could concentrate more on that, and less on getting the tools to work the way we want them to.

best regards, Tommy

Absolutely, yes. I have been able to design many of the tools in PixInsight because I always tried to stay a step ahead of the tools.

Moreover, training your eye becomes much more difficult as you add more and more automation to your workflow.


Best regards,
Vicent.
« Last Edit: 2016 May 26 16:32:55 by vicent_peris »

Offline Juan Conejero

  • PTeam Member
  • PixInsight Jedi Grand Master
  • ********
  • Posts: 7111
    • http://pixinsight.com/
Re: Opinion and Suggestion: What PI Needs . . .
« Reply #31 on: 2016 May 27 06:17:01 »
Quote
Drag/Drop of STF to TGVDenoise.

For what purpose? To define a local support, perhaps? Why do you think that STF, which has been designed *exclusively* to provide a thorough screen preview in order to identify problems, is necessarily valid as a local support map for a total generalized variation denoising algorithm? This just does not make sense. TGVDenoise's local support feature has a preview function that allows you to fine-tune the support map, in case you want to do it visually.

Quote
Use "Detected Stars" from StarAlignment as Input for DynamicPSF

Why do you think that stars detected by StarAlignment, which is an image registration tool with very specific internal constraints, are necessarily valid for DynamicPSF, a PSF modeling tool, also with very specific requirements?

Now imagine that this were feasible right now (it isn't, actually, without further processing of SA's set of detected stars). What happens if, say, after next summer, I decide to implement a significant improvement in SA's star selection routines that invalidates the "connection" between SA and DPSF? Should I remove that feature in that case? Or should I implement a further, potentially complex, hence potentially buggy, preprocessing of SA's set of detected stars to preserve compatibility between both tools? One of the main design principles of PixInsight is modularity, a key concept in software development and systems design, which is behind most of PixInsight's robustness and efficiency.

While implementing an automatic star detection feature in DynamicPSF would be quite easy (and I'll probably write it at some point), this functionality is already implemented in several scripts, such as SubframeSelector and FWHMEccentricity. DynamicPSF, by itself, can be used to generate a highly accurate and robust PSF model in just a few minutes, by selecting a few good stars (say 10 - 30, depending on the image) manually.

Quote
TGVDenoise Edge Protection from background's avgdev

If only it were so easy. While that may work in some specific cases, it simply does not work consistently. Edge protection is a critical parameter that must be fine-tuned manually. Noise reduction (which is *not* noise suppression, a serious conceptual error that we see so frequently) is a highly subjective task. What you understand as a good noise reduction result may easily be seen as insufficient or excessive by another person. Statistical dispersion does not work consistently in this case, even as a rough approximation.

Quote
- "gluing" of robust single processes to "actions" with a much reduced parameter set. Examples are already available in PixInsight's script section.

PixInsight is actually a development platform, not just an application. The concept of "action", as you are referring to it here, does not exist in PixInsight, simply because it does not make any sense. There is no gluing of recorded commands, as happens in other applications. Real image processing is much more sophisticated than that. If you analyze most of the scripts pertaining to the official distribution, you'll see that they implement genuinely new functionality and new algorithmic solutions, in many cases using existing tools as building blocks of more complex procedures, and in other cases starting from completely different paradigms. This has nothing to do with glued actions. Of course there are also pure automation scripts, BatchPreprocessing being the most notable instance (although BPP does more than simple automation, really), and there are a few relatively trivial scripts which have been included more as programming exercises than as really useful tools. There are also a few abandonware products that will probably be removed in the next version.

Quote
line drawing (for satellite removal)

Agreed. This is actually on the to-do list, along with a few other painting tools such as PaintBrush (similar to CloneStamp), although with very low priority.

However, to remove satellites, cosmics and other spurious data, we have powerful pixel rejection algorithms (and there will be more) implemented in the ImageIntegration tool. Using a manual line drawing tool for this task sounds a bit strange. That may be necessary in other applications, but definitely not here.

Quote
reducing strength of last operation comfortably with a slider.

This is a painting/retouching concept that does not make any sense in an image processing environment. "Reducing the strength" of an applied algorithm, unless the algorithm in question has some kind of strength parameter, does not lead to anything justifiable from an algorithmic point of view. If you want to merge two images, simply write "(a + b)/2" in PixelMath, or more generally, "(p*a + q*b)/(p + q)". But then call it "averaging two images", which is what it actually is, because by doing this you are not reducing the strength of anything. Admittedly, PixelMath should have real-time preview functionality, which was not realistic ten years ago but is perfectly doable with current hardware. This is also on the to-do list, with moderately high priority.
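To illustrate, here is a minimal PJSR sketch of that weighted average with p = 0.7 and q = 0.3. The view identifiers "original" and "processed" and the weights themselves are only placeholders, and the property names follow what a PixelMath process icon generates, so they may vary slightly between versions.

Code:
// Blend two images with explicit weights; equivalent to (p*a + q*b)/(p + q)
// with p = 0.7 and q = 0.3. "original" and "processed" are placeholder view ids.
var P = new PixelMath;
P.expression = "(0.7*original + 0.3*processed)/(0.7 + 0.3)";
P.useSingleExpression = true;   // apply the same expression to all channels
P.createNewImage = false;       // overwrite the target view
P.executeOn( ImageWindow.windowById( "original" ).mainView );

Calling this a weighted average of two images describes exactly what happens to the pixel data, which is the point: nothing's "strength" is being reduced.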

Quote
Anyone remember the time before Kai Wiechen introduced the BatchPreprocessingScript? I do.

Kai's contribution was unquestionably excellent. Personally, except sometimes for simple tasks, I tend to implement preprocessing with the ImageCalibration, StarAlignment and ImageIntegration processes step by step. Once you know how to calibrate images well, this is actually not slower, and it provides much more control. The integrated result of BPP is *useless*; it is just a quick preview. Image integration, which is the time-consuming trial-and-error part, cannot be done with BPP.
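Just to illustrate what the step-by-step route looks like when scripted, here is a rough PJSR sketch. All file paths and the reference frame are placeholders, the exact property names and array layouts are those of process icons generated on one particular version and may differ in yours, and the rejection choice is only an example.

Code:
// 1. Calibrate the light frames (placeholder master and light paths).
var IC = new ImageCalibration;
IC.targetFrames = [ // [enabled, path]
   [ true, "/data/lights/light-001.fit" ],
   [ true, "/data/lights/light-002.fit" ]
];
IC.masterBiasEnabled = true;  IC.masterBiasPath = "/data/masters/bias.xisf";
IC.masterDarkEnabled = true;  IC.masterDarkPath = "/data/masters/dark.xisf";
IC.masterFlatEnabled = true;  IC.masterFlatPath = "/data/masters/flat.xisf";
IC.outputDirectory = "/data/calibrated";
IC.executeGlobal();

// 2. Register the calibrated frames against a reference frame.
var SA = new StarAlignment;
SA.referenceImage = "/data/calibrated/light-001_c.xisf";
SA.referenceIsFile = true;
SA.targets = [ // [enabled, isFile, path]
   [ true, true, "/data/calibrated/light-001_c.xisf" ],
   [ true, true, "/data/calibrated/light-002_c.xisf" ]
];
SA.outputDirectory = "/data/registered";
SA.executeGlobal();

// 3. Integrate the registered frames. This is where the trial-and-error
//    work on rejection and normalization parameters happens.
var II = new ImageIntegration;
II.images = [ // [enabled, path, drizzlePath]
   [ true, "/data/registered/light-001_c_r.xisf", "" ],
   [ true, "/data/registered/light-002_c_r.xisf", "" ]
];
II.rejection = ImageIntegration.prototype.WinsorizedSigmaClip;
II.executeGlobal();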

Don't get me wrong. This is not to neglect the need for automation. Quite the contrary: there are many automated processes in PixInsight, probably more than in most competing applications, not to mention our advanced scripting capabilities. For example, most of the image preprocessing tools already automate very complex and large procedures. Consider the capabilities of Blink or SubframeSelector, just to name two well-known examples. There are many commercial applications out there, in some cases more expensive than a PixInsight license, that are simpler than one of these tools.

Quote
Make Stars Smaller
Less Crunchy More Fuzzy
Deep Space Noise Reduction
Fade Sharpen to Mostly Lighten
Increase Star Color
Lighten Only DSO and Dimmer Stars
Enhance DSO and Reduce Stars
Star Diffraction Spikes

... just to name a few "interesting" actions. All of them are nice painting/retouching concepts. Do you dare to compare PixInsight to this stuff? Really? To comment just on a few of them:

"Make Stars Smaller". While reducing the significance of stars artificially can be desirable in some special cases, particularly in some wide-field images, considering this task as something justifiable on a regular basis is a conceptual error. If your stars are bloated, they surely are bloated for some reason. Excluding instrumental and acquisition issues, which should be addressed at the hardware level, we have specific tools, such as AdaptiveStretch, HDRMultiscaleTransform and others, to prevent these problems by applying correct procedures to delinearize the data and solve high dynamic range problems. In any case, artificial star reduction should always be applied with great care using carefully built star masks, and this involves trial/error work.

"Increase Star Color". If your data set lacks chrominance, stars and other small-scale structures may lack color, especially dim ones. In such case, please acquire more color data, or live with your stars as they are if you cannot. It could be also that you have implemented some processing steps, particularly those involving local contrast enhancement, without protecting small structures adequately. In such case, rewind your processing back to the necessary point, analyze the problem, and restart. Anything else is just cheating.

"Enhance DSO and Reduce Stars". Two conceptual errors for the price of one.

"Star Diffraction Spikes". Whoa! Happy painting!

Quote
There is a subforum here titled "Wish list ... post them here and discuss them with us". Yesterday I quickly walked through all the pages of this forum and tried to count the number of threads with "0 replies". I found so many, many of them.

Admittedly, the "post them here and discuss them with us" part may raise expectations too high. I'll try to replace it with something more realistic.

Sorry if I cannot fulfill your expectations. I simply cannot do more. I am the only person who writes and maintains all of the code for the core application, all of the code for 99% of the tools and 100% of the development frameworks, who does all of the necessary work to keep the licensing system and the website running and, with the help of a few friends, performs most of the administrative and user support tasks. Besides that, I try to have a relatively normal personal life. No, this is not XXX Incorporated, YYY Corporation, ZZZ International, or anything similar. This is essentially the work of one person. And I am 52, with too many years spent writing code 12-16 hours a day and lots of nights without sleeping. You know, I feel that I simply cannot work nearly as hard as just five years ago. Nothing has been or is easy for me, very especially within the astrophotography community. With the exception of the users who have bought a license, some good friends and a bunch of people who have written new tools, scripts and tutorials, I owe nothing to anybody.

I would like to be able to write more documentation (even if an overwhelming majority of users never reads it), more development documentation (very necessary in the medium-long term; I am trying to work on this now), to implement GPU acceleration, zoomable/scrollable fast real-time previews for all tools, and so many fancy things. Unfortunately, I have to invest my time and resources to pursue smaller goals with much less wow effect. And one of these goals is an extremely nice INDIClient project started by a good developer, who is working on it for free in his spare time, including weekends. For sure I am going to help and support him.
Juan Conejero
PixInsight Development Team
http://pixinsight.com/

Offline STEVE333

  • PixInsight Addict
  • ***
  • Posts: 231
    • sk-images
Re: Opinion and Suggestion: What PI Needs . . .
« Reply #32 on: 2016 May 27 10:41:14 »
Juan - Thank you for developing and supporting this wonderful tool. 

Everyone is free to wish for what they want, but often we don't realize what is entailed in fulfilling those wishes.  Your workload sounds overwhelming to me.  I'm just glad that you have provided us with the best tool available for AP image processing.

All the best - a fairly new PI user.

Steve King
Retired Physicist/Engineer (74)
Telescopes:  WO Star71 ii, ES ED102 CF
Camera:  Canon T3 (modified)
Filters:  IDAS LPS-D1, Triad Tri-Band, STC Duo-Narrowband
Mount:  CEM40 EC
Software:  BYEOS, PHD2, PixInsight

http://www.SteveKing.Pictures/

Offline tommy_nawratil

  • Member
  • *
  • Posts: 53
Re: Opinion and Suggestion: What PI Needs . . .
« Reply #33 on: 2016 May 27 10:58:49 »
hi Juan,

Wow, I didn't know; I expected PI to be the work of a team of several people contributing full time.
That said, greatest respect and congratulations on what you have achieved with PixInsight and the platform!

A developer must follow his vision, ideas and general outlines, and be careful with how he uses his capacity;
he simply cannot easily follow the manifold wishes users send him. I work in a small company
that also develops small devices, so I can already appreciate what you are doing here.

I own an Astrel standalone CCD that can use the INDI library to control the mount etc., but I don't use that.
I even like to find objects with a finderscope sometimes; it's my joy to have my hands on everything manually rather than automatically.
So this camera lets me go without a laptop if I want to.
But I know someone who has put the INDI/KStars/Ekos stack on a Banana Pi and controls his mount, camera, offline plate solving and even focus with it.
There are a thousand ways to the stars.

Eager to see what emerges!

best, Tommy
« Last Edit: 2016 May 27 13:53:49 by tommy_nawratil »

Offline mmirot

  • PixInsight Padawan
  • ****
  • Posts: 881
Re: Opinion and Suggestion: What PI Needs . . .
« Reply #34 on: 2016 May 28 09:48:11 »
Quote
This has nothing to do with philosophy. I totally agree with Vicent: the most important point is developing an eye for the picture and training a sense of what the next step is and how it could be done. It would be more effective if we could concentrate more on that, and less on getting the tools to work the way we want them to.

best regards, Tommy

Absolutely, yes. I have been able to design many of the tools in PixInsight because I always tried to stay a step ahead of the tools.

Moreover, training your eye becomes much more difficult as you add more and more automation to your workflow.


Best regards,
Vicent.

Absolutely!

No shortcuts on this.

This thread is taking a bad direction.  We should dispense with the notion that a good image is just a few clicks away.

The whole platform is built for productivity.  It may not be clear to a beginner.
There are some improvements that could increase productivity even more.  More real-time previews, as Juan suggests, may be one.
(For my taste it really needs a zoom along with it.  Otherwise, I will still be applying a setting to a preview. :tongue: )
These things have their place but also their own limitations. Making things a bit easier to do does not have to sacrifice the interactive nature of the process. These goals are not mutually exclusive.

I hate that we keep bringing up BPP automation as an example. It is a good starting point and saves me time. I don't stop with the output as a final preprocessed image.  I also check my calibration frames to see if the calibration worked properly.

This tool could use a bit more flexibility, especially in the final stages. For example, the initial BPP integration output could be fed to the integration module as a starting point.
In a perfect world we could then define a preview from the output, interactively review multiple integration parameters, and choose the best settings.
That said, I suspect this would be a mighty costly programming effort just to save a few seconds of my time.

The most dangerous thing for development can be a good idea.
There are probably too many good ones floating around.  What I just said is just my idea; it may or may not be a good one.  My mother used to say, "If wishes were horses we would all take a ride".

I would like to see a few tweaks to existing tools now and again.  One example: some MLT and MMT sliders don't have fine numeric control.
At the same time, these are not essential improvements, and they take time to implement with many competing developments of higher priority.



Max