Drag/Drop of STF to TGVDenoise.
For what purpose? To define a local support, perhaps? Why do you think that STF, which has been designed *exclusively* to provide a thorough screen preview in order to identify problems, is necessarily valid as a local support map for a total generalized variation denoising algorithm? This just does not make sense. TGVDenoise's local support feature has a preview function that allows you to fine-tune the support map, in case you want to do it visually.
Use "Detected Stars" from StarAlignment as Input for DynamicPSF
Why do you think that stars detected by StarAlignment, which is an image registration tool with very specific internal constraints, are necessarily valid for DynamicPSF, a PSF modeling tool that also has very specific requirements?
Now imagine that this were feasible right now (it isn't, actually, without further processing of the set of SA's detected stars). What happens if, say, after next summer, I decide to implement a significant improvement in SA's star selection routines that invalidates the "connection" between SA and DPSF? Should I remove that feature in such a case? Or should I implement an additional, potentially complex, and hence potentially buggy, preprocessing of SA's set of detected stars to preserve compatibility between both tools? One of the main design principles of PixInsight is modularity, a key concept in software development and systems design, which is behind most of PixInsight's robustness and efficiency.
While implementing an automatic star detection feature in DynamicPSF would be quite easy (and I'll probably write it at some point), this functionality is already implemented in several scripts, such as SubframeSelector and FWHMEccentricity. DynamicPSF, by itself, can be used to generate a highly accurate and robust PSF model in just a few minutes, by manually selecting a few good stars (say 10-30, depending on the image).
TGVDenoise Edge Protection from background's avgdev
If only it were so easy. While that may work in some specific cases, it simply does not work consistently. Edge protection is a critical parameter that must be fine-tuned manually. Noise reduction (which is *not* noise suppression, a serious conceptual error that we see so frequently) is a highly subjective task. What you understand as a good noise reduction result may easily be seen as insufficient or excessive by another person. Statistical dispersion does not work consistently in this case, even as a rough approximation.
- "gluing" of robust single processes to "actions" with a much reduced parameter set. Examples are already available in PixInsight's script section.
PixInsight is actually a development platform, not just an application. The concept of "action", as you are referring to it here, does not exist in PixInsight, simply because it does not make any sense. There is no gluing of recorded commands, as happens in other applications. Real image processing is much more sophisticated than that. If you analyze most of the scripts included in the official distribution, you'll see that they implement genuinely new functionality and new algorithmic solutions, in many cases using existing tools as building blocks of more complex procedures, and in other cases starting from completely different paradigms. This has nothing to do with glued actions. Of course there are also pure automation scripts, BatchPreprocessing being the most notable instance (although BPP really does more than simple automation), and there are a few relatively trivial scripts that have been included more as programming exercises than as really useful tools. There are also a few abandonware products that will probably be removed in the next version.
line drawing (for satellite removal)
Agreed. This is actually on the to-do list, along with a few other painting tools such as PaintBrush (similar to CloneStamp), although with very low priority.
However, to remove satellite trails, cosmic ray hits and other spurious data, we have powerful pixel rejection algorithms (and there will be more) implemented in the ImageIntegration tool. Using a manual line drawing tool for this task sounds a bit strange. That may be necessary in other applications, but definitely not here.
reducing strength of last operation comfortably with a slider.
This is a painting/retouching concept that does not make any sense in an image processing environment. "Reducing the strength" of an applied algorithm, unless the algorithm in question has some kind of strength parameter, does not lead to anything justifiable from an algorithmic point of view. If you want to merge two images, simply write "(a+b)/2" in PixelMath, or more generally, "(p*a + q*b)/(p+q)". But then call this "averaging two images", which is what it actually is, because by doing this you are not reducing the strength of anything. Admittedly, PixelMath should have real-time preview functionality, which was not realistic ten years ago but is perfectly doable with current hardware. This is also on the to-do list with moderately high priority.
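As a minimal sketch of what such a "blend at 70%" actually means, assuming two hypothetical image identifiers, processed and original (any valid view identifiers would do), the PixelMath expression is just:

```
(0.7*processed + 0.3*original)/(0.7 + 0.3)
```

Here p = 0.7 and q = 0.3; since the weights already sum to one, the denominator is redundant, but it shows the general form "(p*a + q*b)/(p+q)". The result is a weighted average of two images, nothing more and nothing less.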
Anyone remember the time before Kai Wiechen introduced the BatchPreprocessingScript? I do.
Kai's contribution was unquestionably excellent. Personally, except sometimes for simple tasks, I tend to implement preprocessing with the ImageCalibration, ImageIntegration and StarAlignment processes step by step. Once you know well how to calibrate images, this is actually not slower and provides much more control. The integrated result of BPP is *useless*. It is just a quick preview. Image integration, which is the time-consuming trial-and-error part, cannot be done with BPP.
Don't get me wrong. This is not to neglect the need for automation. Quite the contrary: there are many automated processes in PixInsight, probably more than in most competing applications, not to mention our advanced scripting capabilities. For example, most of the image preprocessing tools already automate very complex and large procedures. Consider the capabilities of Blink or SubframeSelector, just to name two well-known examples. There are many commercial applications out there, in some cases more expensive than a PixInsight license, that are simpler than one of these tools.
Make Stars Smaller
Less Crunchy More Fuzzy
Deep Space Noise Reduction
Fade Sharpen to Mostly Lighten
Increase Star Color
Lighten Only DSO and Dimmer Stars
Enhance DSO and Reduce Stars
Star Diffraction Spikes
... just to name a few "interesting" actions. All of them are nice painting/retouching concepts. Do you dare to compare PixInsight to this stuff? Really? To comment just on a few of them:
"Make Stars Smaller". While reducing the significance of stars artificially can be desirable in some special cases, particularly in some wide-field images, considering this task as something justifiable on a regular basis is a conceptual error. If your stars are bloated, they surely are bloated for some reason. Excluding instrumental and acquisition issues, which should be addressed at the hardware level, we have specific tools, such as AdaptiveStretch, HDRMultiscaleTransform and others, to prevent these problems by applying correct procedures to delinearize the data and solve high dynamic range problems. In any case, artificial star reduction should always be applied with great care using carefully built star masks, and this involves trial/error work.
"Increase Star Color". If your data set lacks chrominance, stars and other small-scale structures may lack color, especially dim ones. In such case, please acquire more color data, or live with your stars as they are if you cannot. It could be also that you have implemented some processing steps, particularly those involving local contrast enhancement, without protecting small structures adequately. In such case, rewind your processing back to the necessary point, analyze the problem, and restart. Anything else is just cheating.
"Enhance DSO and Reduce Stars". Two conceptual errors for the price of one.
"Star Diffraction Spikes". Whoa! Happy painting!
There is a subforum here titled "Wish list ... post them here and discuss them with us". Yesterday I quickly walked through all of its pages and tried to count the number of threads with "0 replies". I found many, many of them.
Admittedly, the "post them here and discuss them with us" part may create too many expectations. I'll try to replace it with something more realistic.
Sorry if I cannot fulfill your expectations. I simply cannot do more. I am the only man who writes and maintains all of the code for the core application, all of the code for 99% of the tools and 100% of the development frameworks, who does all of the necessary work to keep the licensing system and the website running and, with the help of a few friends, performs most of the administrative and user support tasks. Besides that, I try to have a relatively normal personal life. No, this is not XXX Incorporated, YYY Corporation, ZZZ International, or anything similar. This is essentially the work of one person. And I am 52, with too many years spent writing code 12-16 hours a day and lots of nights without sleep. You know, I feel that I simply cannot work nearly as hard as I did just five years ago.

Nothing has been or is easy for me, especially within the astrophotography community. With the exception of the users who have bought a license, some good friends, and a bunch of people who have written new tools, scripts and tutorials, I owe nothing to anybody. I would like to be able to write more documentation (even if an overwhelming majority of users never reads it), more development documentation (very necessary in the medium to long term; I am trying to work on this now), to implement GPU acceleration, zoomable/scrollable fast real-time previews for all tools, and so many fancy things. Unfortunately, I have to invest my time and resources in pursuing smaller goals with much less wow effect. And one of these goals is an extremely nice INDIClient project started by a good developer, who is working on it for free in his spare time, including weekends. For sure I am going to help and support him.