Hi Balt
First of all, welcome to the world of astrophotography, and especially to this forum (and thanks for trying PixInsight).

A few tips/suggestions about your workflow:
- HistogramTransformation: Dragging the midtones to the first peak, leaving the Highlights and Shadows alone.
It is a good idea to set the white and black points of the image in your first HistogramTransformation. I usually use the automatic clipping, at 0.01% of pixels on both sides, but you may just use AutoZero for the highlights to avoid losing significant data. As for the midtones, there is no rule of thumb such as moving the slider to the peak. Use the RealTimePreview to see which value you like best. I go for a background around 0.1, with the peaks aligned just a bit to the right. If you have not corrected the chip's response (i.e. white balance), then you must adjust the midtones value of each channel separately to get the color balance right. Of course, there is a new process that deals with this problem, but I haven't used it yet.
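To make the midtones slider less mysterious: as far as I know it implements the standard midtones transfer function, which maps the chosen midtones balance m to 0.5 while pinning black to black and white to white. A minimal NumPy sketch (not PixInsight's actual code, just the math):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps input value m to 0.5,
    keeping 0 -> 0 and 1 -> 1. Both x and m are in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Lowering m below 0.5 brightens the midtones:
# a pixel at 0.1 is lifted to about 0.25 when m = 0.25.
print(float(mtf(0.1, 0.25)))
```

This is why dragging the slider left brightens the image: every value below the chosen m gets pushed up, without clipping either end.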

- DynamicBackgroundExtraction: selected several areas in the picture that seemed to mostly have sky noise, and subtracted that.
Since you have not used flats, and if there are no significant sky gradients, you should perform a division instead of a subtraction. This is done on your raw data, before the histogram adjustments. Otherwise, what you did is fine, but working with linear data is a bit easier.
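The reason for dividing is that vignetting and uneven illumination are multiplicative effects, like what flats correct, while subtraction only removes an additive offset. A toy NumPy illustration (values and the perfect background model are made up for the example):

```python
import numpy as np

# Toy field: uniform sky of 100 ADU plus a 50 ADU star in one corner,
# seen through a multiplicative vignetting falloff (no flats applied).
sky = 100.0
signal = np.zeros((4, 4))
signal[3, 3] = 50.0
vignette = np.linspace(1.0, 0.8, 16).reshape(4, 4)
raw = (sky + signal) * vignette

model = sky * vignette  # pretend DBE recovered the background perfectly

# Division undoes the multiplicative falloff, like a synthetic flat:
by_division = raw / model * sky       # recovers sky + signal everywhere

# Subtraction flattens the sky but leaves the star still attenuated:
by_subtraction = raw - model          # equals signal * vignette
```

With division the star comes back at its full 50 ADU; with subtraction it stays dimmed by the vignetting, which is why subtraction is only right for genuinely additive gradients (e.g. light pollution).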
- Picture still seemed too green, so I used the ColorSaturation dialog to give the greens a downward kink.
The ColorSaturation process is not a good candidate for this task. A color cast indicates that the overall balance is wrong, so you need to correct it with HistogramTransformation, Curves, the new process (I can't remember the name...), or even PixelMath. Slight green hues that appear even in a balanced image can easily be dealt with using SCNR.
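For reference, the usual "average neutral" SCNR recipe simply clamps green to the mean of red and blue, which removes a green excess without touching already-neutral pixels. A sketch of that idea (my understanding of the method, not PixInsight's implementation):

```python
import numpy as np

def scnr_average_neutral(rgb):
    """Average-neutral SCNR: clamp the green channel to (R + B) / 2.
    `rgb` is an (..., 3) array with values in [0, 1]."""
    out = rgb.copy()
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out[..., 1] = np.minimum(g, 0.5 * (r + b))
    return out

# Green-dominated pixel: green is clamped to (0.30 + 0.20) / 2 = 0.25.
pixel = np.array([[[0.30, 0.60, 0.20]]])
print(scnr_average_neutral(pixel)[0, 0])
```

Note that this only ever reduces green, which is why it is safe on a well-balanced image but is no substitute for fixing the balance itself.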
- Tried a HDRWT, seemed to emphasize the dark structures by quite a bit, but didn't look real, so undid that.
What is real?

Seriously, we are too accustomed to what has been done before, and we think that this is normal. Take some risks, and see if your processing gives the results you want or like; don't think about what others did. In the end, this is not science, it is a form of art. So, as long as you stay truthful to your data and do not cheat or do arbitrary things (like indiscriminate use of a cloning tool, manual selections/adjustments, etc.), you still have a lot of freedom to do just what you want. Have fun, and get the nicest image you can.

- Resorted to a regularized R-L deconvolution with default settings to sharpen things up a bit.
There is nothing wrong here, but I want to explain a little of the "philosophy" behind a deconvolution. Basically, it is a process intended to "reverse" the effect of something that blurred your image: most likely the seeing, problems with the optics, etc. So what we want is to go back to the pure "original data", or reconstruct the image. For this reason, a deconvolution should be among the very first processes applied, while the data is still linear.
Of course, there is nothing wrong with interpreting a deconvolution just as a sharpening tool. I have used it for this very reason even at later steps. But you have to be aware of what is happening to the image, so you may take some precautions, such as protecting the stars with a luminance mask, etc. Also, if your goal is just a final "sharpening effect", you may consider other tools that are faster and were specifically designed for this purpose: UnsharpMask, or even better, ATrousWavelets.
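To make the "reverse the blur" idea concrete, here is the plain (unregularized) Richardson–Lucy iteration in 1-D, with a made-up spike and PSF. PixInsight's regularized version adds noise control on top of this, so treat it only as a sketch of the core loop:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Plain Richardson-Lucy deconvolution in 1-D: repeatedly compare
    the blurred estimate with the observation and apply the correction."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a sharp spike with a small kernel, then try to recover it:
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

On noiseless data like this the spike re-concentrates nicely; on real data the same loop also amplifies noise, which is exactly why the regularized version, masks, and early (linear-stage) application matter.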
Now, there's a whole heap of crimes I'm sure I have committed in this process. How do the pros do it? I think one step I'm missing is the screen transfer function; I can't seem to find where that's done. Also, I would prefer to do dark and bias frame subtraction as well as image registration all in PixInsight, but with 120 frames that is a bit tedious, as there seems to be no automated way?
Not crimes

There are just better ways

The STF is just an aid while you are working with linear data, to display it more conveniently on your screen. It makes no pixel adjustments at all, so don't get confused on this point. It is just a way to tell PixInsight how to draw the image, without any real effect on the data or on how it will be written. While an STF is active, remember that this is not what others will see when they open your saved image. Once you have applied the first big midtones adjustment, discard the STF.
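The distinction is easy to show in code: the stretch is applied only to a copy that goes to the screen, while the linear data is left untouched. A toy sketch using the same midtones function the histogram tool is based on (values invented for the example):

```python
import numpy as np

def mtf(x, m):
    # Midtones transfer function, used here only to build a display copy.
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

linear = np.linspace(0.0, 0.02, 5)   # faint linear data, nearly black
display = mtf(linear, 0.005)         # strongly stretched copy for the screen

# `linear` is what gets saved to disk; only `display` was brightened.
```

This is the whole trick behind the STF: the screen sees `display`, the file sees `linear`, and anyone opening your saved image sees the unstretched data.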
About automatic image registration, etc.: it is in our plans, but I cannot give any release date.

For the moment, use DSS, which is a very good piece of software.
Now, about my workflow... wow, I do a lot of things!

But you may divide all the processes into two sets: those which work with linear data, and those which don't. The linear ones should be applied first, in any order, since the output will always be linear data too. Here you may have deconvolution, color balance, background division or subtraction, etc. When you are done, adjust the midtones, in any way you like (there are several methods). Then, my philosophy is "do what the image tells you it needs". Just look at it, think about what would make it better, and do that.

There are some tips, like it's better to reduce noise before sharpening, but in the end it's just a matter of taste and what is easier for you.