Hi Sander,
I'm nowhere near any of my PI machines - hence posing this as a 'thought experiment'.
Yes - assuming the FFT Registration script behaves in much the same way as the similar process in Registax, it should work. And probably more accurately than simply 'placing a dot at a bifurcation point', as I was considering doing.
However, if there is likely to be significant image distortion between the two images - as I expect there will be in the case of my retinal images - then I was imagining that DynamicAlignment's ability to resolve those distortions is exactly the power I was going to need.
I will need to look at the FFT script to see how it accomplishes things.
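My guess - and it is only a guess until I actually read the script - is that it does something along the lines of phase correlation. A rough sketch of that idea (in Python/scikit-image terms, purely to illustrate the principle, not PI code) would be:

```python
# Hedged sketch of FFT/phase-correlation alignment (NOT the actual PI script).
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def fft_align(reference, moving):
    """Estimate the global (row, col) shift of 'moving' relative to
    'reference' via phase correlation, then apply it."""
    offset, error, diffphase = phase_cross_correlation(reference, moving)
    # Shift the moving image by the estimated offset so it lines up
    # with the reference (sign convention worth double-checking).
    return nd_shift(moving, shift=offset)
```

If that is roughly what it does, then it only recovers a global shift - which is exactly why I suspect it won't cope with the local distortion in my retinal images, and why DynamicAlignment still appeals.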
Even now, though, I can visualise a similar scenario where a user might be trying to re-align something like an AllSkyCam image sequence. The 'normal' view of such a sequence, once animated, is to see the stars rotating overhead. However, the sequence might become more interesting - for various reasons - if that rotation were 'frozen' in the sky, so that all the 'fleeting' objects (meteors, satellites, aircraft, UFOs, etc.) appear over the 'frozen' backdrop. Perhaps FFT 'can' find points to assist with the image alignment, but perhaps it could also be 'aided' by the user 'marking up' reference points on each image. Yes, it might be tedious, but the result might be worth the effort!
Really, the fundamental question is: "How can I apply the RESULT of a DynamicAlignment process (created by aligning image B to image A) to a non-associated image C - accepting that images B and C must have the same physical dimensions?"
I don't see that this can be done by 'saving' the DA Process Icon and re-applying an instance of that process - after all, this simply tells the DA process 'where' to look for each of the alignment stars in the two images. It does NOT (if I remember correctly) store the actual 'transformation matrix' to allow the transformation to be applied 'ad hoc'.
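What I am really after is for the fitted transformation itself to be a re-usable object. In non-PI terms, something like this Python/scikit-image sketch is the behaviour I mean (the control points and file names here are purely hypothetical stand-ins for the bifurcation points I would mark by hand):

```python
# Hedged sketch (scikit-image, not PixInsight): fit a transform from
# control points matched between A (reference) and B (moving), then
# apply the *same* transform to the unrelated image C.
import numpy as np
from skimage import io
from skimage.transform import estimate_transform, warp

# Hypothetical hand-marked control points, as (x, y) = (column, row) pairs.
pts_A = np.array([[120.0,  80.0], [430.0,  95.0], [260.0, 380.0], [50.0, 300.0]])
pts_B = np.array([[118.5,  83.2], [427.9,  99.1], [257.4, 384.6], [48.2, 304.9]])

# Map reference (A) coordinates -> moving (B) coordinates; warp() uses this
# as its inverse map, so the warped result lands in A's frame.
tform = estimate_transform('projective', pts_A, pts_B)

B = io.imread('imageB.tif')   # hypothetical file names
C = io.imread('imageC.tif')   # same pixel dimensions as B

B_aligned = warp(B, tform)    # sanity check against A
C_aligned = warp(C, tform)    # the step DynamicAlignment won't let me do
```

A simple projective fit obviously won't reproduce the local distortion handling that DynamicAlignment provides, but the principle is the same with a higher-order or piecewise model - the key point is that the fitted transform is kept, and can then be re-applied to C at will.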
As I said, one obvious solution is to 'burn in' some artificial alignment 'stars' onto the image, and then to align the data using those reference points. But that leaves the original data 'permanently scarred', which is not how we like to do things in PI, is it? Yes, the ability to use 'layers' would be a perfect workaround in this case - but not at the moment.
Can I use the 'alpha channel' to perform the alignment?
Sure, if I were working with a mono image (which I am not), then I could add a 'marker' layer into - for example - the G channel, with my mono data in the R channel, and then just delete the G channel when finished.
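For what it's worth, that mono-image kludge would amount to nothing more than this (a numpy sketch, with made-up marker coordinates):

```python
# Numpy sketch of the 'spare channel' kludge for a MONO image:
# real data in R, synthetic alignment markers in G, dummy B channel;
# align the RGB composite, then keep only R afterwards.
import numpy as np

def build_composite(mono, marker_coords, marker_value=1.0):
    """Pack the mono data and synthetic 'stars' into separate channels."""
    markers = np.zeros_like(mono)
    for row, col in marker_coords:
        markers[row, col] = marker_value      # single-pixel artificial 'star'
    return np.dstack([mono, markers, np.zeros_like(mono)])

# ... align the composite (DynamicAlignment, or whatever) ...

def extract_mono(aligned_rgb):
    """Throw the marker and dummy channels away again."""
    return aligned_rgb[..., 0]
```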
Any thoughts anybody?
Cheers,