I have a stack of 18 images. One of them is rotated roughly 15-20 degrees from the rest. Will that show up in the stacked image as black corners, or is the image integration process smart enough to only use the other 17 images in the areas where the rotated frame falls outside the FOV?
I realize that I can experiment, but I'm early in the acquisition process and I was wondering if this is handled by the app or if it's too complicated to deal with different calculations for different areas around the image. In this particular case, I don't want to crop out the affected areas, so I'm guessing that the best bet is to just reject the frame.
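To make the question concrete, here's a rough sketch (Python/NumPy, purely illustrative, not any particular app's implementation) of the kind of per-pixel handling I mean, where uncovered regions of the rotated frame are marked as NaN after registration:

```python
import numpy as np

def stack_with_coverage(frames):
    """Average registered frames per pixel, ignoring frames with no data there.

    frames: list of 2D arrays of equal shape, NaN where a frame has no data
    (e.g. the corners a rotated frame doesn't cover after alignment).
    """
    cube = np.stack(frames, axis=0)        # shape: (n_frames, H, W)
    valid = ~np.isnan(cube)                # per-pixel coverage mask
    counts = valid.sum(axis=0)             # how many frames cover each pixel
    total = np.nansum(cube, axis=0)        # sum over only the covering frames
    mean = np.where(counts > 0, total / np.maximum(counts, 1), 0.0)
    return mean, counts                    # counts map: 17 vs. 18 coverage
```

If the app does something along these lines, the corners would just be an average of 17 frames instead of 18 (slightly noisier, but not black).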
Thanks