In a day or so NASA will release the raw image data of Jupiter's Great Red Spot from the Juno probe, taken July 10th on its recent flyby - it was REALLY close, only some 5,800 miles above it. I was wondering if anyone here has developed a PI script to process the raw image - a 1648-pixel-wide by 384×n-pixel-high "strip", where each of the n 384-pixel-high frames contains three 128-pixel-high blue-, green- and red-filtered mono images, or "framelets" - into a single colour-combined image.
The only unusual part for PI is separating the n 384-pixel-high frames from the raw image "strip" into separate image files, then sub-dividing each frame into three 1648-pixel-wide by 128-pixel-high "framelet" images to separate the colours. After that, you have to stitch together the same-colour framelets to create a set of three full-sized filtered images. Then standard PI colour-combination processing can be done.
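For anyone curious, here is a rough Python sketch of that separation-and-stitching step, just to make the geometry concrete. It assumes the raw strip is an 8-bit grayscale PNG and that the framelet order within each 384-pixel frame is blue, then green, then red from top to bottom (those details, and the filenames, are my assumptions, not anything from NASA's documentation), and it does nothing about any geometric overlap between framelets:

```python
# Sketch of frame/framelet separation and stitching for a JunoCam-style strip.
# Assumptions: 8-bit grayscale PNG input; framelet order B, G, R within a frame.
import numpy as np
from PIL import Image

FRAME_H = 384      # height of one frame in the strip
FRAMELET_H = 128   # height of one colour framelet within a frame

def split_and_stitch(strip_path):
    strip = np.asarray(Image.open(strip_path))
    n_frames = strip.shape[0] // FRAME_H

    # Collect same-colour framelets: index 0 = blue, 1 = green, 2 = red
    channels = {0: [], 1: [], 2: []}
    for f in range(n_frames):
        frame = strip[f * FRAME_H:(f + 1) * FRAME_H, :]
        for c in range(3):
            channels[c].append(frame[c * FRAMELET_H:(c + 1) * FRAMELET_H, :])

    # Stack each colour's framelets into a single full-height filtered image
    blue  = np.vstack(channels[0])
    green = np.vstack(channels[1])
    red   = np.vstack(channels[2])
    return red, green, blue

if __name__ == "__main__":
    # Hypothetical filename; write the three filtered images out for PI colour combination
    r, g, b = split_and_stitch("JNCE_raw_strip.png")
    for name, data in (("red", r), ("green", g), ("blue", b)):
        Image.fromarray(data).save(f"strip_{name}.png")
```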
There are some purpose-built programs to process these images (and do a lot more as well), like SPICE and ISIS3, but after looking at them, they're more than I want to learn for now. I can do all the steps manually using various tools, but creating finished three-colour images from NASA-supplied raw image data from space-faring probes seemed like it might fit into the fringe of PI's bailiwick.
I was just starting to fiddle with PixelMath to see if the necessary functions exist to automate separating the framelets and stitching them back together when it occurred to me that somebody here may already have done it. Anyone? Or is my application just a little too far outside the design parameters of PI?
Failing that, has anyone here developed a Python or Perl script to do bulk frame/framelet separation and stitching? Any suggestions welcome.
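If it helps frame the ask: the bulk part I have in mind is just a thin driver over the split_and_stitch() helper sketched above, run over a directory of raw strips. Something along these lines (directory names are placeholders):

```python
# Minimal bulk-processing sketch, assuming the split_and_stitch() helper from
# the earlier snippet and a directory of raw strips named juno_raw/*.png.
import glob
import os
from PIL import Image

os.makedirs("stitched", exist_ok=True)
for path in glob.glob("juno_raw/*.png"):
    r, g, b = split_and_stitch(path)
    stem = os.path.splitext(os.path.basename(path))[0]
    for name, data in (("R", r), ("G", g), ("B", b)):
        Image.fromarray(data).save(f"stitched/{stem}_{name}.png")
```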