PixInsight Forum (historical)
PixInsight => General => Topic started by: sreilly on 2018 May 14 14:47:37
-
So I have an imaging system at home, a 12.5" OGS RC with an STL-11000, and I run a system at SRO for a friend that has an OGS 12.5" RC and an STX-16803 camera. At home I use 15-minute subs for my 1x1 images, and at SRO I use 10 minutes for the same. As far as processing goes, each set has its own darks, flats, and so on, is reduced separately as it should be, and then aligned as such. I experimented this morning and took a group of 6 from each camera on the same target to see how well PI would align them as one group, and it handled them without a problem, as expected. It just happens that both sets are at a PA of 0 degrees so there's no rotation, but I don't expect that would be a problem if they were different.
I expect the proper way to combine these images would be to use the HDRComposition process on both sets of data. I haven't finished processing all the raw data yet, but I should have that done in the next few days. This is a project I've been working on since last year, so there is a ton of data to work with from both sites. But the real question is: if these images are in the same range as far as signal, is there a real need for HDRComposition? I understand the need when I did M42, but this isn't like that, with massively different ranges in signal. Any thoughts?
And for the record for those that don't know, the STX-16803 covers the entire STL-11000 chip with extra area above and below.
-Steve
-
Steve
I would use ImageIntegration. The tool requires 3 or more files to work; the workaround is simply to make a copy of each image so you have 4. I'm not sure of the parameters you would set, but it should be fairly straightforward.
Mike
-
yes, i would also just use II. HDRComposition is really for instances where you have overexposed parts of longer exposures. i doubt you have any overexposed pixels from either system... right?
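just to illustrate (a toy python sketch of the basic idea, not PI's actual implementation - the saturation level and the linear matching are my own simplification):
[code]
import numpy as np

def hdr_combine(long_img, short_img, sat_level=0.95):
    # toy HDR composition: linearly match the short exposure to the long
    # one over unsaturated pixels, then substitute where the long is clipped
    good = long_img < sat_level
    a, b = np.polyfit(short_img[good], long_img[good], 1)
    out = long_img.copy()
    out[~good] = a * short_img[~good] + b
    return out
[/code]
if your two systems' stacks have essentially the same dynamic range, that substitution step has nothing to do, which is why plain integration should be enough.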
rob
-
I regularly combine master integrations from different scopes using ImageIntegration and it works well.
Since you have the same image scale for both sets of data, it may be worth attempting to combine the subs from both cameras in a single integration. I'd try that too and see what produces the best result.
Cheers,
Rick.
-
As I get the data reduced, I'll start combining and see what the end result is using both the plain vanilla combine and HDRComposition. It may prove interesting. I have a ton of data for multiple projects from both systems.
-Steve
-
Dear Steve,
Hi - I've done some experimenting with this while trying to combine data from three different scopes, each with its own camera etc. I found that ImageIntegration was NOT the best way to combine the data.
Of the three cameras we had, one had a much lower gain (ADU/electron), and hence a lower signal. ImageIntegration gave those frames a much higher weight than the other two, despite the fact that the S/N was much the same.
What seemed to be happening was that since the signal was lower, the variance in the signal was also lower, which II took to mean that the image was higher quality (more photons), and hence it increased the weight of those subs. This is just a guess, but even if it's not correct, the effect was the same - we were getting too high a weight from the camera with the lower ADU/electron.
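To put made-up numbers on that guess (this only illustrates the suspected mechanism; it's not what II actually computes):
[code]
import numpy as np

electrons = 10000                    # both cameras detect the same photons
shot_noise_e = np.sqrt(electrons)    # ~100 e- of shot noise either way

for name, gain in [("low-gain cam", 0.5), ("high-gain cam", 1.0)]:
    signal_adu = electrons * gain    # gain here in ADU per electron
    noise_adu = shot_noise_e * gain  # the ADU-domain noise shrinks too
    print(name, signal_adu, noise_adu, signal_adu / noise_adu)  # SNR = 100 both
[/code]
A weight driven by the ADU-domain noise estimate alone favours the low-gain camera (50 ADU vs 100 ADU of noise) even though the true S/N is identical.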
Instead, we developed an alternative approach - the idea is to weight each counted photon (electron) equally. With this approach you don't have to take into account differences in exposure length or quantum efficiency, since these show up in the counted photons in each sub. However, you do need to take account of the number of subs, the different gains, and the angular size of the pixels (described by the image scale, arcsec/pixel).
The last one is interesting, and a side-effect of the way PI deals with aligning images of different scales. Imagine one camera has a scale of 2"/pixel, the second 1"/pixel, and you have aligned the 2"/pixel image to the 1"/pixel image (which preserves the detail where it exists). A 1"/pixel pixel may have captured 10 photons. All other things being equal, a 2"/pixel pixel (four times the area) would have captured 40 photons. When aligned to the 1"/pixel image, PI will effectively create 4 sub-pixels, but all with a signal level corresponding to 40 photons (PI uses a form of interpolation). What this means is that you have to divide by the area of the pixel to take out this effect.
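In numbers (a toy version of the example above):
[code]
flux_fine = 10                  # photons in one 1"/px pixel
flux_coarse = 4 * flux_fine     # same patch of sky in one 2"/px pixel (4x area)

# interpolation onto the 1"/px grid gives each of the 4 sub-pixels a level
# of ~40 photons instead of ~10:
subpixel_level = flux_coarse                    # 40

pixel_area_ratio = (2.0 / 1.0) ** 2             # = 4
corrected = subpixel_level / pixel_area_ratio   # back to ~10, now consistent
[/code]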
So our process was:
1) Everybody stacked their image with their own flats, lights, darks etc. as required, optimised their own stacking and produced a master.
2) The masters were then aligned to the highest resolution image so as not to lose the fine scale data where we had it.
3) We then combined the images in PixelMath, with a weight equal to Nsubs/gain/PixelArea (see the sketch after these definitions):
Nsubs - the master contains the "average" number of ADU per sub; multiplying by Nsubs gives you the total ADU detected.
gain - in ADU/electron; dividing the signal by the gain converts to electrons, and hence detected photons (if your gain is in electrons/ADU, multiply by it instead).
PixelArea - as discussed above. Don't forget to take account of binning (if not 1x1) when determining the area.
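As a minimal Python sketch of the whole weighting (the masters, sub counts and gains here are made up):
[code]
# name: (Nsubs, gain in ADU per e-, image scale in arcsec/px)
masters = {"scope_A": (20, 1.25, 1.0),
           "scope_B": (12, 2.00, 2.0)}

weights = {}
for name, (nsubs, gain, scale) in masters.items():
    pixel_area = scale ** 2              # arcsec^2 per (1x1-binned) pixel
    weights[name] = nsubs / gain / pixel_area

# PixelMath equivalent: (wA*A + wB*B) / (wA + wB)
total = sum(weights.values())
print({name: w / total for name, w in weights.items()})
[/code]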
This definitely produced better images for us, though since we all processed the images separately, it was obvious that the skill of the processor mattered more than the marginal gain we got from optimising the stacking process (and sadly, our best processor used Photoshop and couldn't really tell us how to improve our processing in PI!).
Hope this helps,
Colin
-
Hello Colin,
That's an interesting approach. The STL-11000 has an A/D gain of 0.8 e-/ADU while the STX-16803 is 1.27 e-/ADU. All subs are binned 1x1, both cameras have 9 micron square pixels, and both scopes are the same optical system, 12.5" OGS RCs at f/9. Calculated fields of view show 0.62 arcseconds per pixel for both. The area is of course different, with the STX covering all of the STL's field plus additional area above/below depending on the orientation. The STX covers 36.8 mm x 36.8 mm while the STL is only 36 mm x 24.7 mm. I'll print out your reply and keep it in mind when processing.
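If I'm following correctly, since our image scales and binning match, the pixel-area term cancels and the relative weight comes down to the sub counts and gains. A quick sketch (my gains are in e-/ADU, so I multiply rather than divide, and the sub counts are placeholders until I finish the stacks):
[code]
nsubs_stl, gain_stl = 30, 0.80    # STL-11000; Nsubs values are placeholders
nsubs_stx, gain_stx = 30, 1.27    # STX-16803

w_stl = nsubs_stl * gain_stl      # equal pixel areas -> area term omitted
w_stx = nsubs_stx * gain_stx
print(w_stx / w_stl)              # ~1.59: each STX ADU carries ~1.59x the photons
[/code]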
Thanks for the reply,
Steve
-
I hope I'm understanding your situation correctly. I have been combining data from 2 and 3 cameras at once for about 6 months now. I wish I had seen this post earlier, but hopefully this might still be of some use.
First I create a master registration frame from each camera.
I pick a few of the best images from each camera and stack these to create a reference frame.
Next I use StarAlignment and select the Register/Union - Separate working mode with Frame Adaptation checked.
This will fit each image to the reference; you may want the reference to be the frame with the highest SNR.
Next, use DynamicAlignment to merge the registered union-separate images. Select images 1 and 2, then select stars in the corners and the center of the image. You may need to manually train the second image on the first 2 stars. Run the tool to get a master mosaic.
If there are more than 2 images, you will need to add each consecutive image to the mosaic created by the first 2.
Now take your master mosaic, use the registration tool again, and re-register all of your data to it. Be sure to keep Frame Adaptation checked.
Finally, integrate everything. I prefer to use linear fit clipping and large-scale pixel rejection. It's a good idea to spend time fine-tuning with an ROI over a small area at this step, especially if you are using several hundred subs. I have found that this gives a very clean stack with smooth edge transitions. If I image over 2 or 3 nights with a dual-camera setup, this technique helps out a lot.
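For anyone curious, this is roughly the idea behind the linear fit rejection (a toy per-pixel Python sketch, not PI's actual implementation; the clip thresholds and iteration count are arbitrary):
[code]
import numpy as np

def linear_fit_clip(stack, low=2.5, high=2.5, iters=3):
    # toy rejection at one pixel position: fit a straight line to this
    # pixel's values across the subs and reject samples far from the fit
    idx = np.arange(len(stack), dtype=float)
    keep = np.ones(len(stack), dtype=bool)
    for _ in range(iters):
        a, b = np.polyfit(idx[keep], stack[keep], 1)
        resid = stack - (a * idx + b)
        sigma = resid[keep].std()
        if sigma == 0:
            break
        keep = (resid > -low * sigma) & (resid < high * sigma)
        if keep.sum() < 3:
            break
    return stack[keep].mean()    # integrate the surviving samples
[/code]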
-
not sure if this is what the OP is after, but in your case why don't you just use StarGenerator to make a synthetic reference image with the right center coordinates, desired image scale and size and use that instead of the first 3 steps?
rob
-
I've tried that a few times, but the RAM maxes out and the computer will just compute for hours. It never actually registers the image. I've tried reducing the limiting magnitude of the synthetic field's stars, but it seems if I go below mag 10, I get an error in StarAlignment and it won't work. I think it might be because I am undersampling, but even so, I select the FWHM of my stars and have never been able to get it to work. It would be a lot easier if it did, though!
-
i guess you can also use the MosaicByCoordinates script, which does the same thing internally - i've never had it fail due to resource limitations (in fact i don't think it's ever failed...) but maybe you are dealing with super widefield images? even so, i just saw an example of a really wide-field image done with MBC, so it should work.
rob
-
I'm trying to create an image from two sets of data taken with the same scope. One set was taken with a 16803 sensor and the other with a 16200 sensor. All of the data is quite clean. I've calibrated the frames and selected a master image from the 16200 camera, but I cannot figure out how to get the 16803 data rescaled and aligned to the master 16200 frame. I've tried just about every option I can think of, and the alignment process fails every time with the message:
*** 0 star pair matches found - need at least six matched stars.
*** Error: Unable to find an initial set of putative star pair matches
<* failed *>
I must be missing something simple. Can someone tell me how to align the data so that I can integrate the whole set of data?
Thanks!
John
-
are the FOVs and scale quite different? i guess the 16803 is physically bigger, right?
if so, you can open the wider-field image and define a preview on it which approximates the FOV of the other chip (hopefully the wider-field image is the master you're trying to use as the reference), then set the SA reference to be that view rather than a file on disk. i think "restrict to previews" is checked by default in SA, so it will just pick up the preview.
regardless it's a little weird since the pixel size and dimensions of both sensors are somewhat comparable (right?).
you might have to upload a couple of images...
rob
-
Rob,
Thanks for the response. The 16803 data is 4,000 x 4,000 px with an image scale of 0.478"/px, and the 16200 data is 4,500 x 3,600 px at 0.319"/px. The 16803 has a wider FOV, but I'm trying to align to the narrower FOV so that the 16803 data is up-sampled to match the 16200 data--rather than the other way around. You can check out two of the files at: https://www.dropbox.com/sh/xo97we7hyqe5c74/AADQp-cI6-0Z5QEvPjSrapK5a?dl=0.
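For reference, the scale ratio works out to about a 1.5x up-sample (quick arithmetic):
[code]
scale_16803, scale_16200 = 0.478, 0.319   # arcsec/px
factor = scale_16803 / scale_16200
print(factor)            # ~1.498, so the 16803 data is upsampled ~1.5x
print(4000 * factor)     # each 16803 frame becomes roughly 5994 px across
[/code]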
John
PS Notice that these two sets of data are mirror-imaged. Back when I took the 16803 data I simply downloaded the raw data from the camera; now I apply an image flip to correct for the mirror in the ONAG that I use. I've tried aligning the data both with and without the flip, and neither approach works. The different image scale appears to be throwing the star-matching algorithm off. I've also tried resampling the 16803 data to the same image scale as the 16200 data, but that doesn't work either. I noticed that the resolution in the file header doesn't get modified when the image is resampled, so that may be what is throwing things off... but who knows? How do "hints" work? Maybe that's the secret?
-
John
I hope this is what you are looking for. I first mirrored the 16803 image horizontally, then defined a preview in each image in roughly the same location. I set the 16200 image as the reference in StarAlignment and executed StarAlignment on the 16803 image.
Mike
-
to be honest i think the problem here is that something is wrong with the calibration of the images. if you zoom in on those subs you'll see black specks all over the place in both images; it seems worse in the 16803 image. in the 16200 image there are a lot of what appear to be hot pixels, and what's happening is that the star detector thinks many of those hot pixels are stars. faced with loads of bogus stars, SA can't find a match. however, when you define a preview like mike has done, that limits the number of stars sent to the RANSAC stage and it does find a match. with images this similar there should really be no need to define previews, so i feel the actual problem is in the star detection phase.
so i wonder if the calibration went wrong somehow, either with the darks not matching or perhaps being scaled too much.
anyway, i was able to register the 16803 image to the 16200 image by first mirroring the 16803 image as mike did, then, without any previews, running SA after increasing Hot Pixel Removal to 2 and Noise Reduction to 2. it actually finds a solution with only hot pixel removal set to 2, but it tries a lot harder. with the noise reduction increased it solves right away.
anyway, the general flow for debugging problems like these is to set SA to "detected stars" mode and see what you get. if there are a bunch of crosshairs on things that aren't stars, it's a GIGO situation. you can try tweaking the star detection parameters to improve the detection quality and go from there.
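to make the hot-pixel point concrete, here's a toy python sketch of why a median-based hot pixel removal step kills single-pixel "stars" (not SA's actual code; the 3x3 window and threshold are my own choices):
[code]
import numpy as np
from scipy.ndimage import median_filter

def flag_hot_pixels(img, k=5.0):
    # a lone hot pixel vanishes under a 3x3 median, while a real star
    # (FWHM of a few px) survives it, so the residual exposes the fakes
    resid = img - median_filter(img, size=3)
    return resid > k * resid.std()
[/code]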
rob
-
Mike,
Wow! That's exactly what I'm trying to do, except that I can't get it to work. I think I did exactly what you did, and I get the errors shown in the first screen shot. The second screen shot shows the previews that I picked. I'm loading the 16200 view as the reference, and I've tried aligning the preview and the full image; neither works. What other settings did you use? I'm either missing something or something is screwy.
John
-
Rob,
Yeah, I'm not sure what's up with the black specks in the 16803 data. I'll have to go back and add a pedestal to make sure that nothing is getting clipped when that data is calibrated. I grabbed calibrated data from 3 years ago, so it may be a challenge to fix that. When I first tried this, I spent some time refining the star selection parameters and I see crosses on 99% of the stars. As you can see, I set the hot pixel removal and NR to 2 as well, with no success. The calibrated 16200 data looks 4 orders of magnitude better than the raw data, so I assume that any remaining bright pixels are uncorrected hot pixels or cosmic-ray hits. Other than adding a pedestal to fix any clipped black pixels, I'm not sure what I can do about the residual bright pixels. Clearly, you guys can get it to work, so I'm doing something wrong here...
John
-
i had, in the star detection pane from top to bottom: 5, 0, 2, 2, -1.0, 0.8, 0.5, 1.0.
these are the defaults except for the hot pixel removal and noise reduction.
with those settings i see 1207 stars in the 16200 image and 2932 stars in the 16803 image. with everything set to defaults, there were 1817 stars detected in the 16200 image and 4395 stars found in the 16803 image. most of those 4395 stars are bogus, of course, and i think that's the root of the problem.
rob
-
John
I had used defaults, but I decided to do the process one more time: I reset StarAlignment to defaults and repeated the process, with the same results.
Mike
-
Rob,
Let me look at that. Here's what I see with my settings (8, 3, 2, 3, 3, 0.3, 0.98, 0.5, 1.0). [That's just my current setting... I've tried quite a few at this point.] It is finding 101 and 80 stars in the two regions, with 46 matches. It finds the angular alignment and then it falls apart. I checked, and with the settings I'm using it is seeing only stars. The screen shot shows what it finds. I have something set wrong, because it is ignoring a lot of the stars, so I'll have to figure out why. I'll try your settings to see what it does next...
John
-
Rob,
I tried your settings and it finds a lot more stars. I checked, and again it looks like it is picking up real stars (many are super faint). Here's what it reports before it fails: 446 and 388 stars in each region. That should be enough... yet it fails.
John
-
are you using the same two images that are on dropbox? that's the only thing i can think of that might cause this. or - are your RANSAC settings changed from the default?
rob
-
Rob,
Yes, I'm using the same images that I put on Dropbox, and I'm using the default RANSAC settings. I don't think it should make any difference, but I'm running on a MacBook Pro and I've checked that everything is up to date. This is very strange. I'm going to go off, reboot everything, and fool with it some more. I agree that both you and Mike have made it work, so it's not voodoo. Something is screwy.
Thanks for all your help and advice!
John
-
here's one more data point:
i applied CosmeticCorrection to both images with auto detect - hot sigma 2.1 and cold sigma 1.4 (kind of aggressive, i guess, but only a few thousand pixels were picked up), then opened the two new images. i flipped the 16803 image horizontally, reset the SA interface, and then aligned the 16803 image to the 16200 image without changing anything in SA. it took 15 tries, but it found the solution...
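for reference, that auto-detect step works roughly like this (a toy python sketch with the same sigmas - not CC's actual code):
[code]
import numpy as np
from scipy.ndimage import median_filter

def cosmetic_fix(img, hot_sigma=2.1, cold_sigma=1.4):
    # flag pixels that deviate from their 3x3 local median by more than
    # hot/cold multiples of the global residual deviation, then patch them
    med = median_filter(img, size=3)
    resid = img - med
    sigma = resid.std()
    bad = (resid > hot_sigma * sigma) | (resid < -cold_sigma * sigma)
    return np.where(bad, med, img), int(bad.sum())
[/code]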
rob
-
Rob,
Ah...thanks, that's good to know. I'll try that to see if I can get it to work. Ultimately, I think that I'll have to revisit the image calibration step. I agree that it needs to be cleaner. I generally let dithering and the stacking filter clean up this stuff but that's clearly not going to work in this case.
John
-
I just ran this without any previews and it solved on the 15th try. Sorry for the low font DPI, but I wanted to show the whole process in the Process Console. Open files, mirror horizontal, and just defaults for SA.
Mike
-
Rob,
Bingo! I increased the RANSAC tolerance from 0.0 to 0.6 and it worked on the first try. Now I see that you have the same parameter set to 2.00; that appears to have been the problem. Thanks a million for all of your help! A++.
John
-
so that's what i was saying - 2.0 is the default RANSAC tolerance. it seems you must have some saved process icons that you're loading SA from?
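for anyone following along, the tolerance is essentially the inlier threshold in RANSAC's scoring step - a toy python sketch, not PI's implementation:
[code]
import numpy as np

def count_inliers(model, src, dst, tol=2.0):
    # a matched star pair is an inlier if the candidate transform maps it
    # to within tol pixels of its partner; with tol = 0.0 nothing can
    # qualify, hence the "0 star pair matches found" failure above
    mapped = src @ model[:2, :2].T + model[:2, 2]   # 2x3 affine on (N,2) points
    err = np.linalg.norm(mapped - dst, axis=1)
    return int((err < tol).sum())
[/code]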
something that's weird in all of this is that i could have sworn SA handled mirrored images as well, but it appears that's not the case, at least here.
rob
-
rob
I've had some images work when they are flipped or given other cardinal rotations, and some that don't. I think the noise and/or hot pixels are the probable problem. I just got into the practice of doing a fast rotation (no interpolation) for all affected images in a set to make life easier, especially since I like to use Blink before doing any calibration.
Mike
-
Rob,
I've never touched the RANSAC tolerance parameter, and I thought that zero was the default. I just opened the process and that's how it's set. But now I know...
John
-
Mike,
I've had the same experience. Sometimes it works and sometimes not, so I often do the fast rotation first.
John