Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - pfile

Pages: 1 [2] 3 4 ... 270
16
General / Re: Misaligned stars??
« on: 2019 May 05 21:25:31 »
any reducer in the mix? is this an OSC camera?

rob

17
General / Re: Misaligned stars??
« on: 2019 May 05 19:54:44 »
there are a lot of ways the particular optics of your telescope can cause this.

what kind of telescope are you using, and is this at one of the edges of the frame, or on axis?

rob

18
your 2nd solution is good, it's a known technique (find sample points on a clean image, then apply them to the image needing extraction)

neither ABE nor DBE can add noise to the image. but since the DBE/ABE image is missing a (somewhat) constant offset, when you compute the STF for it, the stretch is naturally more aggressive and reveals more of the noise that was already in the image. also, the sky signal that was removed carried a certain amount of noise of its own, but DBE and ABE subtract only a smoothed version of the sky signal, which leaves the sky noise hanging around.

you can prove this to yourself by computing the STF for the pre-DBE'd image and saving it, then after DBE/ABE, apply that same STF to the new image and see what it looks like vs. re-doing STF on the image. you'll see that the image with the old STF does not seem as noisy as the post-DBE STF'd image.
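
this is also easy to demonstrate with a toy numpy sketch (not PI code - the 1-D "image", the gradient, and the noise levels are all invented): subtracting a smooth background model leaves the per-pixel noise untouched, but shrinks the range an auto-stretch has to map to the screen.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D "image": faint object + smooth sky gradient + per-pixel noise
x = np.linspace(0.0, 1.0, 1000)
signal = 0.02 * np.exp(-((x - 0.5) ** 2) / 0.001)  # faint object
sky = 0.30 + 0.20 * x                              # smooth gradient
noise = rng.normal(0.0, 0.01, x.size)
image = signal + sky + noise

# a DBE/ABE-style model is smooth: it tracks the sky, not the noise
model = 0.30 + 0.20 * x
subtracted = image - model

# the noise level is unchanged by the subtraction...
print(np.std(image - sky - signal))   # noise before
print(np.std(subtracted - signal))    # noise after (identical)

# ...but the subtracted image spans a much smaller range, so an
# auto-stretch maps the same noise onto more of the display range
print(np.ptp(image), np.ptp(subtracted))
```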

rob

19
you can upload your masters to google drive or dropbox (or similar) and post a link here.

in the meantime i wonder if all of your subs are good as the gradient you have does not seem like light pollution. did you blink thru your calibrated subs to see how they all look?

rob

20
General / Re: L + L from RGB
« on: 2019 April 25 10:25:49 »
i'm not sure about integrating RGB and NB together - the filter bandwidths are really quite different. i assume your star color RGB images are pretty short integrations compared with the NB? that would also be a big SNR mismatch.

as for adding in Ha to the L, i wonder if the SNR in the dimmer parts of the Ha image could compromise the SNR of the pseudo L.

maybe others can comment on these topics, i don't have a good handle on whether it's a good idea or not. my instinct tells me no, but i suppose it depends on the relative quality of the NB/RGB data.

rob

21
that's a pretty good result under those conditions!

rob

22
General / Re: L + L from RGB
« on: 2019 April 24 16:58:59 »
so for my part i have stopped bothering with the L filter, because i live in an extremely high-LP area and i find the L is just so full of gradients and other nastiness that the result is very difficult to work with, requiring at the least some heroic DBE. so if i do an RGB image, i just use the RGB filters and make a pseudo-L as we are discussing; there's nothing to combine on the L side since i have no L subs. although LP sources are becoming more and more broad-spectrum, i feel there is still some worth in the gaps the astrodon G2E filter set leaves around the sodium-vapor lines. and i do find that processing the L separately from the RGB is worthwhile, so i make the pseudo-L from the RGB masters and proceed from there.

anyway, as you point out, by default ImageIntegration analyzes the noise in the input images and weights the images based on that analysis. so even if you have done a linear fit, that will be undone by the noise weighting in II. so IMO you might as well skip the LF step. juan has recommended the noise-weighted average in the past for production of pseudo-L, since it in theory results in the highest SNR "L" image possible from your RGB data.
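
the effect of noise weighting can be sketched with a toy inverse-variance average in numpy (the scene, channels, and noise levels are invented; this shows the weighting idea, not II's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy R/G/B "masters": same underlying scene, different noise levels
scene = rng.uniform(0.1, 0.9, (64, 64))
sigmas = {"R": 0.02, "G": 0.01, "B": 0.03}
masters = {c: scene + rng.normal(0.0, s, scene.shape) for c, s in sigmas.items()}

# inverse-variance weights: the quieter a channel, the more it counts
weights = {c: 1.0 / s ** 2 for c, s in sigmas.items()}
wsum = sum(weights.values())
pseudo_L = sum(weights[c] * masters[c] for c in masters) / wsum

# the weighted average is less noisy than any single channel
for c in masters:
    print(c, np.std(masters[c] - scene))
print("pseudo-L", np.std(pseudo_L - scene))
```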

i think if you have real L subs then you can create a master L frame from them and then integrate that master with the master pseudo-L. II wants a minimum of 3 images, so what you can do is just add the pseudo-L master twice and the L master twice, and then again do the noise-weighted integration.

the "mostly L and a few (possibly binned) RGB" probably works great if you have dark skies and can get extremely clean L images. since that doesn't work for me, i'm in the "all RGB" camp.

rob

23
there's no image analysis going on; whatever is in the FOV in the selected database gets annotated...

you just need more integration time, maybe then you can see those galaxies

rob

24
General / Re: L + L from RGB
« on: 2019 April 23 21:19:13 »
instead of ChannelExtraction i just integrate the R/G/B masters together to form a pseudo-L.

i think the problem with L* is that it is a perceptual luminance so at the very least you'd want to set all the RGB weights to 1 and the gamma to 1 in RGBWorkingSpace (and apply it to the RGB image) before you extracted L*. otherwise green will be over-represented in the L*.

anyway if you are using an OSC and have an RGB image, you could still split it into its constituent R/G/B images and then just integrate them together to get a pseudo L, then integrate the pseudo L with the real L.

rob

25
General / Re: Dark frames are not correctly subtracted
« on: 2019 April 22 10:25:51 »
yes, i should have linked to those threads. one poster even said that from camera boot to boot the starburst can be different, which, if true, makes taking darks a little like flats - you would have to do it before or after each imaging session.

if you don't scale your darks, bias frames are not necessary. in order to scale the dark, the bias signal must first be removed from the dark (since the bias signal does not change with frame duration.)

scaling darks, even if they match the duration of the lights, may still be beneficial. what PI does is iteratively scale the dark and trial-subtract it from the light until the noise in the calibrated light is minimized; it does this over a small preview of the light. however, when there are sensor artifacts like amp glows or this starburst that are independent of the dark frame duration, you just can't use dark optimization, as the artifacts are invariably under-subtracted from the lights...
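
the optimization loop can be sketched in numpy (toy frames and a simple grid search stand in for PI's actual solver; `k_true` is the invented scale between the dark signal in the light and the master dark):

```python
import numpy as np

rng = np.random.default_rng(2)

# toy frames: the dark signal is present in the light, scaled by k_true
dark_signal = rng.uniform(0.0, 0.05, 10000)
k_true = 0.8
light = 0.2 + k_true * dark_signal + rng.normal(0.0, 0.005, dark_signal.size)
dark = dark_signal  # bias already removed, as described above

# trial-subtract the scaled dark over a grid of factors, keeping the
# one that minimizes the noise (std) of the calibrated result
ks = np.linspace(0.0, 2.0, 201)
noise = [np.std(light - k * dark) for k in ks]
k_best = ks[int(np.argmin(noise))]
print(k_best)  # lands close to k_true
```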

rob

26
General / Re: Dark frames are not correctly subtracted
« on: 2019 April 21 13:59:50 »
well when you have sensor artifacts like amp glows or if you are using a CCD with RBI preflash turned on, you can't use dark optimization. there's no way around that.

i don't use BPP regularly so someone else would have to chime in, but i think if you added some flat darks, BPP would try to use those to calibrate the flats. i think BPP does try to work out the durations of the various darks you've loaded and apply them intelligently. but i'm not 100% sure.

rob

27
it's not a dumb question - DSLRs can be tricky.

i guess this all depends on how you have the RAW file handler module configured. if the image was debayered by the RAW module, then what you did is likely correct. however if the RAW file handler was set to open the CR2 file in raw mode, then the image was never debayered. check if the greyscale image seems to have a "screen door" look to it. if not, you are OK.
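
if you want a quick numeric check instead of eyeballing, here's a toy numpy sketch of the "screen door" test (the RGGB frame is invented): in an undebayered CFA image, the four positions of each 2x2 Bayer cell have noticeably different mean levels.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy mono frame that is secretly an undebayered RGGB mosaic:
# the two green sites per 2x2 cell are brighter than red/blue sites
frame = rng.normal(0.30, 0.01, (64, 64))
frame[0::2, 1::2] += 0.10  # G1 sites
frame[1::2, 0::2] += 0.10  # G2 sites

# compare the mean of each of the four Bayer positions; a large spread
# suggests the "screen door" of an image that was never debayered
means = [frame[i::2, j::2].mean() for i in (0, 1) for j in (0, 1)]
print(max(means) - min(means))  # large -> probably not debayered
```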

to do a batch process in PI you use the ImageContainer process, which is at the bottom of the Process menu. you load the input files into the ImageContainer gui and optionally set the output file name template and output directory. then you drag the triangle of ImageContainer to the desktop, creating a process icon.

ordinarily you'd configure your process to do what you want it to do, and then drag the triangle of that process to the desktop, producing another process icon. then you drag that process icon onto the ImageContainer process icon and away it goes.

however in this case the Image > Color Spaces > Convert to greyscale has no gui, so what you need to do is apply it to some random image open on your desktop, and then right-click the image and select "load history explorer". then, if you open the history explorer from the tab on the lower left of the screen, you'll see ConvertToGreyscale as the last step in the process history. you can click on the little icon in there and drag it to the desktop, then do what i described above. this trick works generally for all the PI menu options that have no GUI (invert, etc.)

having said all that, you can do astrometry from within PI. if you just use the ImageSolver script (after giving appropriate scale and location hints) your image will be annotated with WCS coordinates. the latest version of PI has a readout cursor which will read RA/DEC after an image has been solved this way.

rob



28
it may or may not - it depends on whether this particular sensor or camera downloads the image into RAM before it sends it to the USB interface.

if the imager or camera has some kind of RAM buffer, the cmos readout speed is probably the same regardless of usb2/usb3.

rob

29
General / Re: Dark File Woes from Altair Hypercam 183CProTec
« on: 2019 April 17 22:29:50 »
some cmos sensors do exhibit this weird starburst thing at the edge of the field - if it's in the darks that's what it is.

but there's no reason why it should not calibrate out, so in this case maybe it is a flare? impossible to know without seeing the darks.

rob

30
think about what bias and darks are and you will have your answer. how can an image taken where no light is allowed to reach the sensor be dependent on the optical train?

rob
