weirdness stacking hundreds of short subframes

prefetch

I'm wondering if anyone could shed some light on what I'm seeing here.

I've got 280 subframes, each a 10-second exposure, and I'm getting a strange result when I try to stack them.

First, this is what a single frame looks like using STF:

Screen Shot 2020-10-25 at 3.33.57 PM.png


Stacking with all the default values (except linear fit clipping) ends up looking like this under STF:

Screen Shot 2020-10-25 at 3.33.28 PM.png


I have another set of 280 subframes on a different target that yields similar results.

These subframes have only been registered. I wanted to keep things simple while trying to identify the problem, so I didn't calibrate them (but when I do calibrate them, similarly bad results occur).

Manually stretching the stacked result is very difficult. It appears to be a very dim image, and it's hard to get enough contrast to even see it without pixelation showing up:
Screen Shot 2020-10-25 at 3.42.11 PM.png


I would assume that I obviously need to increase my subframe exposure time from 10 seconds to something more reasonable like 180, but in my mind I think, "hey, what's the difference between 10 frames at 180 seconds vs. 180 frames at 10 seconds?"

It just seems like a single frame actually looks better than a stack of 180 frames, which confuses me.

Any insight that could be offered would be appreciated!
 
The normalization might have gone wacky. Did you try setting a 24-bit STF LUT on this integration to see what that looks like?

rob
 
The normalization might have gone wacky. Did you try setting a 24-bit STF LUT on this integration to see what that looks like?

Whoa, I've never thought to try that. It looks like it made a big difference:

Screen Shot 2020-10-25 at 4.51.38 PM.png


I don't understand exactly what the 24-bit STF function is doing here, but it gives me something I think I can work with!
 
It's a quantization problem, which results in posterization. If the integration has a super-small absolute range between the dimmest pixel and the brightest pixel, there just aren't enough levels in an 8-bit LUT to map the image properly: lots and lots of input values get mapped to the same value in the 0-255 range, and thus you see posterization. When you give STF 2^24 levels to work with, it can represent the data to the eye properly.
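Not PixInsight's actual code, but a quick numpy sketch of the effect: push a narrow-range linear image through an 8-bit versus a 24-bit screen-stretch LUT and count how many distinct output levels survive (all names and constants below are made up for illustration).

```python
# Illustration only (not PixInsight's implementation): quantization through
# an 8-bit vs a 24-bit LUT on an image with a very small absolute range.
import numpy as np

rng = np.random.default_rng(0)
# Simulated integration: all pixel values squeezed between ~0.0950 and ~0.0958
img = 0.0950 + 0.0008 * rng.random((512, 512))

def screen_stretch(image, lut_bits, gamma=0.2):
    """Apply a simple power-law stretch through a LUT with 2**lut_bits entries."""
    levels = 2 ** lut_bits
    # Indexing into the LUT is where the quantization happens
    idx = np.clip((image * (levels - 1)).astype(np.int64), 0, levels - 1)
    lut = np.linspace(0.0, 1.0, levels, dtype=np.float32) ** np.float32(gamma)
    return lut[idx]

for bits in (8, 24):
    out = screen_stretch(img, bits)
    print(f"{bits:2d}-bit LUT -> {np.unique(out).size} distinct display values")
# The 8-bit LUT collapses the whole range onto one or two levels (posterization);
# the 24-bit LUT leaves thousands of distinct levels for the eye.
```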
 
It's a quantization problem, which results in posterization. If the integration has a super-small absolute range between the dimmest pixel and the brightest pixel, there just aren't enough levels in an 8-bit LUT to map the image properly: lots and lots of input values get mapped to the same value in the 0-255 range, and thus you see posterization. When you give STF 2^24 levels to work with, it can represent the data to the eye properly.

Okay, that's probably the natural result of having very short exposure times: just not enough time to absorb many photons. I was just playing around with a "lucky imaging" type technique on a night of particularly bad seeing, so I tried to keep the exposures very short.
 
Integration of these very low-signal frames has produced a normalised image with a very low mean value, which includes a bias offset, so the actual integrated signal sits in the lower significant bits. A bit of offset subtraction and non-linear scaling with PixelMath can put the image back in a range that doesn't need the 24-bit LUT option:
1603670575457.png
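(The expression itself is in the screenshot above. As a rough numpy stand-in for that kind of operation, subtracting a pedestal and applying a nonlinear rescale might look like the sketch below; the percentile and gamma values are placeholders, not the ones actually used.)

```python
# Rough stand-in (not the actual expression in the screenshot): subtract a
# bias/pedestal estimate, rescale the tiny remaining range to [0,1], then
# apply a simple nonlinear (gamma) scaling. Percentile and gamma are placeholders.
import numpy as np

def pedestal_and_stretch(img, gamma=0.25):
    img = np.asarray(img, dtype=np.float64)
    pedestal = np.percentile(img, 0.1)          # crude estimate of the offset
    rescaled = np.clip(img - pedestal, 0.0, None)
    rescaled /= rescaled.max()                  # spread the residual signal over [0,1]
    return rescaled ** gamma                    # nonlinear scaling; no 24-bit LUT needed to view it
```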

With a bit of further histogram adjustment I got this:
1603670890217.png
 

Well, I mean, this is what the 24-bit STF LUT is for. Once you stretch the image, whether via PixelMath, HT, or any other stretching process, you won't need the 24-bit STF LUT (or maybe no STF at all, depending).
 
Thanks, fredvanner and pfile. I think with PixelMath and/or the 24-bit STF I should be able to get this data processed.

Thank you. Much appreciated! :)
 
I wrote this tutorial on 24-bit STFs 8 years ago, when I first implemented them in PixInsight:


Here I describe look-up tables, screen transfer functions and posterization in some detail.
 
A bit of offset subtraction and non-linear scaling with PixelMath can put the image back in a range that doesn't need the 24-bit LUT option

But this is a nonlinear transformation. The purpose of 24-bit STFs is to allow working with these problematic linear images directly, i.e. while they are linear. You need linear data to apply many algorithms such as deconvolution, PSF modeling, multiscale transformations (which work much better with linear data), etc. Note that there is no way to 'put back' linear data in any range, since raw data are linear.
 
But this is a nonlinear transformation

I understand this. I was proposing the PM transform simply for visualisation. However, on further thought, I see this is no great advance. You could achieve a similar effect by applying the STF transform (or any other appropriate non-linear stretch transform) using PixelMath (i.e. calculating explicitly pixel by pixel), but that would not have the speed advantage of the STF lookup.
A naive calculation suggests that fully populating a 24-bit LUT is about the same effort as directly calculating a 4k x 4k image, so the real savings come only if the same STF is applied more than once. (Populating on demand might be more efficient, if the cost of testing does not outweigh the efficiency of blind complete population.)
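As a quick sanity check on that arithmetic (my numbers, not anything from PixInsight itself):

```python
# A full 24-bit LUT has 2**24 entries; a 4k x 4k image has 4096*4096 pixels.
# Both come to 16,777,216 function evaluations, so building the LUT only pays
# off once the same STF is reused.
print(2**24, 4096 * 4096, 2**24 == 4096 * 4096)   # 16777216 16777216 True
```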
 