How can I use PixelMath to reduce the dynamic range of an image in a precise and linear way?
I have a model that looks very much like a flat, and it is applied as a flat with ImageCalibration to images that have already been calibrated. The model maps the very slight variations in sensor sensitivity that a normal flat cannot capture. This stage is absolutely required in order to extract very faint data out of the target, particularly faint galactic tidal flows and IFN.
I am very close with it; I just need to make a slight linear adjustment to make it work as I want, but my limited knowledge of math and PixelMath means I can't work out how to do this.
The stats of the image (model) are:
mean 0.1532721
median 0.1531901
stdDev 0.0005209
avgDev 0.0004960
MAD 0.0004876
minimum 0.1508893
maximum 0.1558288
Using the max and min from the stats:
0.1558288 - 0.1508893 = 0.0049395
I want to (for example) reduce the dynamic range by 10%, keeping the maximum at 0.1558288 and raising the minimum to 0.1513833 (so the range is reduced by 10%), with the rest of the data kept linear between those points.
I may also want to increase the range rather than reduce it.
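The closest I have got is the mapping new = max - (max - old) * k, where k = 0.9 would reduce the range by 10% and k > 1 would increase it, with the maximum held fixed. My rough attempt in PixelMath looks like this (the symbol names k and hi are just my own choices, with hi being the maximum taken from the Statistics values quoted above, so they would need updating for the full-size image), but I am not sure it is correct:

Symbols:
k = 0.9, hi = 0.1558288

RGB/K expression:
hi - (hi - $T)*k

If I have the arithmetic right, the old minimum 0.1508893 maps to 0.1513833 and the maximum stays at 0.1558288, matching the example above, and setting k = 1.1 would instead stretch the range by 10%.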
I have uploaded a bin4x4 version of the model to use:
http://www.mikeoates.org/pi/nir_special_flat_bin4_20170319.xisf

Can anyone help?
Thank you,
Mike