Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - MikeOates

General / Batch Process MergeCFA ?
« on: 2019 July 02 06:04:16 »
I am processing individual CFA layers after using SplitCFA. But how can I combine them again, given that MergeCFA does not allow batch mode and it takes an awfully long time to merge each sub separately? Note: I am not just putting the same CFA layers back in, but changing them about a bit.

For example, using a DSLR with an OIII filter, the red channel is just noise, while both the green and blue channels have good signal. I want to be able to copy the blue channel to the red, then proceed with Debayer, StarAlignment and ImageIntegration. If this is done before debayering, the noise in the red channel is not mixed in with the data.
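To make clear what I mean by the channel swap, here is a rough NumPy sketch of the idea (assuming an RGGB Bayer pattern; `copy_blue_to_red` is just an illustrative name, not a PI function):

```python
import numpy as np

def copy_blue_to_red(cfa):
    """cfa: 2-D mosaiced frame, assumed RGGB. Returns a copy in which every
    red CFA site is overwritten by the blue site of the same 2x2 cell."""
    out = cfa.copy()
    # In RGGB, R sits at even rows/even columns and B at odd rows/odd columns.
    out[0::2, 0::2] = cfa[1::2, 1::2]
    return out

frame = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 mosaic
swapped = copy_blue_to_red(frame)
```

The green sites are untouched, so after this the frame debayers as if the camera had recorded the blue signal in the red channel as well.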



General / Question about Drizzle Integration.
« on: 2018 October 03 04:51:10 »
Let's say I have integrated 200 subs, then used drizzle integration, but I decide later to reject a few of the subs. Do I have to re-run ImageIntegration, or can I just re-do the drizzle and omit the rejected subs?

I ask because I don't know how reliant drizzle is on a previous run of integration. Skipping the integration would just be a time saver, especially if there are a few hundred subs to re-do.


General / Software binning in PI
« on: 2017 November 09 01:51:49 »
This subject was brought up in the Off-topic section six years ago (not the ideal place for this question, as it does not get seen much). Juan replied with a great PixelMath expression which works well, except that it does not behave properly with ImageContainer: the file it saves to disk is the wrong size, i.e. the size is not reduced. It also creates files on the desktop using the default names image01, image02, etc. These are the correct files, but you have to do a Save As for each one.

When I have a batch of these files to process, I have to do each one separately and then do a Save As. This takes a lot of time and, being a human process, I keep making mistakes!

Can this be done by some form of batch process / script? (Note: I can't write scripts or I would have a go.)


PS: Here is Juan's post to make it easier.


Yes, Geometry > IntegerResample is what you are looking for. The default parameters (downsample/average) perform a 2x2 binning.

There are only Average, Median, Maximum or Minimum as operations, but not Add (Sum).

Average and add (sum) are equivalent in terms of SNR increment. The advantage of average is that you can't get out-of-range pixel values.

If you really want to perform a 'pure' 2x2 binning by summing 4-pixel blocks, this is very easy with PixelMath:

- In the 'Symbols' field type the following:

Code:
x2, y2
This allows PixelMath to use x2 and y2 as variable names instead of as image identifiers.

- Click the Edit button to the right of the RGB/K expression field and write this PixelMath expression:

Code:
x2 = 2*x();
y2 = 2*y();
pixel( $T, x2, y2 ) + pixel( $T, x2+1, y2 ) + pixel( $T, x2, y2+1 ) + pixel( $T, x2+1, y2+1 )

The expression above computes the sum of each contiguous block of 2x2 pixels in the source image.

- Uncheck the 'Rescale result' PixelMath option.

- Open the Destination section and select the following:

* Create new image: checked
* Image id: the identifier you want; for example: binned2x2
* Image width and Image height: the dimensions of your original image halved. If your image is 4096x4096, you should enter 2048 and 2048 here.
* Color space: same as target
* Sample format: 32-bit floating point (recommended)

Execute this PixelMath instance on your unbinned image and you'll get a binned result as a new image window. As I've said above, the result is the same in SNR terms as if you had used IntegerResample in average downsampling mode, but the range of values is different. With a pure binning procedure, if a 2x2 block in your original image sums to more than one, the resulting binned pixel will be saturated.
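For reference, Juan's expression maps directly onto array slicing; here is an equivalent NumPy sketch (not PixelMath) that shows both the 2x2 summation and why the sums can leave the [0,1] range:

```python
import numpy as np

def bin2x2_sum(img):
    """Sum each non-overlapping 2x2 block ('pure' binning, no rescaling).
    Assumes even dimensions, as the PixelMath expression above does."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.full((4, 4), 0.3)    # toy frame with every pixel at 0.3
binned = bin2x2_sum(img)      # every output pixel is 1.2
```

Each output pixel here is 1.2, above the normalized maximum of 1.0, which is exactly the saturation case described above, and why 'Rescale result' must stay unchecked only if you accept possible clipping.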

General / Arcsinh Stretch - I seem to have the wrong version!
« on: 2017 October 20 15:24:24 »
I just watched part of Warren's October 2017 Online Workshop where Ron Brecher shows the new Arcsinh Stretch, and I realised that it is not the same version as I have, even though I have all the updates. Also, Arcsinh Stretch is not listed in the Process list under IntensityTransformations; it appears under <Etc> as Arcsinh.

How do I get the right version installed?

I am using V Ripley (x64) in Windows 10

Screenshot of my version attached.



General / StarAlignment Interpolation method?
« on: 2017 April 27 07:04:10 »
If I am going to drizzle after ImageIntegration, does it matter at all what Pixel Interpolation I select in StarAlignment?

I ask because if I set it to Auto and differently binned images are being aligned, the result is ringing around the stars. I know that selecting Bicubic B-Spline removes that ringing, but I also know that drizzle will remove the ringing as well.

So is it best just to leave it set to Auto if drizzle is the aim? Or is there any advantage in using another interpolation method?



How can I use PixelMath to reduce the dynamic range of an image in a precise and linear way?

I have a model; it looks very much like a flat and is used as a flat, via ImageCalibration, on already-calibrated images. The model maps the very slight variations in sensor sensitivity that a normal flat cannot capture. This stage is absolutely required in order to extract very faint data from the target, particularly faint galactic tidal flows and IFN.

I am very close with it, but I just need to make a slight linear adjustment to make it work as I want. My knowledge of math and PixelMath being what it is, I can't get my head round how to do this.

The stats of the image (model) are:
  mean 0.1532721
  median 0.1531901
  stdDev 0.0005209
  avgDev 0.0004960
  MAD 0.0004876
  minimum 0.1508893
  maximum 0.1558288

Using the max and min from the stats:
0.1558288 - 0.1508893 = 0.0049395

I want to (for example) reduce the dynamic range by 10%, keeping the maximum at 0.1558288 while the minimum is raised to 0.1513833 (so the range shrinks by 10%), with the rest of the data kept linear between those points.

I also may want to increase the range rather than reduce.
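To show the kind of adjustment I mean, here is a rough sketch of a general linear remap (plain Python; `rescale_range` is just an illustrative name, and the same formula should work as a PixelMath expression with the constants entered as Symbols):

```python
def rescale_range(x, old_min, old_max, new_min, new_max):
    """Map x linearly so that old_min -> new_min and old_max -> new_max."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

old_min, old_max = 0.1508893, 0.1558288        # stats from the model above
new_max = old_max                              # keep the top end fixed
new_min = old_max - 0.9 * (old_max - old_min)  # shrink the range by 10%
# Increasing the range instead just means choosing a factor above 1.0.
```

The equivalent PixelMath RGB/K expression would be `new_min + ($T - old_min)*(new_max - new_min)/(old_max - old_min)`, with 'Rescale result' unchecked.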

I have uploaded a bin4x4 version of the model to use:

Can anyone help?

Thank you,



General / Calibrating darks - which method?
« on: 2016 April 26 09:42:49 »
There are two ways of calibrating darks, and I would like to know which is the 'correct' one.

1. Integrate all the darks, then calibrate the master dark with the master bias.
2. Calibrate each dark sub with the master bias then integrate those together.
The calibration in both cases is performed with ImageCalibration.

Each method produces a different master dark.
The left dark is from method 1 and the right from method 2, using the same data: 15 x 30-minute darks. This screenshot shows each dark with an STF applied.

Using statistics set at 16bit:

Left dark:
count (%) 49.74609
count (px) 4570798
mean 13.9
median 10.0
stdDev 130.5
avgDev 11.4
MAD 8.9
minimum 1.0
maximum 63473.0

Right dark:
count (%) 99.93509
count (px) 9182292
mean 12.793
median 9.733
stdDev 92.266
avgDev 8.926
MAD 6.919
minimum 0.067
maximum 63473.000
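To illustrate why the order could matter at all: with a plain average the two orders are mathematically identical, because the mean is linear; a difference can only come from a non-linear step such as pixel rejection or clamping at zero. A toy NumPy sketch (made-up numbers, not real calibration data):

```python
import numpy as np

rng = np.random.default_rng(0)
darks = rng.normal(30.0, 5.0, size=(15, 64))  # 15 toy dark subs, one per row
bias = np.full(64, 25.0)                      # toy constant master bias

# Method 1: integrate (average) the darks, then subtract the master bias.
m1 = darks.mean(axis=0) - bias
# Method 2: subtract the master bias from each sub, then average.
m2 = (darks - bias).mean(axis=0)

same = np.allclose(m1, m2)  # True: the mean is linear, so order is irrelevant

# A non-linear step breaks the equivalence. For example, if each calibrated
# sub is clamped at zero before integration (no negative pixels allowed),
# the two orders no longer give the same master dark:
m2_clamped = np.clip(darks - bias, 0.0, None).mean(axis=0)
```

Real integration uses pixel rejection rather than a plain mean, which is another non-linear step, so the two methods producing different masters is expected.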



Bug Reports / PixInsight API Error 37
« on: 2016 January 25 11:10:33 »
I get the following error every time I open a file and then click on the AutoStretch button in STF. I have had this since upgrading to 1195, but I have just not reported it yet. Seeing as no one else is reporting such an error, there must be something wrong with my install. Can you say where the problem may lie?

PixInsight API Error

GetViewPropertyValue(): Low-level API function error
API error code = 37:
Invalid user interface object handle

Right after the message, once I click OK, it performs the STF.

It is the same for both new FITS files from the camera and images already processed in PI that have been saved and reopened as XISF. Once a file has had an STF applied (with the error), any further auto-stretches work fine.

PI (x64)
Windows 10 Pro



General / Star Colour Fringing Problem
« on: 2016 January 12 10:20:43 »
I am having an odd problem. It's not new; I just want to get to the bottom of it so that my image quality can be improved.

The red-filtered images do not register with the green and blue, even though I use just one master frame (in this case a blue image) to register all the differently filtered images against.

The result, when combined into an RGB image, is a colour fringe of red on one side of each star and blue on the other. At first you would think it an optical problem resulting from coma, but it's not: all stars in the image have the colour fringe on the same side, and the fringe does not vary towards the corners as you would expect with coma.

Focusing is also well controlled and I focus through the filter being used, initial focus with a Bahtinov mask then the rest of the session using the autofocus in SGP. Filters are Astrodon.

If I manually blink the three images, the green and blue images line up, but the red shifts by one or two pixels. What is more odd is that the brighter stars seem to move by a larger offset than the fainter ones.

Has anyone come across this issue? Are there settings that need changing in StarAlignment? Or is it an optical problem?

The scope is a Takahashi FSQ-106ED & mono CCD camera.

The way I have 'corrected this' before is to reduce star sizes with a mask and MT, but I really want to sort this out properly.

The RGB image here is a crop made just after combination; I have made it non-linear and enhanced the colour saturation to make the problem more obvious. The three other small crops are one from each filter after StarAlignment.



Has anyone produced a catalogue of M31 Globular Clusters for use in AnnotateImage?

I have a new image of part of M31 and I can see many globular clusters on it by comparing my image with one by Robert Gendler:

But I want to use AnnotateImage to produce my version of this.



General / ExtractWaveletLayers - how are these recombined again?
« on: 2015 December 20 08:46:44 »
How do you recombine the layers produced by the ExtractWaveletLayers script back into the original image? This assumes no processing has been done on those layers.

I was hoping to extract the layers, work on one or more layer then recombine them, or does it not work like that?

I tried adding them in PixelMath with rescale on, but that does not work.
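My understanding is that the extracted layers form an additive decomposition, so a plain pixel-by-pixel sum of all layers plus the large-scale residual should reproduce the original (with rescaling off). A toy NumPy sketch of that idea, using a simple box blur as a stand-in for the real wavelet kernel:

```python
import numpy as np

def smooth(img):
    """Cheap 3x3 box blur with edge replication (illustrative only)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def extract_layers(img, n=4):
    """Additive decomposition: n detail layers plus a final residual.
    Each detail layer is the difference between successive smoothings."""
    layers, current = [], img
    for _ in range(n):
        s = smooth(current)
        layers.append(current - s)  # detail at this scale
        current = s
    layers.append(current)          # large-scale residual
    return layers

img = np.random.default_rng(1).random((16, 16))
layers = extract_layers(img)
recombined = sum(layers)            # plain sum, no rescaling
```

Because the decomposition is a telescoping sum, the recombination is exact; rescaling the sum would be what breaks it.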



General / LinearFit producing odd results
« on: 2015 November 28 05:11:48 »
I have an odd problem using LinearFit.

I have a 3-panel mosaic (landscape format), and after I had made the mosaic and combined the mono images to get RGB, one panel was rather green. So I went back to the separately integrated panels and did a LinearFit using the middle panel as the reference image. But the other panels became posterized. See the image below showing the before and after with just STF active.

I also tried the same after cropping the borders off, but that did not help. It was panel 6 that originally had the green cast.

The .xisf  files for all three integrated green panels are here to try with:

Each panel is made from 15 x 600s subs. They were imaged on different nights, so they are bound to differ, hence the need for a LinearFit.

Using PixInsight

I must be doing something stupid, so I hope someone can help!



Which image is the better one, judging just by the noise evaluation? By best, I mean the lowest noise. Visually both images look the same, but later in processing any extra noise may start to degrade the quality.

* Channel #0
sK = 2.734e-004, N = 12625690 (34.35%), J = 4

* Channel #0
sK = 2.474e-004, N = 18730079 (50.96%), J = 4

I thought the lower-noise image was the one with the smaller sK number, but I am confused as to why its N is larger.

The difference between these two images is just the settings used in Drizzle Integration and I want to know which settings to go with.
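For what it's worth, the comparison I am trying to make can be illustrated with a simpler robust scale estimate. This MAD-based proxy is not PixInsight's MRS estimator, just the same kind of "smaller estimate means less noise" comparison:

```python
import numpy as np

def mad_sigma(img):
    """Robust noise proxy: scaled median absolute deviation.
    (Illustrative stand-in, not the MRS noise estimator PI uses.)"""
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

rng = np.random.default_rng(2)
quiet = rng.normal(0.5, 2.5e-4, size=100_000)  # toy "low noise" pixels
noisy = rng.normal(0.5, 2.7e-4, size=100_000)  # toy "higher noise" pixels
# Whichever image yields the smaller estimate is the less noisy one,
# regardless of how many pixels went into each estimate.
```

The point of the toy data: the scale estimate (like sK) measures noise strength per pixel, while the sample count (like N) only says how many pixels the estimator considered noise-dominated.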



Gallery / M101 & NGC 5477 (HaRGB)
« on: 2015 February 15 11:27:45 »
This is my latest image and is of the galaxy M101. All processing was done in PI. For an RGB image taken in the light-polluted skies of Manchester, UK, I am very pleased with the result.

I have two versions, one is a closer crop of the full frame from my Trius SX 814 mono camera with a Takahashi FSQ-106 ED. Full details are on the following links.

Full frame:

Cropped version:

Processing roughly consisted of:
Make synthetic Luminance from all the subs. Process the Lum with Deconvolution, contrast, sharpness and noise reduction performed.
R, G & B frames are made into an RGB image, but I also added the Ha to the red channel before ChannelCombination, as well as using LinearFit. The result was colour calibrated, then processed to reduce noise and increase colour saturation.
L and RGB converted to non-linear before LRGB combination. I did some star reduction with masks and MorphologicalTransformation and then final tweaks to colour.


Wish List / Window focus issue
« on: 2015 February 09 03:54:29 »
I keep getting myself in a mess with one aspect of PI, and I am wondering if I am doing something wrong; if not, I would like to add this to the wish list.

When I press the restore button at the top right of an image window (the up arrow), the image that was minimized to an icon is expanded back to the window size it previously had.

What I then do is perform some process or script, only to find out (sometimes a lot later) that the operation was not carried out on the image I had just restored, but on a previous image that had the focus.

Normally in Windows, when you restore a window, that window receives the focus. Can PI not operate like that?

