Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - STEVE333

Pages: [1] 2 3
General / Starnet Problem
« on: 2019 September 30 16:09:00 »
I've been using Starnet successfully as a PI Process for some time.

However, when trying to run it today (9/30/2019) I get the error message "Checkpoint file not found!" and the process doesn't run.

Any ideas?


General / Can't activate 1.8.6
« on: 2019 June 24 21:58:44 »
I've looked everywhere I know and can't find the info.

I've tried all the old codes I have, but I can't get any new activation codes.

Help. I'm 77, not a young computer engineer. Reactivation is hard!!!


Tutorials and Processing Examples / MaskGen script
« on: 2019 June 22 11:40:21 »
I've tried running the MaskGen script, but without success. I just get an error message, and the MaskGen window never opens.

Anyone know where I can find some instructions on how to use this script?

Thanks for any help you can provide.


I've had some very nice success using the new Starnet++ program to remove stars from a stretched image. The following has worked for me:

I downloaded Starnet++ from the link below which was provided by Geethq.

I found the following works for Windows computers:

1) After downloading, the starnet++ files will be located in a folder named starnet. Don't forget to extract the files.
2) The program only works on 16-bit unsigned .tif files.
3) Save the file you wish to remove the stars from into the starnet folder (for example, FlamingStar.tif) in 16-bit unsigned format.
4) Open the Command Prompt.
- I typed capital C in the search box in the lower left corner of the Windows 10 home screen.
- In the popup window click on Command Prompt.
- In Command Prompt, type cd followed by the path to the starnet folder (e.g., cd \Documents\starnet ), then hit Enter.
- Type starnet++ OriginalFileName.tif StarlessFileName.tif where OriginalFileName is the name of the file you want to process.
EXAMPLE: starnet++ myimage.tif mystarlessimage.tif (notice that there are spaces between starnet++, myimage.tif, and mystarlessimage.tif).
- Once the filenames are typed in, hit ENTER and wait until the processing is complete. Progress is shown as % done.

When the Command Prompt says Done, the resulting starless image will be located in the starnet folder.
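
For anyone who prefers scripting the steps above rather than typing them each time, here is a minimal Python sketch. It builds the exact command described in the post; the executable name starnet++.exe and the folder path are assumptions for illustration, so the run is guarded behind an existence check.

```python
import subprocess
from pathlib import Path

def starnet_command(original, starless):
    """Build the command exactly as typed into the Command Prompt above."""
    return ["starnet++", original, starless]

def run_starnet(starnet_dir, original, starless):
    cmd = starnet_command(original, starless)
    # Run from inside the starnet folder, as in step 4. The executable
    # name "starnet++.exe" is an assumption; the guard keeps this sketch
    # harmless on machines where it isn't installed.
    if (Path(starnet_dir) / "starnet++.exe").exists():
        subprocess.run(cmd, cwd=starnet_dir, check=True)
    return cmd

print(starnet_command("myimage.tif", "mystarlessimage.tif"))
```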

Attached are examples showing an original image and the starless version produced by starnet++.

Using PixInsight, I typically do sharpening (MLT), contrast enhancement (LHE), color saturation boosting with CurvesTransformation, and noise reduction (TGV) on the starless image. Your processing software and enhancement choices may be different.

Once the starless image is "enhanced" I use PixelMath to combine the starless and original image with the simple expression

max(original, starless)
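
For anyone combining outside PixInsight, that PixelMath expression is just a per-pixel maximum. A minimal numpy sketch (the array values are illustrative, standing in for images scaled to [0, 1]):

```python
import numpy as np

# Illustrative stand-ins for the stretched original and the enhanced
# starless image, both scaled to [0, 1] as in PixInsight.
original = np.array([[0.9, 0.2], [0.1, 0.4]])
starless = np.array([[0.3, 0.25], [0.15, 0.35]])

# PixelMath's max(original, starless): stars from the original punch
# through wherever they are brighter than the starless background.
combined = np.maximum(original, starless)
print(combined)
```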

Hope this makes sense.


I have a Canon T3 DSLR camera. I use it with one of three different filters (to help with LP): (1) IDAS LPS-D1 for Galaxies, or (2) Triad Tri-Band filter, or (3) STC Duo-Narrowband filter for Nebulae.

I use an LED Tracing Pad as the light source for creating Flats. Some T-Shirt material is stretched across a Needlepoint Hoop to keep it free of wrinkles. The hoop is placed over the front of the Dew Shield and then the whole assembly is pushed flat against the LED Pad. The setup is shown in the first attached picture.

With each of the three filters the R channel tends to have a significantly lower signal than the G or B channels (because of the filters and/or the LED Pad). My goal is to have all channels with signal levels between about 50% and 90% to ensure good signal-to-noise and good linearity.

I check the signal level for each channel by taking a Raw image and then loading it into PixInsight and looking at the image with HistogramTransformation (I'm sure other programs have similar tools). This presents a histogram showing the R/G/B levels in the image.
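
The same 50%-90% check can be done numerically instead of by eye. A sketch, assuming each flat-frame channel is loaded as a numpy array scaled to [0, 1] (the synthetic values here are illustrative, not real data):

```python
import numpy as np

def channel_fill(channel):
    """Median signal level of a flat-frame channel, as a fraction of full scale."""
    return float(np.median(channel))

# Synthetic channels standing in for R/G/B data from a flat frame.
r = np.full((4, 4), 0.45)
g = np.full((4, 4), 0.80)
b = np.full((4, 4), 0.70)

for name, ch in [("R", r), ("G", g), ("B", b)]:
    level = channel_fill(ch)
    ok = 0.50 <= level <= 0.90
    print(f"{name}: {level:.0%} {'OK' if ok else 'adjust exposure/attenuation'}")
```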

The second picture shows three Histograms: The leftmost histogram is for the setup shown in the first picture. Because the Red channel is too weak compared to the G/B channels I put a layer of Red cellophane between the LED Pad and the Needlepoint Hoop. I used Red Cellophane to attenuate the G & B channels more than the R channel. The middle histogram is for one layer of red cellophane and the rightmost histogram is for two layers of red cellophane. Notice how the cellophane tends to bring the G/B channels closer to the R channel.

The third picture shows the histogram for the setup with two layers of red cellophane and an exposure time of 0.6 sec. The Red channel still isn't up to 50%, but it's close, and that is as far as I went (I was getting concerned about possible multiple reflections with too many layers of cellophane).

The Flats have worked very well.

I hope this is useful for someone else.


General / Cursor Readout Preview Disappears
« on: 2019 March 13 09:35:47 »
Occasionally when working with PI the Cursor Readout Preview will somehow get turned off. It's a bit of a pain to have to navigate several button pushes to turn it back on. I use the preview often to read the RGB values.

Any ideas how to prevent this from happening?

I'm working on an HP laptop running Windows 10 with an i7 processor.


General / PI won't restart.
« on: 2019 March 13 08:43:48 »
Sometimes (not always) when I shut down PI it will appear to shut down, but it is still running in the background. It doesn't show up on the taskbar at the bottom of my Windows 10 screen, but if I open Task Manager, PI is still running. If I try to restart PI it won't open. First I need to use Task Manager to "End" PI; then it can be started again.

Any ideas on how to get PI to fully shut down each time?


General / image width function in PixelMath
« on: 2019 February 28 19:05:06 »
Is there a function in PixelMath that finds the width of an image?

If yes, how is it used?


Oftentimes, at the end of a long image processing session, I will be happy with the final result except for dark halos around some of the stars. It is frustrating to have to go "way back" to try (not always successfully) to prevent the dark halos while retracing the processing steps. So, I developed this approach to remove dark halos around stars in the final image. The link below is to a tutorial showing how to use this approach.

I know this is a bit of a brute force approach (because it includes the use of the CloneStamp process), but maybe you will find it useful. I've only used it a few times, but so far it is working well.

Comments/suggestions welcome.


General / Question about my Flats approach
« on: 2018 December 01 13:00:44 »
I have a Canon DSLR (modified) and take all images with camera RAW (CR2).
  • After taking many Flat frames I load them into the ImageCalibration process (see first attachment) and calibrate them.
  • I load the calibrated frames into the ImageIntegration process (see second attachment) and integrate them into my MasterFlat.
The problem is that if I DeBayer the MasterFlat it doesn't look anything like the original Flat frames. In fact it looks almost black unless I use the STF to expand it.

Am I doing something(s) wrong?

PS: See my second Post for the second attachment. Sorry.

Thanks in advance for any help.


General / Process Icons won't load
« on: 2018 November 09 08:57:35 »
Can't load my stored Process Icons.
  • I right-click on the workspace.
  • Select Process Icons/Load Process Icons....
  • The list of .xpsm files opens.
  • I double-click on the desired file.
  • The list disappears, but the Icons don't load.
I tried several different .xpsm files but nothing happens.

Has always worked before.

Any ideas?


General / How many dark frames?
« on: 2018 November 06 12:44:12 »
I've seen this question asked many times, and the answers are always somewhat vague. No wonder, because the answer depends on several factors that vary from user to user, such as: amount of sky glow (LP), length of exposure, camera dark current, etc.

I've created an Excel worksheet that takes all of the various factors into account and presents a graph showing the final stacked SNR for 2, 5, 10, 20, and infinite Dark Frames. See the first picture below.

The vertical axis of the graph is the relative SNR for the stacked images from a night's data collection. The horizontal axis is Sub Frame Integration Time (in other words, how long your exposure is for each image). The actual SNR isn't important. What is important is whether your choice of Sub Frame Integration Time and number of Dark Frames is maximizing your SNR.
In the picture below there are two dots on the graph.
  • The red dot indicates that collecting the data using a Sub Frame Exposure Time of 180 sec and processing the data with only 2 Dark Frames would yield a SNR of about 11.
  • If 20 Dark Frames had been used to process the same 180 sec Sub Frames data the light-blue trace shows the SNR would have improved to about 29, a significant improvement.
  • If the data were collected using 540 sec Exposures (rather than 180 sec Exposures) and 20 Dark Frames were used to process the data (the conditions for the blue dot) the SNR would be about 43.

Again, the absolute value of the SNR is not important. What is important is to understand that a higher SNR on the graph means a better final image (lower noise). For example, this allows you to determine whether or not you need to collect more Dark Frames.
The graph in the first image is for my camera/telescope and LP/filter conditions. Your graph will be determined by your camera/telescope/LP etc.

The worksheet (see second image) requires data from 2 Bias Frames, 2 Dark Frames and 2 Light Frames (Light frames MUST be registered to each other) all taken at the same ISO setting on the camera (I used ISO 800 for my Canon T3). The Light and Dark frames need to have the same Exposure Time (I used 180 sec for my data).

Each pair of RAW frames was loaded into PixInsight and subtracted using PixelMath to create a Difference image. The Statistics process was then used to measure the stdDev of the Difference image, with the units in the upper left corner of the Statistics window set to 14-bit to match my camera RAW output. You would change this to match your camera output. The result of doing this for all three sets of images will be: StdDev(Bias1-Bias2), StdDev(Dark1-Dark2) and StdDev(Light1-Light2). These values are entered into the Excel worksheet.
In addition you need to enter:
  • Total time to collect data = the length of time you expect to be collecting data [time from the start of the first image till the end of the last image] (min).
  • t = the exposure time for the Dark and Light frames used to measure the StdDev.
  • Dither and Download time = the average time between the end of one image and the start of the next. Because of dithering and settling time there are about 60 seconds between my images.
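
One step worth spelling out: the StdDev of a difference image is not the single-frame noise. Subtracting two matched frames cancels the fixed pattern, and the two frames' independent random noises add in quadrature, so the difference is sqrt(2) larger than one frame. A minimal sketch of that conversion (the StdDev values below are illustrative, not real measurements):

```python
import math

def single_frame_noise(stddev_diff):
    """Noise of one frame, given StdDev(frame1 - frame2).

    Independent noise adds in quadrature, so
    StdDev(diff) = sqrt(2) * single-frame noise.
    """
    return stddev_diff / math.sqrt(2)

# Illustrative difference-image StdDevs in ADU (14-bit scale).
bias_noise  = single_frame_noise(22.3)
dark_noise  = single_frame_noise(24.1)
light_noise = single_frame_noise(55.0)
print(round(bias_noise, 2), round(dark_noise, 2), round(light_noise, 2))
```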

If anyone is interested in the Excel worksheet just contact me with a PM and I'll be glad to send it to you.

Comments welcome.


General / DSLR Noise Calculations using PixInsight: Part 2
« on: 2018 October 27 14:38:13 »
In my last post there was a lot of discussion regarding whether Signal-to-Noise Ratio could be calculated using ADU counts or if it had to be calculated using e- (electrons). This is especially important for Canon DSLR cameras (and maybe other DSLRs) because Canon subtracts signal from the RAW data before outputting the "RAW" image. The amount subtracted appears to vary with the length of the exposure. This makes it difficult to measure the camera "Gain" expressed as e-/ADU. If the camera "Gain" isn't known, then it isn't possible to do the noise calculations using electrons.

Although Canon alters the counts, it doesn't appear to alter the noise (at least not the larger noise sources). This means that noise measurements are still valid even though the absolute value of the signals isn't trustworthy (because of the variable offset).

In this analysis noise is measured as follows:
  • Using PixelMath subtract two similar images (Bias for example) to create a Difference image. I use the following expression in PixelMath {(Image1 + 0.1) - Image2} to prevent negative values that will be clipped.
  • Measure the statistics of the Difference image using the Statistics process, with the range set to the bit level appropriate for your camera (14-bit for the Canon 1100D).
  • The StdDev is the noise measured in ADU units.
  • There is a StdDev value for each of the R/G/B channels.
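
The three bullets above can be sketched end-to-end with numpy arrays standing in for the two frames. The 0.1 pedestal mirrors the PixelMath expression, and the data are synthetic (a shared fixed pattern plus independent Gaussian noise), so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "bias" frames: identical fixed pattern, independent noise,
# scaled to [0, 1] as PixInsight stores images internally.
pattern = rng.uniform(0.05, 0.06, size=(256, 256))
frame1 = pattern + rng.normal(0, 0.001, size=pattern.shape)
frame2 = pattern + rng.normal(0, 0.001, size=pattern.shape)

# PixelMath-style difference with a 0.1 pedestal so nothing clips below 0.
diff = (frame1 + 0.1) - frame2

# StdDev of the difference expressed in 14-bit ADU, matching the
# Statistics window setting described in the post.
noise_adu = float(np.std(diff)) * (2**14 - 1)
print(round(noise_adu, 1))  # sqrt(2) times the single-frame noise
```
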
Using this same technique for measuring noise, two images were taken at several different exposure settings while looking at a uniform (light box) target. The second attached image shows a plot of the noise at each exposure setting (vertical axis) vs the SQRT[Exposure Time] (horizontal axis). The result should be a straight line (for each color channel) if the noise is behaving as expected. The straight lines for all three channels show the desired characteristic.

With this background, the second attached image presents a noise analysis showing that the SNR will be the same whether the calculations are done using e- or ADU.

BAD NEWS USING ONLY ADU: The calculated Read Noise, Dark Current, and SkyGlow Current values will not be correct.

GOOD NEWS USING ONLY ADU: The calculated SNR values are valid. This means that the optimum SubFrame exposure can be calculated without knowledge of the e-/ADU value for your camera/ISO setting.

I've created an Excel spreadsheet that requires noise measurements from:
  • 2 Bias images
  • 2 Dark images
  • 2 light images (registered to each other)
It also includes the effect of the wasted "Download/Dither time" between images. The third attached image below shows the curve generated by this spreadsheet for my system using an IDAS LPS-V4 filter. It shows that my current SubFrame exposures of 180 sec are achieving only about 87% of the optimum achievable SNR. Of course, star saturation still needs to be taken into account when choosing the exposure.

If anyone is interested in the Excel Spreadsheet just contact me with a PM.

Hope this helps,


General / Calculating DSLR Noise Sources Including LP
« on: 2018 October 17 11:25:25 »
I've just started using an IDAS LPS-V1 filter to help with my high LP (Light Pollution). I image with a Canon T3 and was wondering if the new OPT Triad filter or the Cyclops Duo-Band filter would provide any significant improvements. This started me down the road to calculating the various noise sources in my images to see what effect either of these filters would have on the noise in the collected images.

After having reviewed some of the "noise analysis" articles it became clear that there are three main sources of noise in AP images:
  • Read Noise
  • Dark Current Noise
  • Sky Glow Noise

If anyone is interested I can describe the procedure for calculating these noise sources (using PI) based on 2 Bias images, 2 Dark Frame images, and 2 Light images (Light images must be registered to one another).

The result of the analysis is the following equation:

S/N(t) = SQRT[T/{(RN^2)/t + DCcps + SGcps}]

  • T = total exposure time (sec)
  • RN = Read Noise (counts)
  • t = subframe exposure time, i.e., exposure time for each image (sec)
  • DCcps = Dark Current (counts/sec)
  • SGcps = Sky Glow (counts/sec)

This equation calculates the relative Signal-to-Noise of the integrated image (all images stacked) for a total exposure time of T (sec) when a sub exposure time of t (sec) is used. In other words, if T = 12000 sec (200 min) and t = 600 sec, then the equation will calculate the S/N for the summation of 12000/600 = 60 images.

For my Camera/Filter/Telescope/LP combination the result is (for an arbitrary total exposure time of 200 min) for the Red channel:
S/N(t) = SQRT[12000/{(15.77^2)/t + 1.13 + 34.8}]

A plot of the above equation is shown in the first image below.

One of the benefits of this graph is that it helps determine the proper Sub-Frame exposure time. This graph shows that Sub-Frame exposures of 120 sec or greater will produce the same final stacked result. Thus any exposure of 120 sec or greater would be acceptable for optimum S/N. This is the best technique I know for determining your best subframe exposure time.
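
The Red-channel equation above is easy to evaluate directly. A quick sketch using the numbers from the post (T = 12000 sec, RN = 15.77, DCcps = 1.13, SGcps = 34.8), checking the 120-sec conclusion:

```python
import math

def snr(t, T=12000, RN=15.77, DCcps=1.13, SGcps=34.8):
    """The post's equation: S/N(t) = SQRT[T / {(RN^2)/t + DCcps + SGcps}]."""
    return math.sqrt(T / (RN**2 / t + DCcps + SGcps))

plateau = snr(1e9)  # t -> infinity: read noise contribution vanishes
print(round(snr(120) / plateau, 3))  # 120-sec subs reach ~97% of the plateau
```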

With the above equation I can now determine whether one of the "dual band" filters would provide any improvement in stacked image S/N. The OPT filter will reduce the bandpass for the Ha signal from 18 nm for the LPS-V4 filter to 3 nm. This is a reduction of a factor of 6. This would reduce SGcps by a factor of 6. This changes the above equation to

S/N(t) = SQRT[12000/{(15.77^2)/t + 1.13 + 34.8/6}]

A plot of this equation is shown in the second image below.

Two points are clear:
1) The sub-frame time needs to be increased to at least 420 sec to achieve optimum S/N, and
2) The optimal stacked S/N has increased by about a factor of 2.2! Good news. Looks like the filter will definitely help.
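
The improvement factor follows directly from the two equations. A quick check, reusing the post's numbers and dividing SGcps by 6 for the Triad's narrower Ha passband, comparing each filter at its own workable sub-frame exposure (120 sec vs 420 sec):

```python
import math

def snr(t, SGcps, T=12000, RN=15.77, DCcps=1.13):
    """The post's equation: S/N(t) = SQRT[T / {(RN^2)/t + DCcps + SGcps}]."""
    return math.sqrt(T / (RN**2 / t + DCcps + SGcps))

# LPS-V4 at 120-sec subs vs Triad (SGcps / 6) at 420-sec subs.
improvement = snr(420, 34.8 / 6) / snr(120, 34.8)
print(round(improvement, 2))  # ~2.25, consistent with the post's factor of ~2.2
```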

The same analysis for the Green and Blue channels isn't as spectacular because their spectral passband is only reduced by about a factor of 3. However, the improvement still looks encouraging.

This has been a fun little trip into noise analysis. It has convinced me that either of the new dual-band filters would significantly increase my stacked image S/N.

Also, because the Sky Glow dominates my noise (even with one of the dual-band filters), a cooled camera would not provide any significant reduction in noise because the Dark Current noise is insignificant compared to the Sky Glow noise.

All in all this has been an enlightening adventure.

Comments welcomed.

Thanks for looking.


Hi - I'm thinking about getting the OPT Triad filter to help with my bad LP. I'm currently using an IDAS LPS-V4 filter and having some good success. The Triad filter has narrower spectral bandpasses, so it should further reduce my LP noise.

Has anybody processed any images taken using the OPT Triad filter?

How did the filter work for you?

Thanks in advance for any feedback.

