Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Greg Schwimer

Pages: [1] 2
Running 1.8.7 on both OSX and Linux (Mint). I'm seeing that the Statistics process misreads an image cropped with DynamicCrop. I can reproduce this on both of my systems, but not on a third system running the previous build (1457) on OSX.

Steps to reproduce:

 - open an image - I'm using a dark sub
 - open statistics, select the check at the lower right - things look OK
 - crop the image - I use dynamic crop, reset the settings, set a width and height of 100x100, and apply with the check
 - the Statistics process shows the pixel count unchanged (it should be 10000), and the pixel % changes to an unlikely number
 - none of the other statistics change as you would expect
 - open pixelmath, select "create new image", and use $T as the expression to copy the above cropped pixels to a new image
 - statistics for the new image appear to be correct

Closing and re-opening the statistics process does not fix this. Saving the cropped original and re-opening it does.
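For reference, the numbers Statistics should report after the crop described above (a trivial sketch in plain Python; assumes no pixel rejection is enabled):

```python
# Expected Statistics readout after cropping to 100x100 pixels.
crop_w, crop_h = 100, 100
count_px = crop_w * crop_h   # the pixel count should become 10000
count_pct = 100.0            # and 100% of pixels counted, assuming no rejection
print(count_px, count_pct)   # 10000 100.0
```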

Anyone else see this?

Kind of a gut-check request here, and maybe a discussion about adding this feature to BPP.

When creating a master bias or dark with BPP from frames that all have the PEDESTAL keyword, the master frames are saved without carrying the PEDESTAL value forward to the created files. Using these masters then results in their misapplication to light frames due to the missing PEDESTAL keywords. In my case this was causing overcorrection of the flats against the lights.

I solved the problem by adding the keywords to the BPP-created bias and dark masters.

Anyone else run into this?

Perhaps a future update of BPP could carry the PEDESTAL values forward from the subs into the resulting masters?
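A minimal sketch of the fix I'm suggesting, in plain Python with dicts standing in for FITS headers (the function name is my own, not a BPP API):

```python
def carry_forward_pedestal(sub_headers, master_header):
    """Copy the PEDESTAL keyword from the calibration subs into the
    master frame's header. Assumes all subs share one pedestal value."""
    pedestals = {h["PEDESTAL"] for h in sub_headers if "PEDESTAL" in h}
    if not pedestals:
        return master_header  # no PEDESTAL on the subs; nothing to do
    if len(pedestals) > 1:
        raise ValueError("subs carry differing PEDESTAL values: %r" % pedestals)
    master_header["PEDESTAL"] = pedestals.pop()
    return master_header

subs = [{"PEDESTAL": 100}, {"PEDESTAL": 100}]
master = {"IMAGETYP": "Master Dark"}
carry_forward_pedestal(subs, master)
print(master["PEDESTAL"])  # 100
```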

Bug Reports / HDRMT does not change image
« on: 2019 June 13 22:02:22 »
Saw this post:

I'm having the same problem. I can reproduce it on both OSX build 1456 and Linux build 1475. The key to reproducing it seems to be enabling median transform in HDRMT.

Update: The problem seems to occur with mono images. It otherwise operates correctly on RGB images.

I have 13 hours on the Horsehead Nebula in Ha and I need some ideas on how to deal with the multiple reflections present in the data. I'm assuming the reflection is from Alnitak (pretty sure actually). Here's what I'm seeing (very stretched):

This is a copy of the master frame:

I haven't put much effort into fixing this yet, but one idea is to use range selection to create a mask, then use curves to match the reflections to the surrounding areas. Another is to use wavelet layer processing to target them. For example, if I use the ExtractWaveletLayers script I can see the reflections quite well. Maybe there's a way to target them that way?

Does anyone have any other ideas on how to handle a problem like this?

General / BackgroundNeutralization causing star core saturation?
« on: 2016 December 08 22:18:47 »
Hi. I'm processing a quick run of subs pointed at M103. I expressly set the exposure for all 3 channels to the longest time I could use in any channel without overexposing any star core. I preprocessed the data, and the resulting RGB image indeed shows that I have not overexposed anything - the B channel maxes out at 0.566.

In trying to help with this post:

... I lost focus, got through my stretch, and noticed that I have saturation in the B channel. I tried a number of things, including the items discussed in the thread above, but ended up tracing the cause back to BackgroundNeutralization. After running BN at the defaults (save for a selection sample to represent the background), the G channel went to a value of 1.0. I can see the star in which this happens. ColorCalibration then shifts this to the B channel, and the G channel is no longer overexposed.

Mind you - this is one star in a field of stars that it did this to, but it's an important star for the image.

Changing the working mode in BN to "Target Background" seems to solve the problem with no apparent other impacts to the image.

Am I finding a corner case here or is this a fairly common occurrence?

General / Histogram "drop" after L combination <- problem or not?
« on: 2016 December 02 08:53:26 »
I have an RGB image and a synthetic luminance that I created by integrating the RGB masters. This was done during the linear phase. After normal initial processing (crop, DBE, BN, CC, etc.) I stretch both the RGB and L masters using STF-based HT stretches. I then use ChannelCombination to combine the L.

The initial RGB histogram looks fine. What I'm scratching my head about is the histogram after the L combination. The peak drops substantially. I've tried various initial processing variations (linear fit to different channels, no linear fit, NR, no NR), and in each case I see a large peak drop. In some cases the drop is great enough that the peak appears almost flat. I've also tried running HDRMT on both the RGB and L images prior to combination - same result.

Am I missing something? Is this normal behavior? Am I doing something wrong?


Bug Reports / Real-time preview hang, won't quit, locks files
« on: 2016 October 04 22:23:53 »
When using the real-time preview with CurvesTransformation, the preview window stopped responding. The "Show transformed image" button at the upper left was spinning for 30 minutes and never stopped. During this time period I received a number of error messages when attempting to rectify the situation, including:

When trying to close the real-time preview window:
The real-time preview is busy - cannot close it right now.

When trying to make a change to the image that the preview was showing:
image: the view is locked for write operations

When attempting to quit PixInsight:
One or more views are currently locked.

Terminating the core application with locked views is a risky situation that denotes incorrect behavior of one or more installed modules. Perhaps you may want to fix the locked state(s) manually, if possible. Exit anyway?

The process console was stuck with the following message:
Code: [Select]
CurvesTransformation: Processing view: lrgb
Writing swap files...
351.764 MiB/s
Curves transformation: done
176.463 ms
oiii: Masking from swap files...

I managed to save a backup of the project file with no problem, after which I closed PixInsight. On closing, the OS indicated PixInsight had crashed and provided the attached dump log.

I have only seen this once and am not sure how to reproduce it.

I discovered this today. It caused a bit of a panic.

In short: if you open the description window of an icon, place it at the center of the PI workspace, and then click File->Save Project while the description window is open, the window will stay on top of the Save Project dialog. This is not usually a problem, as the Save Project dialog is relatively large and can be moved to the side. However, if you click OK to save and the project file already exists, a dialog opens asking if you want to overwrite the existing project. The problem is that the description window blocks access to this dialog, which renders PI unusable.

My panicked experience here:

I managed to get the second dialog to come to the front by opening a different application whose window was smaller than the PI window. With PI out of focus, the second dialog became visible, and clicking on its title bar allowed me to gain access to it and move it. This was the only way I found to recover control of PI.

I'm able to reproduce this reliably.

I use real-time previews frequently. I also like to pin common processes to all workspaces so they're available when I switch during my workflow. It seems that while the process window does indeed become visible on all workspaces, the real-time preview only works on the workspace on which the process was opened, not on any of the others.

For example, on workspace 1 I open CurvesTransformation. I pin it as visible to all workspaces. If I move to workspace 2 to work on something there, and try to use the real-time preview for CT there, I get nothing. I have to move the image that I'm working on to workspace 1 to use the real-time preview, or close CT and re-open it on the workspace I'm presently on.

Any chance real-time previews can be made to work on the workspace they are activated from, rather than only on the workspace where the process was first opened?

General / SOLVED - HELP! Can't save my work!
« on: 2016 September 26 12:56:11 »
I'm kind of panicked about this..... ugh.

After several hours of working on a project, I opened a NoOp process and set some notes in it, intending to leave a reminder of where I was in my processing. With the notes window open in a "committed" state, I selected Save Project. Now I can't do anything. The Save Project dialog is open, as is the notes window, but I can't interact with either of them. I can access all menus, but every option is greyed out. I can't change workspaces either, to check whether an open dialog is waiting for input.

So, I can't save or do anything with my work.

Any ideas how to recover? Is there a CLI means of accessing PI externally that will force it to save?

I'm running the latest PI version on OSX.

General / Process console log to file
« on: 2016 September 21 18:13:44 »
Does anyone know if it is possible to configure the process console such that it will tee whatever is written to it during processing to a log file?
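I don't know of a built-in option for this, but for what it's worth, the general "tee" pattern looks like this in plain Python (just an illustration of the idea, not a PixInsight API; the class and file names are mine):

```python
import sys

class Tee:
    """Duplicate everything written to a stream into a log file."""
    def __init__(self, stream, path):
        self.stream = stream
        self.log = open(path, "a")
    def write(self, text):
        self.stream.write(text)
        self.log.write(text)
    def flush(self):
        self.stream.flush()
        self.log.flush()

# From here on, prints go both to the console and to the log file.
sys.stdout = Tee(sys.stdout, "process_console.log")
print("this line goes to both the console and the log file")
```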

General / "Uncalibrate" master dark?
« on: 2016 September 19 18:04:32 »
Having realized that BPP does not seem to like calibrated master darks, I'm now wondering if it is possible to "uncalibrate" a master dark by simply adding the bias back in. I no longer have the source subs for the dark, which would be ideal. I'd try it out, but I'm in the middle of several projects and need to stay focused.

Anyone ever tried it? Mathematically it seems feasible.
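The arithmetic, for what it's worth (a sketch with hypothetical per-pixel ADU values; real frames would also need attention to rescaling and clipping):

```python
# A calibrated master dark is dark_raw - bias, so per pixel:
#     dark_raw = dark_cal + bias
dark_cal    = [120, 98, 110]   # hypothetical bias-subtracted master dark (ADU)
master_bias = [500, 500, 500]  # hypothetical master bias (ADU)

dark_uncal = [d + b for d, b in zip(dark_cal, master_bias)]
print(dark_uncal)  # [620, 598, 610]
```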

General / BPP master dark expectations (flats overcorrecting)
« on: 2016 September 19 09:44:57 »
I've found a few threads that loosely indicate what I think I've found. This thread is in part for ease of future discovery by others, and in part for my own knowledge. If others would please confirm what I'm seeing it would be greatly appreciated.

Regarding BPP: it seems to expect a non-calibrated master dark (i.e. one that has not had the bias subtracted).

I've been manually preprocessing my data for a while. I have a master library that I reference frequently. For the first time in a while I ran BPP. I fed it my master frames and lights and let it rip. The result was what I suppose you might call an overapplication of the master flat to the lights. That is, my master flat (which was created by BPP in the first place) grossly overcorrected the light frames. I ran through the process manually using the BPP-generated master flat: no overcorrection.

I was able to reproduce the overcorrection by checking the "calibrate" box for the master dark in ImageCalibration. I also found that BPP outputs the following, further confirming what I'm seeing:

Code: [Select]
Applying bias correction: master dark frame ...
Subtracting the bias twice from the master dark seems to be what pushes the flats to overcorrect.

So it seems that if you wish to use your master frame library for both BPP and manual preprocessing, you should at the very least not calibrate your master darks. I see no similar log output from BPP for master flats, so I'm not sure whether it does the same to those. Anyone know?
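A worked example of why the double subtraction hurts (hypothetical ADU values; the formula below is the usual signal-plus-dark-plus-bias model, not lifted from BPP's source):

```python
# With the master dark "calibrate" option on, calibration behaves like:
#     light_cal = light - bias - (master_dark - bias)
bias, dark_current, signal = 500, 120, 3000
light = signal + dark_current + bias   # a raw light pixel: 3620

dark_raw    = dark_current + bias      # uncalibrated master dark (what BPP expects)
dark_precal = dark_current             # master dark with bias already removed

good = light - bias - (dark_raw - bias)      # 3000: just the signal
bad  = light - bias - (dark_precal - bias)   # 3500: one bias left in the data

print(good, bad)  # 3000 3500
```

That leftover offset means the subsequent flat division no longer scales the true signal alone, which would show up as flats that appear to overcorrect.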

I noticed I get different results when I re-use DBE settings by dragging a previously saved icon onto the target image, compared to double-clicking the DBE icon and applying it to the same image. Is this expected behavior? Here are the two results. Does the DBE process need to be open and active to perform properly?

DBE process open and the green check box clicked to apply DBE to the image. Results are consistent with expectations.

DBE process icon dropped on clone of same image. Results are not acceptable.

Image Processing Challenges / HDRComposition odd behavior
« on: 2016 June 08 14:27:55 »
I've used HDRComposition in the past with great results. This result has me scratching my head. In short - I'm trying to recover star core levels.


I have a master lum stack of 26x600s exposures. Some of the star cores are blown out. I want to use a stack of 60s exposures w/ HDRComposition to fix the cores. Both images are aligned, etc, and are linear. When using the HDRComposition tool, I get a posterized result that looks like this:

Clicking on the 24-bit LUT button makes it look much more normal, but upon further inspection the resultant image just doesn't seem right.

Non-HDR stats:
Code: [Select]
count (%)   100.00000
count (px)  64800000
mean        268.152
median      259.901
avgDev      12.871
MAD         4.032
minimum     219.150
maximum     53755.856

HDR stats:
Code: [Select]
count (%)   100.00000
count (px)  64799998
mean        2.59504655351
median      1.95303148111
avgDev      0.86339209736
MAD         0.19324242504
minimum     0.21498097293
maximum     64242.46716072602

I find these results confusing.

I've tried various means of getting this to work - disable exposure evaluation, order the files differently, adjusting the binarizing thresholds, allowing black pixels, etc. Nothing produces a usable result. I actually found this out the hard way the first time I processed this data. I clicked the 24 bit LUT button, things looked OK, so I proceeded. When I tried to stretch, well, it wasn't pretty.

Source images:

   Master Lum:

   Stars Lum:
