Messages - Greg Schwimer

1
General / Re: PixInsight New Version Problem
« on: 2019 November 02 10:54:10 »
Tested in 1.8.8. Seems to solve the problems.

2
General / Re: PixInsight New Version Problem
« on: 2019 October 24 16:13:20 »
Juan,

Thanks again for digging in on this with us. I PMed you a link with data you can use to test the behavior out. I can also beta test 1.8.8 if you'd like. I run Linux over here, which I think you do as well, so if that's helpful I'm up to the task.

- Greg

3
General / Re: PixInsight New Version Problem
« on: 2019 October 23 14:47:33 »
Here's an example of three methods of BPP processing of data with PEDESTAL -100 set by the imaging software in all calibration and light frames. I ran these tests in 1.8.7 only. The light frames are of a different object in this case, but the source calibration data is the same.

 - from_subs - BPP in 1.8.7 using subs only, no masters - overcorrected
 - set_PEDESTAL_minus100_187 - took the master calibration frames, added the keyword PEDESTAL -100 to the FITS headers, and re-ran through BPP using them - undercorrected
 - from_186_masters - same as above, except the masters were created in 1.8.6 - corrected, with fewer hot pixels too

The same BPP parameters were used for all of these. The last one appears to be a correct result. I'm guessing this is because in 1.8.7 BPP is subtracting the pedestal from the calibration frames.
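In case it helps anyone reproduce these three cases, here's a rough sketch of how you could check which of your files actually carry the PEDESTAL keyword. I'm using Python with astropy purely for illustration (not what BPP itself uses), the glob pattern is a placeholder, and it assumes the files are FITS rather than XISF:

Code:
# Quick check of which calibration frames carry a PEDESTAL keyword.
# Assumes the files are FITS (astropy does not read XISF) and that
# astropy is installed; the glob pattern is a placeholder.
import glob
from astropy.io import fits

for path in sorted(glob.glob("masters/*.fit*")):
    header = fits.getheader(path)
    print(f"{path}: PEDESTAL = {header.get('PEDESTAL', 'missing')}")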

4
General / Re: PixInsight New Version Problem
« on: 2019 October 23 13:56:15 »
Ah - yes, I see that. So the behavior is as expected, which means something else must be at play. Also a factor (now that I think of it) is some workarounds I had to put in place to get the data to calibrate properly via BPP in 1.8.6. For thoroughness I'll share this post:

https://pixinsight.com/forum/index.php?topic=13887.msg83846#msg83846

I'll admit I'm not used to using pedestals so part of the problem may be related to that.

In the case of our data, all calibration and light frames have a PEDESTAL keyword with a value of -100. When running through BPP, I found that some dust artifacts were not being removed correctly (overcorrection) with the application of flats (via BPP). I managed to resolve the problem simply by taking the dark and bias masters, re-adding the PEDESTAL -100 value to the files, and using them to create master flats (with BPP). I then had to add the same keyword values to the flats. From there, the lights calibrated properly, or at least they appeared to come out with a clean result. I do, in fact, still get a clean result using these modified masters *if* I uncheck the Subtract Pedestal option in the Image Integration process. No luck with BPP as that option is not there yet.
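For what it's worth, the keyword edit itself is simple. Here's roughly what I mean by re-adding the PEDESTAL -100 value, sketched with Python and astropy. That's just one way to do it (any FITS header editor works), the file names are placeholders, and it assumes the masters were written as FITS rather than XISF:

Code:
# Re-add the PEDESTAL keyword to BPP-created masters (illustration only).
# File names are placeholders; assumes FITS output and an astropy install.
from astropy.io import fits

masters = ["master_bias.fits", "master_dark.fits", "master_flat.fits"]
for path in masters:
    with fits.open(path, mode="update") as hdul:
        hdul[0].header["PEDESTAL"] = (-100, "pedestal set by acquisition software")
    print(f"Set PEDESTAL = -100 in {path}")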

This is a set of data specific to Albert's original problem stated at the start of this thread.

<>

I put an expiration on this download as I need to be considerate of those on the team I'm working with for this project.

I just re-ran BPP with the data from above from scratch (no calibration masters, just subs) and it seems I still get the same initial result of overcorrection as with version 1.8.6. I then added the -100 pedestal keyword to the flat, bias, and dark masters and re-ran. I now have mild undercorrection. If I go back and use the masters I made in 1.8.6 using the process I mentioned above, everything is fine. This is possibly because BPP, via ImageIntegration, is subtracting the pedestal given by the keywords when creating the masters, whereas the masters I made in 1.8.6 did not have the pedestal subtracted and I added the keyword afterward.

EDIT: added additional test results.

5
General / Re: PixInsight New Version Problem
« on: 2019 October 22 14:54:47 »
Hi Juan,

Albert and I are working with the same data. I can reproduce this on Linux. It's related to the new Subtract Pedestals option in ImageIntegration and the default of Clip Low being enabled. The data in this case has a PEDESTAL keyword value of -100 (yes, negative). Is it possible that ImageIntegration is using the absolute value of 100 and subtracting it rather than adding (i.e., -(-100) means add 100)? I just changed the pedestal value in all of these files to 100 (not negative) and I get the same result, so I suspect this may be the case. The workaround for this data set is to disable Subtract Pedestals.
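To spell out the sign logic I'm suspecting (this is just my reading of the behavior, not anything confirmed from the source):

Code:
# The sign behavior I suspect, spelled out with one pixel value
# (illustration only; this is not PixInsight source code).
pedestal = -100   # PEDESTAL keyword value from the header, in DN
pixel = 1000      # an arbitrary raw pixel value, in DN

expected = pixel - pedestal        # 1000 - (-100) = 1100: subtracting a negative pedestal adds 100
suspected = pixel - abs(pedestal)  # 1000 - 100 = 900: subtracting the absolute value instead

print(expected, suspected)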

I believe the negative pedestal may be related to SBIG driver settings. I don't have the camera so I can't confirm. If this is the case it may cause problems for some users working with low signal data like narrowband, as is the case for us.

BPP does not have these knobs in the integration settings, so there is no workaround for this there.

Greg

6
Update installed and fix is confirmed. Thanks for the speedy response on this!

- Greg

7
Running 1.8.7 on both OSX and Linux (Mint). I'm seeing that the Statistics process misreads an image cropped with DynamicCrop. I can reproduce this across my two systems, but not on a third system running the previous build (1457) on OSX.

Steps to reproduce:

 - open an image - I'm using a dark sub
 - open Statistics, select the check at the lower right - things look OK
 - crop the image - I use DynamicCrop, reset the settings, set a width and height of 100x100, and apply with the check
 - Statistics shows the pixel count unchanged (it should now be 10000), and the pixel % changes to an unlikely number
 - none of the other statistics change as you might expect
 - open PixelMath, select "create new image", and use $T as the expression to copy the above cropped pixels to a new image
 - statistics for the new image appear to be correct

Closing and re-opening the Statistics process does not fix this. Saving the cropped original and re-opening it does.

Anyone else see this?

8
Further searching reveals this thread:

https://www.cloudynights.com/topic/562025-pixinsight-integration-and-pedestal/

So it appears this may be a known issue in cases where the PEDESTAL keyword is negative.

9
Kind of a gut check request here, and maybe a discussion about adding this feature to BPP.

When creating a master bias or dark with BPP from frames that all have the PEDESTAL keyword, the master frames are saved without carrying the PEDESTAL value forward into the created files. Using these masters results in them being misapplied against the light frames due to the missing PEDESTAL keywords. In my case this was causing overcorrection of the flats against the lights.

I solved the problem by adding the keywords to the BPP-created bias and dark masters.

Anyone else run into this?

Perhaps a future update of BPP could carry the PEDESTAL values forward from the subs and add them to the resulting masters?
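To illustrate the request, the carry-forward could be as simple as copying the keyword from one of the input subs into the output master. A rough sketch in Python with astropy, with placeholder file names and assuming FITS files; this is not how BPP is implemented, just the idea:

Code:
# Rough sketch of "carry the PEDESTAL forward": copy the keyword from one
# of the input subs into the master built from them. Placeholder file names;
# assumes FITS files and astropy. Not how BPP itself is implemented.
from astropy.io import fits

sub = "bias_001.fits"
master = "master_bias.fits"

pedestal = fits.getval(sub, "PEDESTAL")          # read the value from a sub
fits.setval(master, "PEDESTAL", value=pedestal)  # write it into the master
print(f"Copied PEDESTAL = {pedestal} into {master}")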

10
Image Processing Challenges / Re: Non Linear PostProcessing
« on: 2019 August 13 13:53:33 »
Hi Marc,

Results seem similar. I went ahead and ran through it again, this time pushing the color and contrast a bit more in a similar way to how I did it previously. It's possible this is just how your camera presents the data and that's not a bad thing. From there it's mostly art. Someone else may indeed have a different spin on it than me.

The data does look pretty good overall. One thing I noticed is some artifacts from satellites or something similar radiating from the upper left corner. That can be fixed during integration, or just by leaving out the subs that contribute the artifacts altogether.

I added a process container below showing how I went about it. You can unzip it and load it into PI and run the individual processes to see what I did. For the masks I used the color mask script and an L mask which I created by extracting the L channel. The ~ before the mask in the process container means the mask was inverted when I applied the process. Explaining just in case and for others that are learning.

Le Tour was interesting this year. Alaphilippe was so close...

- Greg

11
Image Processing Challenges / Re: Non Linear PostProcessing
« on: 2019 August 13 11:25:04 »
What I mean is that the channels all seem fairly equal, so the color is coming across as mostly greys. It's possible I got to this result because I ran PCC over the top of what you already did, but I don't think that's really it. Could be the nature of your camera, could just be that the region looks that way. Could be I would get a different result if I had the unprocessed master. Hard to say. If you want to put up a copy of that (from right after integration, with no other changes), I'm happy to take a peek.

When I adjusted it I was going for the typical "milky way" look of reddish brown. To get this I found the best result by using a red mask (color mask script) and reducing the greens and blues ever so slightly.

Believe it or not I haven't imaged that area so I can't say for sure what it would look like with my camera.

12
Image Processing Challenges / Re: Non Linear PostProcessing
« on: 2019 August 12 16:40:48 »
Hi Marc,

I took a quick look. Seems that you had already run photometric color calibration on the image, but possibly without background neutralization enabled. Step back and give it a try with that enabled. Without it, on my screen the initial screen stretch (linked) has a magenta cast. I ran PCC again with background neutralization and that helped. Next I ran SCNR with the defaults to clean up the green.

Taking it through an STF-based stretch and then pushing the saturation with curves, I don't see a lot going on in terms of color. I attempted to draw some color out using the Color Mask script (under Utilities) to create a red mask. I applied this mask, inverted it, and used CurvesTransformation to reduce the green and blue channels slightly, then worked the contrast a little. The caveat with this approach is that the stars are also going to have this applied, which you may not want. You can get around that by adjusting the mask to protect the stars, etc.

Results are below. The version on the left is post stretch with none of the curves work I mentioned.


13
Image Processing Challenges / Re: Western Veil Nebula Challenge
« on: 2019 August 06 21:36:27 »
I did a quick run through:

https://youtu.be/Vp9mwTPrlCs

Short version:

  - needs better focus
  - dithering would help a lot
  - get better at calibration to remove artifacts like hot pixels
  - don't oversaturate anything if you can help it, or build an HDR strategy

- Greg

14
Excellent! I'm glad we figured that out.

15
Quote
When I took my darks and bias frames I did not use the Ha filter because I didn't think it would matter since it was a "dark".

This is true.

Quote
However, I just hooked my computer up to my rig and this time set the Ha filter and took some bias and dark frames.  The median value came down to 200 from the 750 range which puts it lower than the light frames.

Verify all of the settings are the same as before the filter change. Also, if your camera does not have a physical shutter I'm guessing this could impact it. Not sure though.

Quote
My offset may have been set differently by mistake. It may have been at 15 for my lights and then 50 for my original darks and bias. It should have been at 50 for my lights. Does the offset have to be the same for darks, bias and lights?

I'm not sure. I don't know the details of how your camera works, but I think you may be on to solving the problem. For calibration frames to do the best job, they must accurately match the light frames. That is, all camera settings (binning, exposure, temperature, etc.) should be identical. This is the same as with a DSLR - if you change ISO, you need new calibration frames. An allowable difference may be with darks, which can be scaled from different exposures, but that doesn't always work as well as the "real thing".

