Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dld

Pages: [1] 2 3 ... 9
1
The raw file has an extra row (5202 x 3465) while your dark, flat and bias are 5202 x 3464. With the extra row, ImageCalibration fails with an "Incompatible image geometry" error. That's why I cropped the first row from your raw CR2 file, saved it as XISF, and proceeded to calibrate the cropped light frame using the provided dark, flat and bias.
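The geometry fix above can be sketched in a few lines of NumPy (the array shapes mirror the frame dimensions mentioned; the variable names are hypothetical):

```python
import numpy as np

# Hypothetical illustration: a light frame with one extra row (5202 x 3465)
# versus calibration masters at 5202 x 3464. Cropping the first row makes
# the geometries match so calibration can proceed.
light = np.zeros((3465, 5202), dtype=np.uint16)        # rows x columns
master_dark = np.zeros((3464, 5202), dtype=np.uint16)

light_cropped = light[1:, :]                           # drop the first row
assert light_cropped.shape == master_dark.shape        # geometries now match
```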

Edit:
I'm on an older version of PI (1.08.05.1353), which uses dcraw for handling DSLR data.

2
Only one row is missing. I cropped the top row in order to calibrate your light frame. I manually calibrated your (cropped) light frame with none of the tick boxes checked (no "calibrate" and no "optimize" in the Master Bias, Master Dark and Master Flat sections of the ImageCalibration process), and the resulting image seems flat. You have probably ticked one of these boxes accidentally, hence the over-correction. I debayered the calibrated light frame using the GBRG Bayer pattern (a consequence of the cropping). See the attached image (heavily compressed to comply with forum restrictions).
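Why the crop changes the Bayer pattern can be shown with a tiny color-filter map (illustrative sketch; letters stand for the filter colors of an assumed RGGB sensor):

```python
import numpy as np

# Sketch: why dropping the top row turns an RGGB mosaic into GBRG.
# Build a tiny 4x4 RGGB color-filter map (letters stand for filter colors).
rggb = np.array([["R", "G", "R", "G"],
                 ["G", "B", "G", "B"],
                 ["R", "G", "R", "G"],
                 ["G", "B", "G", "B"]])

cropped = rggb[1:, :]          # remove the first row, as done above
# The 2x2 tile at the new origin now reads G B / R G, i.e. GBRG.
print("".join(cropped[:2, :2].flatten()))   # -> GBRG
```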

In summary:
  • The latest version of PI handles camera raw files properly, so no messing with cropping and compensating with a different Bayer matrix setting is needed.
  • I am sorry to say it, but proper acquisition of calibration data is a must. Everything else is a workaround, to be used only when access to useful data is impossible.

Cheers and clear skies!

3
After a quick look at your files, here are my observations:

Your dark is zero-clipped: it contains a lot of zero values, probably because you pre-calibrated your darks. Please read Bernd's guide: https://pixinsight.com/forum/index.php?topic=11968 and also take a look at Vincent's tutorial: https://www.pixinsight.com/tutorials/master-frames/.
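A quick way to spot such a dark is to check the fraction of exact zeros (an assumed diagnostic sketch, not a PI tool; the simulated frame below just mimics a bias-subtracted, low-end-clipped master):

```python
import numpy as np

# A healthy raw master dark should contain essentially no exact zeros;
# a large zero fraction suggests it was pre-calibrated and clipped low.
def zero_fraction(frame):
    """Fraction of pixels that are exactly zero."""
    return float(np.count_nonzero(frame == 0)) / frame.size

# Simulated zero-clipped dark: bias already subtracted, negatives clipped.
rng = np.random.default_rng(0)
dark = rng.normal(loc=0.0, scale=5.0, size=(100, 100))
clipped = np.clip(dark, 0.0, None)

print(f"zero fraction: {zero_fraction(clipped):.2f}")  # roughly 0.5 here
```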
 
Why are you cropping your frames (by 1 row of pixels)? This scares me a lot :surprised:

4
You are welcome, Steve,

The Convolution process can be saved as an icon, like all processes, by dragging the "triangle" of the process window. If you get the warning that "the current filter library is not writable", just save the filter library to a different .filter file in a convenient place, not where PI is installed.

5
That's true. It is a blur filter. It can also be implemented as a Convolution filter:

Code:
KernelFilter {
   name { Blur (3) }
   coefficients {
       0.000000   1.000000   0.000000
       1.000000   4.000000   1.000000
       0.000000   1.000000   0.000000
   }
}

This is faster, probably because the Convolution process uses FFT. Rob's PixelMath expression is equivalent, modulo edge (boundary) effects.
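For reference, the same Blur (3) kernel can be applied with a direct convolution in pure NumPy (an illustrative sketch: the coefficients sum to 8, so they are normalized to preserve mean brightness, and edge replication stands in for one possible boundary mode):

```python
import numpy as np

kernel = np.array([[0.0, 1.0, 0.0],
                   [1.0, 4.0, 1.0],
                   [0.0, 1.0, 0.0]]) / 8.0   # coefficients sum to 8

def blur3(image):
    """Direct 3x3 convolution with edge replication (pure NumPy sketch)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    # Sum the 9 shifted copies weighted by the (symmetric) kernel.
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1]]
    return out
```

A different choice of padding mode here is exactly the kind of boundary effect mentioned above.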

6
Thank you Edoardo, a long time ago, I had the same idea, but you have implemented it and created a tutorial about it! Kudos!

7
General / Re: Price
« on: 2019 December 28 11:21:58 »
It is absurd that people accept paying the price of PI for a counterweight set (probably built by fairies, who knows >:D) while complaining about the price of PI, ignoring the level of engineering involved in writing scientific software like PI.

8
General / Re: Weird Dark Halos in Mono Stacked image
« on: 2019 December 23 00:52:57 »
Without specific details and/or an image, it's hard to guess what's going on. How do the bright stars look if you integrate without using a pixel rejection algorithm (Pixel Rejection (1) > No Rejection)? Have you tried a higher Sigma high value?
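As a toy illustration of what raising Sigma high does (an assumed simplification; ImageIntegration's actual rejection iterates and normalizes per pixel), consider one pixel stack:

```python
import numpy as np

# Samples farther than sigma_high standard deviations above the median
# are rejected. Raising sigma_high rejects fewer pixels, which can cure
# dark halos caused by over-rejection around bright stars.
def sigma_high_clip(stack, sigma_high=3.0):
    med = np.median(stack)
    sigma = np.std(stack)
    keep = stack <= med + sigma_high * sigma
    return stack[keep]

stack = np.array([10.0, 11.0, 10.5, 9.8, 10.2, 60.0])  # one outlier
print(sigma_high_clip(stack, sigma_high=2.0))
```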

9
Bug Reports / Re: PixInsight 1.8.8-3 Released
« on: 2019 December 15 03:14:45 »
Yeah, I have this same problem. Real shame as so will 417 million other Windows 7 potential users.
In less than a month Microsoft will stop providing security updates and support for Windows 7, rendering it a potential security risk for all of its users. So the problem here is Microsoft. Their next OS is Windows 8.1, which is worse than 10 and will also be decommissioned three years from now. If you were a developer, would you invest your time and effort in supporting a dying OS?

I am also reading that people on an unsupported OS are trying to find a missing DLL. This is a great security risk. What if ransomware infects your system? How many man-hours of data collection will be lost then? I am not a Windows fanboy, but considering the security threats, the next sane step is upgrading your system or switching to Linux.

10
General / Re: BPP or do it manually ?
« on: 2019 December 03 10:18:35 »
Image integration - Pixel Rejection settings? I could use the default settings suggested in the tutorial, but how can I tell if these particular settings end up actually introducing more noise into the final integration, or losing valuable data, without my even realizing it?

Hello Paul,

While I was typing this, Rob gave some good directions. For a longer answer, take a look at the corresponding presentation by Jordi Gallego.

While I don't consider myself a great astrophotographer, the best general advice I can give is: assess your sources (how credible is the material you are referring to?), study, and try to understand the math. Take it step by step and experiment. Pick a single light frame and see if calibration has removed most of the hot pixels, or if it has corrected for dust and vignetting. Avoid complicated workflows involving deconvolution or local normalization. And this is only half of the journey: astrophotography comprises a highly technical part and an aesthetic part, which I believe is even more difficult to conquer.

11
General / Re: BPP or do it manually ?
« on: 2019 December 02 02:16:58 »
I pre-process and integrate manually for one reason only: to assess human errors during the acquisition of calibration and image data, to learn from them, and to try to correct them next time. A tedious process, but the skills and the image-quality improvements I have experienced fully compensate for the burden.

Automation is useful when most factors are under control: tracking, focusing, proper acquisition of flats, proper acquisition of darks/flat darks/bias, and their correct usage during data reduction. In short, it is useful when you know your equipment and know how to reduce your data.

Unfortunately, I suspect that most people settle for automated tools and let human errors or other problems haunt their integrated images forever, without noticing. For me, PI is more of a debugger that allows me to correct my mistakes during acquisition.

12
General / Re: Adding additional frames to an integrated image
« on: 2019 November 06 02:07:01 »
Or just let ImageIntegration, with no rejection, integrate your images with the proper SNR-based weights. Note that since you have only two integrated and registered images, enter each image twice, because ImageIntegration requires at least three source images.
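The idea can be sketched as an inverse-variance weighted average (an assumed weighting model in the spirit of ImageIntegration's SNR-based weights, not its exact formula; all names and numbers are illustrative):

```python
import numpy as np

# Combine registered integrations with inverse-variance weights.
# Duplicating each input only satisfies the three-image minimum;
# it does not change the weighted mean.
def combine(images, noise_estimates):
    w = 1.0 / np.asarray(noise_estimates) ** 2
    w /= w.sum()                                  # normalize weights
    return sum(wi * img for wi, img in zip(w, images))

a = np.full((2, 2), 10.0)   # e.g. a deeper stack, lower noise
b = np.full((2, 2), 12.0)   # e.g. a shallower stack, higher noise
result = combine([a, b], noise_estimates=[1.0, 2.0])
print(result)               # pixels pulled toward the lower-noise image
```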

13
Try aligning the channels before stretching. If you still have problems with the saturated stars and the latest StarAlignment process, please have a look here: https://pixinsight.com/forum/index.php?topic=14056

14
Hello,

First of all, what version of PI are you using? Are you trying to align stretched images? The norm is to use StarAlignment with linear images. Also, I have noticed that you have some stars with flat profiles (saturated stars), and this may be another source of issues.

HTH,
dld

15
General / Re: What does RGB working space actually do?
« on: 2019 October 25 11:17:08 »
Thank you all for your answers; it seems that I have to study more, but...

As far as I can understand, a properly integrated, background-corrected, and color-calibrated (with PCC) linear image, obtained from properly reduced data, should be observer-independent. Assume we want to perform a deconvolution operation on the luminance component of the image. Shouldn't the luminance component also be observer-independent, i.e., obtained without making any assumptions about human vision and color perception?

How do the RGBWorkingSpace coefficients enter such operations during the linear stage? If the default coefficients are obtained from a standardized color-perception model, we break observer independence (because here the "average" observer enters). Is this desirable? Or is it unavoidable by the very definition of the term luminance, so that the only way to standardize is to use the default/standardized RGBWorkingSpace settings?

If the above aren't obviously wrong, I'll dare to say that Juan probably has similar philosophical questions :P
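To make the question concrete, here is a minimal sketch of how such coefficients enter a luminance computation (the weights are the standard sRGB / Rec. 709 values, used purely for illustration; a flat 1:1:1 space drops the observer-dependent assumption):

```python
# Luminance as a normalized weighted sum of the RGB channels.
# Default weights: sRGB / Rec. 709 coefficients for a standard observer.
def luminance(r, g, b, weights=(0.2126, 0.7152, 0.0722)):
    wr, wg, wb = weights
    total = wr + wg + wb
    return (wr * r + wg * g + wb * b) / total

pixel = (0.8, 0.5, 0.2)
print(luminance(*pixel))                        # observer-weighted
print(luminance(*pixel, weights=(1, 1, 1)))     # observer-independent mean
```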

Thank you again, especially Juan, given his limited time available!
dld
