
Messages - Juan Conejero

Here is the thread about thermal dissipation issues on 2018 MacBook laptops. Also another similar report.

Is this problem repeatable, or is it sporadic?

General / Re: ImageIntegration Reference Frame Anomaly
« on: 2019 July 11 07:20:02 »
Hi John,

I think we have identified the bug that is causing this anomaly. It can only happen when the integrated image has very high pixel values, 1.0 or very close. In practice, this only happens when the data set contains saturated pixels at the same location in all of the frames. Such pixels cannot be rejected. When this happens and the reference image leads to a scaling factor greater than one, the output image is rescaled automatically. For integration of lights, this rescaling operation is the correct choice to preserve all of the integrated data, but it is not for flats, since any rescaling operation invalidates the illumination profile.

We haven't seen this problem before because ImageIntegration is intended to be used with well calibrated data, including calibrated flat frames, where there should be no saturated hot pixels. The solution to prevent this issue is to replace the rescaling operation with a truncation to the [0,1] range only for integration of flat frames. This will be implemented as an additional parameter of the ImageIntegration process. In addition, the tool will issue a warning message when the output image is rescaled or truncated, so you can know exactly when this happens.
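The difference between the two operations can be sketched in plain JavaScript (not PJSR; the array and pixel values are hypothetical, standing in for image data, and rescaling is assumed to be the usual affine map of [min, max] onto [0, 1]):

```javascript
// Sketch: why rescaling a master flat is harmful while truncation is not.
// 'pixels' stands in for an integrated flat where one saturated pixel
// pushed the scaled output slightly above 1.0.

function rescaleToUnit(pixels) {
  // Affine rescale of the whole image into [0,1]: subtracting the minimum
  // destroys the ratios between pixels, i.e. the illumination profile.
  const min = Math.min(...pixels), max = Math.max(...pixels);
  return pixels.map(v => (v - min) / (max - min));
}

function truncateToUnit(pixels) {
  // Truncate to [0,1]: only the out-of-range (saturated) pixels change;
  // the ratios between all valid pixels are preserved.
  return pixels.map(v => Math.min(1, Math.max(0, v)));
}

const pixels = [0.40, 0.50, 0.45, 1.05]; // last pixel saturated in all frames

console.log(rescaleToUnit(pixels));  // valid pixel ratios are altered
console.log(truncateToUnit(pixels)); // valid pixels untouched; 1.05 -> 1.0
```

For flat fielding only the relative illumination matters, which is why truncation is the safe choice here.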

Thank you for discovering this problem. I still have to make more tests to implement these modifications. Once completed, I'll release an update with a new version of the ImageIntegration module.

So for now, there is no need to upload new data if you prefer. I'll let you know if I need them.

General / Re: ImageIntegration Reference Frame Anomaly
« on: 2019 July 11 02:54:13 »

Can you please upload the master bias and dark frames for these flats, or even better, the individual bias and dark frames, so we can generate the masters? We think the problem is the additive components in the uncalibrated flat frames. With non-negligible additive components in the individual flat frames, generation of a valid master flat is impossible.
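A toy numeric illustration of the additive-component problem (all numbers hypothetical, not taken from the uploaded data):

```javascript
// A flat frame pixel is approximately gain * illumination + additive,
// where 'additive' is the bias/dark signal. Flat-field correction needs
// the *ratio* of gains between pixels, which only survives if the
// additive term has been subtracted first.

const additive = 100;           // bias + dark counts (same for both pixels)
const illumination = 10000;     // uniform light source
const gainA = 1.0, gainB = 0.5; // pixel B is half as sensitive

const rawA = gainA * illumination + additive; // 10100
const rawB = gainB * illumination + additive; //  5100

// Uncalibrated flat: gain ratio is wrong (should be exactly 2.0)
console.log(rawA / rawB);
// Calibrated flat: additive term removed, true gain ratio recovered
console.log((rawA - additive) / (rawB - additive));
```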

General / Re: ImageIntegration Reference Frame Anomaly
« on: 2019 July 11 02:09:30 »

Yes, it seems we have a problem here, and it is very strange. I haven't seen anything like this under normal conditions before.

Although both integrated results (with 0001 and 0003 as integration references) are numerically valid models of the same illumination distribution (to prove this, enable the Rescale result parameter when you apply your PixelMath expression, to prevent truncation of values > 1), the result with 0001 as reference does not work for image calibration. Give me some time to analyze what is happening here. Either there is something anomalous in the data, or we have a bug. In the latter case, it would be of the highest priority. At any rate, thank you so much for alerting us about this.

General / Re: ImageIntegration Reference Frame Anomaly
« on: 2019 July 10 04:49:21 »
Hi John,

Thank you for uploading the images. As expected, there is no problem at all, neither with the flat frames (other than the fact that they are uncalibrated frames, but that's a different matter), nor with the integrated results, irrespective of the frame selected as reference for integration.

Multiplicative normalization is extremely sensitive to small differences among reference images, so you can also expect relatively large differences in the corresponding integrated images. However, all of them are equally valid and there is no truncation of pixel values.

To demonstrate this, let's count the number of pixels with a value of zero in the integrated result using different reference frames. This can be done very easily with the PixelMath tool. Define a PixelMath instance with the following parameters (the rest with default values):

- RGB/K Expression:

      n += $T <= 1.19e-07

- Symbols:

      n = global(+)

- Generate output (Destination section): disabled.

To perform a more rigorous test, we'll count the number of pixels with values less than or equal to the machine epsilon for 32-bit floating point (1.19×10⁻⁷), instead of zero. After executing the PixelMath instance on each image, the results are written to the console:

Code:
PixelMath: Processing view: integration_with_reference_0003
Executing PixelMath expression: combined RGB/K channels: n += $T <= 1.19e-07: done

* Global variables:
n(+) = { 0, 0, 0 }

35.835 ms

PixelMath: Processing view: integration_with_reference_0001
Executing PixelMath expression: combined RGB/K channels: n += $T <= 1.19e-07: done

* Global variables:
n(+) = { 1, 1, 1 }

42.657 ms

As you can see, the image integrated with the first frame as reference has a single pixel with an insignificant value. This has no statistical significance, and is irrelevant for all practical purposes.
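The threshold used above is the IEEE-754 single-precision machine epsilon, which can be derived directly (plain JavaScript; `Math.fround` rounds a double to the nearest 32-bit float):

```javascript
// The machine epsilon for IEEE-754 single precision (32-bit float) is
// 2^-23, the spacing between 1.0 and the next representable value:
const eps32 = Math.pow(2, -23);
console.log(eps32); // 1.1920928955078125e-7, i.e. ~1.19e-07

// Math.fround rounds to single precision; adding much less than eps32
// to 1.0 is lost entirely in 32-bit arithmetic:
console.log(Math.fround(1 + eps32) > 1);       // true
console.log(Math.fround(1 + eps32 / 4) === 1); // true
```

In other words, values at or below this threshold are indistinguishable from zero at the working 32-bit precision.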

Let's perform a second test. Define the following PixelMath instance (other parameters with default values):

- RGB/K Expression:


- Rescale result (Destination section): enabled.

This instance will normalize the master flat frames, basically in the same way ImageCalibration does before applying a master flat to calibrate light frames. After applying the instance, these are the normalized/unclipped statistics:

count (%)   100.00000
count (px)  11694368
mean        1.034921e-01
median      1.044943e-01
avgDev      2.986219e-02
MAD         2.971942e-02
minimum     0.000000e+00
maximum     1.000000e+00

count (%)   100.00000
count (px)  11694368
mean        1.034922e-01
median      1.044944e-01
avgDev      2.986219e-02
MAD         2.971948e-02
minimum     0.000000e+00
maximum     1.000000e+00

The differences are insignificant (beyond 32-bit floating point resolution). You can also compare the histograms for both integrated frames to verify that they are indeed identical.

You can be sure that, irrespective of the reference frame selected for integration, the resulting master flat will work perfectly for image calibration. If the flat frames are valid, that is.
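The reason the reference choice is irrelevant can be sketched in plain JavaScript (a small array stands in for the image, and the median stands in for ImageCalibration's actual central-value estimator; this mirrors the idea, not the real code):

```javascript
// Conceptual flat fielding: the master flat is divided by a central value
// (here its median) before use, so any global multiplicative factor
// introduced by the integration reference cancels out.

function median(a) {
  const s = [...a].sort((x, y) => x - y);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : 0.5 * (s[m - 1] + s[m]);
}

function applyFlat(light, flat) {
  const m = median(flat);
  return light.map((v, i) => v / (flat[i] / m));
}

// Two master flats differing only by a global scale factor...
const flatRef0003 = [0.98, 1.00, 1.02, 0.99];
const flatRef0001 = flatRef0003.map(v => v * 0.91); // other reference frame

const light = [100, 200, 300, 400];
// ...produce the same calibrated light frame:
console.log(applyFlat(light, flatRef0003));
console.log(applyFlat(light, flatRef0001));
```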

General / Re: no preprocessing...
« on: 2019 July 10 03:42:36 »
We cannot reproduce this problem. Can you please upload a data set where this error can be reproduced, so we can analyze it? Thank you in advance.

Hi Jack,

The FITS format is deprecated in PixInsight. We continue supporting it exclusively for compatibility with existing data and to share data with other applications. You should always use XISF, which is the only format able to provide all of the metadata and features required to work with PixInsight under optimal conditions.

Hi Greg,

This is, with high probability, a machine-specific issue with your Windows operating system, not with PixInsight. To help you, we need to know exactly what 'module' could not be found, or a more specific error message.

We cannot help if you don't upload a minimal data set where the problem can be reproduced.

PCL and PJSR Development / Re: Is StarAlignment Opensource?
« on: 2019 July 09 09:49:28 »
Hi Don,

Interesting project. I assume this is a PixInsight module. Can you please elaborate a bit more about this? Will this tool (or toolset) be publicly available?

StarAlignment has not been released as an open-source product (for several strategic reasons, most recently because we are working on a completely new distortion modeling algorithm, which I don't want to publicize for now). However, if you are writing a public module, and especially if you are going to release it as an open-source product, we can assist you by sharing code from SA.

As you probably know, we are also working on an integration of the INDIGO platform with PixInsight. A real-time integration tool would work nicely with the new image acquisition capabilities that we'll implement.

General / Re: ImageIntegration Reference Frame Anomaly
« on: 2019 July 09 05:38:59 »
Hi John,

What you are observing here is the result of output normalization (multiplicative in this case). If you disable normalization, you should get identical results irrespective of the reference frame you choose. For master flat generation, output values after integration are irrelevant because the master flat is always normalized for calibration.
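A minimal sketch of multiplicative output normalization in plain JavaScript (scale factors derived from medians for simplicity; the actual ImageIntegration estimators are more robust, and all numbers are illustrative):

```javascript
// Multiplicative output normalization: the output is scaled so that its
// central value matches the reference frame's. The choice of reference
// therefore fixes the overall output scale, but not the relative pixel
// structure, which is all that matters for a master flat.

function median(a) {
  const s = [...a].sort((x, y) => x - y);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : 0.5 * (s[m - 1] + s[m]);
}

function normalizeTo(reference, frame) {
  const k = median(reference) / median(frame);
  return frame.map(v => v * k);
}

const frame = [0.50, 0.55, 0.45];
const refA = [1.00, 1.10, 0.90]; // bright reference
const refB = [0.25, 0.30, 0.20]; // dim reference

const outA = normalizeTo(refA, frame); // scaled up by 2
const outB = normalizeTo(refB, frame); // scaled down by 0.5

// Different absolute values, identical structure after renormalization:
console.log(outA.map(v => v / median(outA)));
console.log(outB.map(v => v / median(outB)));
```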

However, the numbers you are showing here are indeed weird. The minimum pixel value should not be zero under normal conditions. So something strange happens with these flat frames. Can you please upload the entire set (dropbox, etc)?

General / Re: Processing LRGB and NIR
« on: 2019 July 09 05:22:37 »
Vicent and I wrote a tutorial where we describe how we generated RGB images with Alhambra Survey data. We combined data acquired with 23 filters from 3500 Å to 9700 Å and the standard JHK NIR bands. We defined synthetic RGB filter sets by assigning different weights to each original filter band. This is a completely different way to approach the problem, but I think you may find it interesting and hopefully applicable to your data.

Hi Mike,

None of these problems are reproducible on any of our machines, on any supported platform (FreeBSD, Linux, macOS, Windows 10). Nothing essential has changed in the core application between versions (January 23) and (May 14). Both versions are bugfix/update releases for the current 1.8.6 version of PixInsight, and I cannot figure out any reason for such a drastic difference between them.

The version downloaded for the trial worked fine. I uninstalled from Windows control panel, installed the new version, and the screen came up OK, but everything inside the workspace was not "calibrated" for the mouse. If I loaded process icons to the desktop, I had to click underneath them (not on them) to select them. The main menu was not present, but if I clicked below where it was supposed to be, it activated.

AFAIK, these weird problems with the mouse have not been reported before. Needless to say, they are very strange and cannot be reproduced under normal working conditions.

As a last ditch effort, I decided to open PixInsight by double clicking on an XISF file on my hard drive. Everything opens up fine. If I load my process icons, they behave properly, and the main menu is present. If I close PixInsight and open by double clicking the shortcut on my desktop, the white screen syndrome returns.

This behavior seems to indicate it is not a computer configuration problem, but maybe an initialization problem when the software starts up. On my system, this is very repeatable behavior.

Nothing is different when you launch the PixInsight core application with or without loading an image at startup, other than the obvious fact that a file is scheduled for loading and then loaded when the application enters its main event loop. I can show you basically the entire source code that runs in both cases to demonstrate what I am saying here.

I understand your frustration with these issues, but please understand that I cannot fix something that I cannot reproduce on our working and testing machines (which include 5 different installations of Windows 10 as of writing this; we are currently restructuring our development infrastructure).

My advice is a complete reinstall of your operating system: try to configure a clean system with a minimal software configuration, the latest available hardware drivers, proven antivirus software, and no bloatware, using a Windows installation image downloaded directly from Microsoft servers. This is what we do on our Windows machines, and they work fine (considering it's Windows, after all).

General / Re: Removing Noise from an Image
« on: 2019 July 04 02:10:25 »
As a general rule, comparisons of unscaled noise estimates are meaningless. To compare noise standard deviations, you must scale them with statistical scale (or dispersion) estimates. In this way you'll be comparing compatible statistical descriptors.
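For instance, here is a plain-JavaScript sketch of the idea (MAD as the scale estimator rather than the Sn estimator used by the actual script; all numbers are illustrative):

```javascript
// Scaled noise comparison: an image and a 2x-stretched copy of it have
// different raw noise sigmas, but identical sigma/scale ratios, so only
// the scaled estimates are comparable across images.

function median(a) {
  const s = [...a].sort((x, y) => x - y);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : 0.5 * (s[m - 1] + s[m]);
}

function mad(a) {
  const m = median(a);
  return median(a.map(v => Math.abs(v - m)));
}

const image = [0.10, 0.12, 0.11, 0.13, 0.09, 0.14, 0.10, 0.12];
const stretched = image.map(v => 2 * v); // same data, doubled intensity

// Hypothetical raw noise estimates for each version:
const sigma = 0.01, sigmaStretched = 0.02;

// Raw sigmas differ by 2x, but the scaled estimates agree:
console.log(sigma / mad(image));
console.log(sigmaStretched / mad(stretched));
```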

You can scale the noise estimates easily using console commands in PixInsight. However, to simplify these operations, here is a modified version of the NoiseEvaluation script (which we call ScaledNoiseEvaluation) that you should use to compare noise estimates calculated for different images:

Code:
/*
 * Estimation of the standard deviation of the noise, assuming a Gaussian
 * noise distribution.
 * - Use MRS noise evaluation when the algorithm converges for 4 >= J >= 2
 * - Use k-sigma noise evaluation when either MRS doesn't converge or the
 *   length of the noise pixels set is below a 1% of the image area.
 * - Automatically iterate to find the highest layer where noise can be
 *   successfully evaluated, in the [1,3] range.
 * Returned noise estimates are scaled by the Sn robust scale estimator of
 * Rousseeuw and Croux.
 */
function ScaledNoiseEvaluation( image )
{
   let scale = image.Sn();
   if ( 1 + scale == 1 )
      throw Error( "Zero or insignificant data." );

   let a, n = 4, m = 0.01*image.selectedRect.area;
   for ( ;; )
   {
      a = image.noiseMRS( n );
      if ( a[1] >= m )
         break;
      if ( --n == 1 )
      {
         console.writeln( "<end><cbr>** Warning: No convergence in MRS noise evaluation routine - using k-sigma noise estimate." );
         a = image.noiseKSigma();
         break;
      }
   }

   this.sigma = a[0]/scale; // estimated scaled stddev of Gaussian noise
   this.count = a[1]; // number of pixels in the noisy pixels set
   this.layers = n;   // number of layers used for noise evaluation
}

function main()
{
   let window = ImageWindow.activeWindow;
   if ( window.isNull )
      throw new Error( "No active image" );

   console.writeln( "<end><cbr><br><b>" + window.currentView.fullId + "</b>" );
   console.writeln( "Calculating scaled noise standard deviation..." );

   console.abortEnabled = true;

   let image = window.currentView.image;
   console.writeln( "<end><cbr><br>Ch |   noise   |  count(%) | layers |" );
   console.writeln(               "---+-----------+-----------+--------+" );
   for ( let c = 0; c < image.numberOfChannels; ++c )
   {
      image.selectedChannel = c;
      let E = new ScaledNoiseEvaluation( image );
      console.writeln( format( "%2d | <b>%.3e</b> |  %6.2f   |    %d   |",
                               c, E.sigma, 100*E.count/image.selectedRect.area, E.layers ) );
   }
   console.writeln(               "---+-----------+-----------+--------+" );
}

main();


Bug Reports / Re: SubframeSelector graphs stuck on old data
« on: 2019 July 04 01:14:50 »
We cannot reproduce this issue on any supported platform. Can you provide a minimal project where this can be reproduced?
