
Messages - jkmorse

Pages: 1 ... 3 4 [5] 6 7 ... 61
Gallery / Tadpoles in Space!!
« on: 2016 December 05 06:15:04 »
First NB test image with my new rig:

Note that what I quote is from my workbook, but was written by Juan in an older post.  He gets the credit for the technique.



The best way to create a synlum is with the ImageIntegration tool. Here is the protocol I have in my workbook (freely available; I share it with about 200 people around the world, so just drop me a line if you are interested).

ii.   Create Synthetic Lum:

In general, the unweighted average of individual RGB components does not lead to an optimal result in signal-to-noise ratio terms. An optimal synthetic luminance must assign different weights to each color component, based on scaled noise estimates.

This process can be done very easily with the ImageIntegration tool. Open ImageIntegration and select the RGB files, using one of the RGB channels as the reference for integration.
Then leave all tool parameters at their defaults (you can click the Reset button to make sure) and click the Apply Global button. The relevant parameters are as follows:

- Combination = Average
- Normalization = additive with scaling
- Weights = Noise evaluation
- Scale estimator = iterative k-sigma
- Generate integrated image = enabled
- Evaluate noise = enabled
- Pixel rejection = No rejection
- Clip low range = disabled
- Clip high range = disabled

You can make several tests with different scale estimators and select the one that yields the highest noise reduction. The integration result is the optimal luminance image that you can treat in the usual way (deconvolve if appropriate, stretch to match the implicit RGB luminance, combine with RGB using LRGBCombination, etc).
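For the curious, the weighting idea can be sketched in a few lines of Python with NumPy. This is my own illustration of noise-weighted averaging, not PixInsight's actual implementation; the MAD-based estimator here simply stands in for the scaled noise estimates that ImageIntegration computes internally.

```python
import numpy as np

def noise_sigma(channel: np.ndarray) -> float:
    """Rough robust noise estimate: scaled median absolute deviation."""
    med = np.median(channel)
    return 1.4826 * np.median(np.abs(channel - med))

def synthetic_luminance(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Average the RGB channels with inverse-variance weights,
    so the noisiest channel contributes the least to the result."""
    channels = [np.asarray(c, dtype=np.float64) for c in (r, g, b)]
    weights = np.array([1.0 / noise_sigma(c) ** 2 for c in channels])
    weights /= weights.sum()  # normalize so the weights sum to 1
    return sum(w * c for w, c in zip(weights, channels))
```

With a clean red channel and noisy green/blue channels, the weighted result comes out measurably less noisy than a plain unweighted average, which is exactly the point of the protocol above.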

Hope that helps,



Funny you should say that about TSX.  I had to go in and create a whole set of cheat sheets for use in the field just to keep track of the nuggets buried in TSX's 600+ page manual.  Tons of information, but it takes several readings to find it all and put it to use.



I also want to address one more point that is at the heart of this and at least one other thread.  It is misleading to say that the vast majority of processes have no documentation.  That is simply not true.  I went back and did some checking last night, looking at a couple of dozen processes, including all of the multiscale ones.  I will concede that if you click the little documentation button, there is often nothing there.  But that ignores the fact that in each of those tools, virtually every item you can manipulate, whether a slider, button, or otherwise, is annotated, sometimes heavily so.  All you have to do is hover your mouse over an input item's label and you get a wealth of detail on what that input does, often with recommendations on how to use it.  It's a disservice to PI to suggest they haven't made the effort to make the tools accessible.



Let me jump in here again and try to explain why I don't see any of this as the huge issue you do.  The print function is a prime example of what I love about PixInsight.  It is a work in progress, and while it is rough at the moment, given Juan's track record, when he refines it, it will work and work extremely well.  The wavelet-based processes are a perfect example.  We started a ways back with things like the ATrousWaveletTransform process and others of a similar vintage, which have now been supplanted by MLT, MMT, TGVDenoise, etc., all markedly improved from the original iterations.  Juan provides those kinds of stepwise improvements on a regular basis.  And all of that comes with ONE, and I repeat, ONE license payment for the life of your use (though I personally believe Juan should be asking more from long-time users).  And he does not have a huge team.  This is basically one man's labor of love.

Compare that to what you pay and what you get, documentation-wise, from the giants, the likes of Adobe.  You now have to pay an arm and a leg up front with no update rights, or sign up for life with annual payments.  And none of those products are really usable without spending extra on third-party guides, missing manuals, etc.  I have past experience with PS and much more recent experience with Dreamweaver, which I happily use to run my website since it is excellent software, but without costly third-party material the task would be hopeless.  And Adobe has a team that I guess is in the hundreds, if not the thousands, to support their products.  Juan is basically a one-man shop, with help from the likes of the PixInsight Coffee folks, the superb script writers, and lesser contributors like myself who do what we can to contribute since we get so, so much in return.

You yourself clearly recognize the worth of PixInsight, otherwise you would take up Juan's kind offer of a refund.  All I am asking is that you put aside your vitriol and stop bitching and moaning about what PI is lacking and focus on the issue at hand.  If you have questions, by all means come here for answers.  This is the liveliest product forum site I have had the pleasure to participate in.  The knowledge base is enormous, and Juan regularly contributes his efforts as well.  But please let up with the tiresome rants.  And if you can't, then you are welcome to forgo the benefits of PI, take the refund, and try to find anything remotely comparable for anywhere near the price.




General / Re: The Need for a Large Number of DSLR BIAS Subs
« on: 2016 November 30 11:50:55 »

Again, good analysis, and it gives me a lot of comfort.  My 6303E has a full well capacity of 100k e-.  I expose to about 40% and shoot five sky flats, which puts me around 200,000 e-, which should be plenty to get me where I want.
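For anyone who wants to check that back-of-the-envelope number, the arithmetic is just full well times exposure fraction times flat count (the 40% level and five-flat count are from my setup above):

```python
full_well_e = 100_000                    # 6303E full well capacity, in electrons
flat_level_e = full_well_e * 40 // 100   # flats exposed to ~40% of full well
n_flats = 5                              # sky flats per filter
total_signal_e = flat_level_e * n_flats  # accumulated flat signal
print(total_signal_e)                    # 200000
```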



General / Re: The Need for a Large Number of DSLR BIAS Subs
« on: 2016 November 30 06:02:43 »

I like the 4x guideline.  It makes a lot of sense, but that still puts folks in the hundreds of calibration subs if they are shooting the typical 30+ lights.

I would be curious to get your thoughts on flats, however, since those are definitely limited by available sky for those of us shooting sky flats.  I get excellent results from only 5 flat subs to build my masters.




Once you get the system you want, check back here when setting up your swap file preferences.  I have found a RAM disk really speeds things up, and I also point to several swap locations on SSDs (64 GB locations should be fine, though I use 128 GB folders).  Check out this quote from Juan as a starter:

New Parallel Swap File Storage

To maximize availability of RAM for processing tasks, the processing history management and masking systems implemented in PixInsight are based on temporary disk swap files. Basically, a swap file is required at each processing step to store the previous image state, so you can undo/redo actions, carry out masking operations, and travel the processing histories of images arbitrarily.

When working with very large images, swap file access can be the most important bottleneck that compromises performance of the entire PixInsight platform. This is particularly relevant on 64-bit systems, where there is no practical limit to image sizes, which opens the door to really huge mosaics and high dynamic range stacks. Note that we are talking of disk swap files in the multi-gigabyte range.

Starting from version 1.4, PixInsight Standard uses parallel disk I/O operations to generate and maintain temporary swap disk files. When two or more *physical* disk drives are available, PixInsight can be configured to spread swap files on a set of physical disks (no specific limit), and read/write them through parallel threads executed concurrently.

The performance gain that can be achieved thanks to parallel disk I/O in PixInsight can be spectacular. For example, with just two Serial ATA 300 disks (not particularly fast drives), PixInsight can easily achieve data transfer rates above 500 and 140 MB/s, respectively for swap read and write operations. This allows working with very large images in PixInsight. For example, with four fast drives configured for parallel swap file storage, you can work with a 32-bit RGB image of 12000×12000 pixels and perform undo/redo operations almost in real time. Note that parallel disk access is even faster —and much more flexible, easier to configure and implement— than RAID 0 storage.

To use parallel swap file access with PixInsight, you need two or more independent, physical disk drives. Do not try to enable this feature using several directories or disk partitions on the same drive, since multiple parallel write operations performed on a single hard disk may be dangerous to the integrity of the drive.

To enable parallel swap file access, select the Edit > Global Preferences main menu option. On the left panel of the Preferences interface, select the Directories and Network item. You can specify a list of folders for swap file storage. However, as we have said, only specify folders on independent physical disk drives.

Note that for a RAM disk, you can point to the same location several times (I point to mine four times before pointing to the swap locations on the separate SSDs in my system).
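To illustrate the striping idea Juan describes (this is my own sketch, not PixInsight's actual code), here is a minimal Python example that splits one swap buffer into stripes, writes each stripe to a different directory on a concurrent thread, and reads them back the same way.  The directory names you would pass in are placeholders for mount points on separate physical drives.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def write_swap_parallel(data: bytes, swap_dirs, name="swap"):
    """Split one swap buffer into equal stripes and write each stripe
    to a different directory (ideally on a different physical drive),
    one thread per stripe. Returns the list of stripe file paths."""
    n = len(swap_dirs)
    stripe = (len(data) + n - 1) // n  # stripe size, rounded up

    def write_stripe(i):
        path = os.path.join(swap_dirs[i], f"{name}.{i:02d}.swp")
        with open(path, "wb") as f:
            f.write(data[i * stripe:(i + 1) * stripe])
        return path

    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(write_stripe, range(n)))

def read_swap_parallel(paths) -> bytes:
    """Read the stripes back concurrently and reassemble the buffer."""
    def read_stripe(p):
        with open(p, "rb") as f:
            return f.read()

    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return b"".join(pool.map(read_stripe, paths))
```

The speedup only materializes when the directories really sit on independent drives; on a single disk the threads just fight over one set of heads, which is exactly the warning in the quote.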



General / Re: The Need for a Large Number of DSLR BIAS Subs
« on: 2016 November 29 11:40:10 »
I like lots of subs for any camera or CCD.  I typically shoot 100 for a very clean master with my 6303E chip.  The key is that you only need to shoot new bias frames once every few months, not with each image set.  I tend to shoot a new dark and bias library every quarter.  Since you are only shooting bias frames every few months and they take very little time, why not go for more rather than less?

For what it's worth,


General / Re: law of diminishing returns
« on: 2016 November 22 09:26:45 »
Checked out his site.  WOW!

Wish List / Re: DynamicCrop - Preserve Aspect Ratio
« on: 2016 November 22 09:07:05 »
+1  I do it manually with a calculator but this would be a nice add.


General / Re: law of diminishing returns
« on: 2016 November 22 08:50:53 »
Mesmerizing discussion, guys!  I really wish we would get into more of these as a community of PI users.  It's great to help folks get over the start-up hurdles, but we have a mass of experienced talent represented in just this thread, not to mention the community at large, that can actually make a difference in helping the new folks go to the next level of analysis.  Now for my two cents, for what little they are worth:

About the only "scientific truth" we know is the basic square root rule, namely you "only" improve SNR at the square root of the number of images taken.  Thus, I double my bang, going from 1 sub to 4 for the "cost" of only 3 extra subs.  I like to shoot 36 subs (sorry Warhen  ::) ) since it works out nicely that I am getting 6x improvement in SNR and going from 5x to 6x "only" cost me 11 subs for a 20% gain.  But the next jump takes 13 for 16.7%, then 15 for 14%, etc., etc. (please feel free to correct my numbers here as I am a liberal arts guy who dabbles in math as a hobby, not a career  :o ).   
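You can check my marginal numbers with a few lines of Python applying the square-root rule (SNR gain over one sub = sqrt(N)):

```python
import math

def snr_gain(n_subs: int) -> float:
    """SNR improvement over a single sub, per the square-root rule."""
    return math.sqrt(n_subs)

# Marginal cost and benefit of each jump to the next whole-number gain:
for target in range(5, 9):               # 5x, 6x, 7x, 8x SNR
    prev, curr = (target - 1) ** 2, target ** 2
    extra = curr - prev                  # additional subs needed
    pct = (target / (target - 1) - 1) * 100
    print(f"{prev} -> {curr} subs: +{extra} subs for {pct:.1f}% more SNR")
```

The loop reproduces the figures above: 25 to 36 subs costs 11 extra for a 20.0% gain, 36 to 49 costs 13 for 16.7%, and 49 to 64 costs 15 for 14.3%.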

So far so good.  It's a "simple" and straightforward analysis.  That is, until you add in everything else we have been talking about so far, and some things we haven't, including the quality of your sky, the number of clear nights available to you, your imaging goals, the dynamics of your CCD, the focal ratio and aperture of your scope, whether you have an anti-blooming gate (some of us have sacrificed that for higher quantum efficiency), and, most importantly, the target you are chasing.

For me, my skies (some of the best skies 7300 ft up a mountain in NM has to offer), and my setup, I am firmly with Warhen that beyond a point (pick a point, any point), you are wasting your time chasing ephemeral gains by stretching out the number of subs you shoot, UNLESS you are striving for a truly difficult and faint target.  But for most targets, your restrictions are based on the resolution of your scope, which comes down to aperture.  At some point you have captured all the meaningful data you can use, and it's time to move on to the next image.

I most heartily welcome all the brickbats you can throw at me.  That's how we all learn.

THANKS for a great discussion!!



General / Re: Integrating Multiple Nights Images
« on: 2016 November 10 07:35:58 »
See, that's the difference between an engineer and a lawyer.  You go all high tech and I go to Lowes. ;D   Never went out to observe without at least two full rolls of Gorilla Tape in my kit.  That stuff is amazing (invented, no doubt, by an aerospace engineer).

General / Re: Integrating Multiple Nights Images
« on: 2016 November 08 12:14:11 »
I assume you are breaking your system down and reassembling it each night; otherwise you wouldn't be having this issue.  The expensive solution is a rotator, but that is overkill in this case.  The sky doesn't change enough from night to night to make a difference if you're properly aligned each night (I routinely shoot 30+ hours of subs that can take weeks to complete depending on clouds and moon).  What you need to do is be disciplined in how you set up each night, so that the image train is as close as possible to what you used the previous nights.  Even just adding bits of duct tape to mark how your scope and camera are aligned will make the task so much easier.  Also, try to set up your mount in roughly the same orientation.  Again, before moving to a permanent observatory, I would leave marks at my setup location to make sure I could get back to approximately the same position.  Then the areas that don't overlap will be minimized and easily cropped.  What doesn't work well is having subs at all angles; then your stacks will be a mess, and only the small center where everything overlaps is workable.

Hope that helps,

