Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - javajunkie2121

Pages: [1]
1
General / Linear Fit Question
« on: 2017 August 16 17:36:36 »
Hi:

I've seen some tutorials suggest using linear fit on the (stretched) L channel of the Master RGB to match it to the Master Luminance before LRGB combination.

I've also seen a tutorial suggest that linear fit should instead be used to match the linear (unstretched) Master-Lum, Master-R, Master-G and Master-B before linear RGB combination, but not be repeated at the LRGBCombination step, since you want the brightness to reflect primarily the Master-Lum.

I'm wondering what most folks do?

jeff
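For reference, the operation being discussed can be sketched in NumPy. PixInsight's LinearFit uses a robust fit with rejection limits; this plain least-squares version is a simplified, hypothetical stand-in that only illustrates the idea of matching one image's scale and offset to another:

```python
import numpy as np

def linear_fit(target, reference):
    """Match target's scale/offset to reference by least-squares fit of
    reference ~ a*target + b, then apply the correction. A simplified
    stand-in for PixInsight's LinearFit (which is outlier-robust)."""
    t = target.ravel()
    r = reference.ravel()
    a, b = np.polyfit(t, r, 1)          # slope, intercept
    return a * target + b

# Toy check: a frame that is exactly 2*reference + 0.1 is mapped back
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = 2.0 * ref + 0.1
fitted = linear_fit(tgt, ref)
print(np.allclose(fitted, ref))         # → True
```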

2
General / RGBWS Question
« on: 2017 August 16 17:30:42 »
Hi:
I've seen tutorials mention using RGBWorkingSpace to set the R:G:B ratio when extracting the luminance from an RGB master, either to use as a mask or to linear fit it to the Lum master to equalize the luminance portion of the RGB master.

Do folks do this?  If so, do you set the R:G:B contribution to 1:1:1, lower the gamma setting to 1.0, and de-select the sRGB setting?

jeff
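For context: with weights set to 1:1:1, gamma 1.0, and the sRGB curve disabled, the luminance extracted from a linear RGB image should reduce to a plain average of the three channels. A minimal sketch of that special case (the function name and toy values are illustrative):

```python
import numpy as np

def luminance_111(rgb):
    """Luminance of a linear RGB image under 1:1:1 weights and gamma 1.0:
    just the per-pixel mean of the channels. rgb has shape (H, W, 3)."""
    return rgb.mean(axis=2)

rgb = np.dstack([np.full((4, 4), 0.2),   # R
                 np.full((4, 4), 0.4),   # G
                 np.full((4, 4), 0.6)])  # B
L = luminance_111(rgb)
print(L[0, 0])                           # (0.2 + 0.4 + 0.6) / 3
```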

3
General / DBE: repeat vs push tolerance
« on: 2017 May 07 14:14:19 »
Hi:

I've had some tough gradients (usually RGB) that are hard to get rid of with DBE, even if I push the tolerance to >1.7. Splitting channels and doing separate DBE on each color master seemed worse.

If you need more gradient removal, is it better to do one application of DBE with a higher tolerance (assuming you're not seeing loss of real, desired signal), or to keep the tolerance lower and apply DBE more than once?

jeff
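For context on what DBE is fitting: DBE proper interpolates a background model through user-placed samples with surface splines. As a simplified, hypothetical stand-in, a low-order 2D polynomial fitted to sparse background samples gives the idea (all names and the toy gradient are illustrative):

```python
import numpy as np

def fit_background(img, ys, xs, order=2):
    """Least-squares 2D polynomial background from samples at (ys, xs).
    A crude stand-in for DBE's spline-based background model."""
    def design(y, x):
        cols = [x**i * y**j for i in range(order + 1)
                            for j in range(order + 1 - i)]
        return np.stack(cols, axis=-1)
    A = design(ys.astype(float), xs.astype(float))
    coef, *_ = np.linalg.lstsq(A, img[ys, xs], rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return design(yy.astype(float), xx.astype(float)) @ coef

# A pure linear gradient is recovered exactly from 50 random samples
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
gradient = 0.01 * xx + 0.005 * yy + 0.1
rng = np.random.default_rng(1)
ys, xs = rng.integers(0, h, 50), rng.integers(0, w, 50)
model = fit_background(gradient, ys, xs)
print(np.allclose(model, gradient))      # → True
```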

4
General / Deconvolution Wavelet Regularization settings
« on: 2016 August 09 17:23:48 »
Hi:  I've seen some processing examples using deconvolution where the wavelet regularization settings are changed from their defaults (using the Regularized Richardson-Lucy algorithm). I can't find documentation for these settings (Gaussian vs. Poisson, the number of wavelet layers, and the per-layer noise threshold and noise reduction values).

any advice about tweaking these settings during processing?

jeff
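For context, the core Richardson-Lucy update that the regularized variant builds on can be sketched in 1-D. The wavelet regularization being asked about damps noise between these iterations and is omitted here; this is only the unregularized skeleton, with illustrative values:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Plain 1-D Richardson-Lucy deconvolution (no regularization)."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)   # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0                                  # a single "star"
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, 200)
print(int(np.argmax(restored)))                  # → 10
```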

5
Hi All:

I've got some longer exposure frames of a dim galaxy group, which unfortunately has one magnitude 6 star right in the middle that's heavily saturated.  I was wondering how folks deal with this type of situation:

- using different degrees of stretch and combining them using a mask (I'm afraid I'll lose other faint stars)?
OR
- trying GradientHDRComposition/HDRComposition and combining images of different exposures akin to that used for M42?

or some other way to replace this super saturated star in the longer exposures?

jeff
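The mask-based option mentioned above can be sketched as follows: where the long exposure is saturated, substitute data from a shorter exposure scaled to the same flux. The 10x exposure ratio, the saturation level and the toy pixel values are all illustrative, and real HDR tools feather the mask so the seam doesn't show (omitted here):

```python
import numpy as np

def hdr_combine(long_exp, short_exp, exposure_ratio, sat_level=0.98):
    """Replace saturated long-exposure pixels with scaled short-exposure
    data. A hard binary mask; real tools feather the transition."""
    mask = (long_exp >= sat_level).astype(float)
    return (1 - mask) * long_exp + mask * short_exp * exposure_ratio

long_exp = np.array([0.2, 0.5, 1.0, 1.0, 0.4])    # two clipped pixels
short_exp = np.array([0.02, 0.05, 0.12, 0.15, 0.04])
result = hdr_combine(long_exp, short_exp, exposure_ratio=10.0)
print(result)   # clipped pixels replaced by 1.2 and 1.5
```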

6
General / Use of RGBWorkingSpace Process
« on: 2015 November 17 16:14:15 »
Hi All:

I'm trying to determine when I should be using the RGBWorkingSpace process with LRGB (+/- NB) image processing.

I've seen some refer to setting weights to 1:1:1 in RGBWorkingSpace before extracting the L component from RGB to LinearFit to Lum master before LRGBCombination.  I've seen other tutorials mentioning RGBWorkingSpace when combining NB with R,G,B channels.

Should I be using this process at the start of every project, and perform it once as a part of the saved project for all future processing steps? Or must this be done repetitively before certain steps?  Can this be set to be a default for all processing?

jeff

7
General / Adding Ha data to unbinned L and binned RGB
« on: 2015 November 07 08:51:59 »
Hi:

I'd like to add some Ha data to a galaxy project to get more detail. I have unbinned luminance and binned RGB, and I'm uncertain whether or not to bin the Ha shots; the situation is dark-sky imaging with an f/7 scope and the KAF-8300 chip.  I thought folks would bin the narrowband to match the binned RGB for combining, but I see some folks listing unbinned L and Ha with binned RGB?

any suggestions?

jeff
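The resampling step implied by the question: 2x2-binned data has half the pixel scale of unbinned data, so matching them means upsampling by 2 in each axis. Nearest-neighbour shown for clarity; real registration tools use smoother interpolation:

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsample: each binned pixel becomes a
    2x2 block at the unbinned pixel scale."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

binned = np.arange(4.0).reshape(2, 2)    # [[0, 1], [2, 3]]
print(upsample2x(binned).shape)          # → (4, 4)
```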

8
General / Which process for linear noise reduction
« on: 2015 February 05 18:39:55 »
Hi: 

I've had some luck with a light touch of ATWT in the past for linear state noise reduction.

I see MLT and MMT and I'm wondering what folks use for linear state noise reduction?  MLT or MMT?  Both?  I'm trying to experiment with these processes.

jeff
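For context, ATWT, MLT and MMT all rest on the same idea: split the image into detail layers at increasing scales, then attenuate noise in the small-scale layers. A 1-D sketch of the layer split using the B3-spline kernel and the à trous scheme (a simplified illustration, not the tools' actual implementation):

```python
import numpy as np

def wavelet_layers(signal, n_layers=3):
    """À trous wavelet decomposition: detail layers plus a large-scale
    residual; they sum back to the original signal."""
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0
    layers, smooth = [], signal.astype(float)
    for level in range(n_layers):
        # insert 2**level - 1 zeros between kernel taps ("with holes")
        k = np.zeros(4 * 2**level + 1)
        k[::2**level] = kernel
        smoother = np.convolve(smooth, k, mode="same")
        layers.append(smooth - smoother)   # detail at this scale
        smooth = smoother
    layers.append(smooth)                  # large-scale residual
    return layers

sig = np.sin(np.linspace(0, 4 * np.pi, 128))
layers = wavelet_layers(sig)
print(np.allclose(sum(layers), sig))       # → True
```

Noise reduction then amounts to shrinking small coefficients in the first one or two detail layers before summing back.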

9
Hi:

would appreciate some advice.

I'm moving from OSC to mono LRGB imaging and trying to learn the new workflow. I've looked at some tutorial examples; they differ a bit in details regarding the timing of registration, the number of times you register stacks, and when to execute DBE.

1. Let's say I've got unbinned (1x1) luminance, but binned 2x2 R, G and B frames, so the RGB is in need of upsampling.

It's been suggested in the IP4AP series that once you calibrate the L-R-G-B frames with their respective flats, darks, bias, etc., you register all the individual L-R-G-B frames together in one big batch, then separately integrate the different channels. Is this superior to separately registering and integrating the L, then the R, G and B into their respective stacks?

If you register all the frames together in one big batch, and then separately integrate, I presume the R-G-B were already upsampled if you used the luminance as the reference.  If the next step is to combine the R, G and B channels, do you need to reuse StarAlignment if you've already registered the images together, especially if you've applied the same DynamicCrop instance to the L-R-G-B stacks to maintain registration?

Downstream, do you need to reuse StarAlignment again to combine the L with the RGB composite, or would the LRGBCombination tool maintain the registration?

2. should you do DBE on each individual R, G and B master first, and then combine the channels into the RGB composite, or should you combine the channels into a composite RGB first, and do DBE on the composite? 

jeff

10
General / BPP Not recognizing Master Flat
« on: 2014 November 04 18:40:20 »
Hi:

I have PI version 01.08.03.1115 updated as of November 4 2014.

I had used the ImageCalibration and ImageIntegration processes to create a bias-reduced Master Flat.  I have been trying to use this Master Flat with the BatchPreProcessing script for my lights: I have "use master flat" checked and have loaded it into the correct tab, but I get a warning under diagnostics that I have not selected flat frames to calibrate the light frames.  This had worked before; not sure what's going on?

jeff

11
General / use of light pollution filters with LRGB mono imaging
« on: 2014 October 12 17:40:25 »
Hi All:

I'm moving from a one-shot color CCD to a mono CCD with filters to do LRGB imaging. I live in a light-polluted area.

Do folks living in urban settings with light pollution use a light pollution filter with LRGB imaging? 

If so, is the LP filter in front of all filters?
Supposedly the Baader filters I am getting have a gap between red and green to account for streetlights, but Kayron Mercieca has posted information on his site that it still isn't enough to eliminate light pollution from the red channel.

Or, do you all substitute the light pollution filter for the luminance filter?

jeff

12
General / Best time in processing to use SCNR
« on: 2014 October 11 12:09:09 »
Hi:

when is the optimal time to use SCNR: the linear stage, or nonlinear?

I know it should come after color calibration, but I've seen mention of issues with SCNR if you've applied a lot of smoothing to the image. So if I'm using noise reduction (ATWT/MMT in linear, TGVDenoise in nonlinear), should SCNR precede these steps?

jeff
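For context on what the tool does: SCNR's "average neutral" green protection reduces, for each pixel, the green value to no more than the mean of red and blue (at amount = 1). A sketch of that formula; the toy pixel values are illustrative:

```python
import numpy as np

def scnr_average_neutral(rgb):
    """SCNR average-neutral at full amount: g' = min(g, (r + b) / 2).
    rgb has shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out = rgb.copy()
    out[..., 1] = np.minimum(g, 0.5 * (r + b))
    return out

px = np.array([[[0.3, 0.8, 0.2]]])         # green-dominated pixel
print(scnr_average_neutral(px)[0, 0, 1])   # → 0.25
```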

13
Hi: I've watched a couple tutorials trying to learn to use the BPP script...I'm confused about two issues:

1) do the hot/cold pixel removal algorithms still work well enough if you didn't dither while imaging?

2) in the past with Nebulosity (with an Atik OSC using a Sony low-noise chip), I calibrated the lights with a dark master (not bias-reduced) and a flat master (bias-reduced); I didn't bother with dark flats.  In PI with the BPP script, I see folks create the dark master (no bias reduction) and flat master (with bias reduction, and maybe using the dark master as well?) using ImageIntegration, and then they list the master dark, master flat and master bias to use with the lights in the BPP script.

I'm not sure what prevents the script from subtracting both the dark (which contains bias) and the bias from the lights during calibration.  Also, one of the tutorials mentioned that you could use the dark master as well as the bias master to calibrate the flats while making a flat master, but I created my dark frame library in a separate session from the imaging, so the darks match the lights for exposure and temperature but not necessarily the orientation and focus of the camera in the OTA, etc.  Not sure if that matters given the scope cover is on, but is it valid to use a master dark from a separate session for calibrating flats, or do most folks just calibrate the flats with the bias frames?

jeff
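The arithmetic behind the first question can be made concrete: if the master dark was not bias-subtracted, it already removes the bias from the light, so subtracting a master bias as well would remove it twice. A sketch with illustrative toy values:

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """light minus a master dark that still contains bias, divided by a
    bias-subtracted master flat normalized to unit mean. Subtracting a
    master bias here too would remove the bias a second time."""
    flat_norm = master_flat / master_flat.mean()
    return (light - master_dark) / flat_norm

light = np.array([[110.0, 210.0]])      # signal + dark current + bias
dark = np.array([[10.0, 10.0]])         # dark current + bias
flat = np.array([[1.0, 1.0]])           # already bias-subtracted
print(calibrate(light, dark, flat))     # → [[100. 200.]]
```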

14
General / Help with approach to noise reduction
« on: 2014 July 03 17:53:10 »
Hi All:

I've been watching tutorials and trying out the various noise reduction processes available (ATrousWaveletTransform, TGVDenoise, ACDNR, MultiscaleLinearTransform, SCNR), but as a beginner I'm having trouble establishing a workflow with these: linear vs. nonlinear state, single vs. multiple applications, etc.

While I'm always trying to gather more and better data to improve SNR, the reality for me in the light-polluted Bay Area with my OSC CCD is a lot of background noise in images from stacked subs that are often less than 5 minutes or so. I aim for at least an hour's worth of data, but I don't always achieve that given weather, time limitations, satellite trails, etc.

I am struggling to get backgrounds to not look blotchy or too black (be it a galaxy or a planetary nebula or a glob cluster).

I'd appreciate some advice on which noise reduction process(es), and where in the workflow, are best for beginners to try. Assume I've started with a cropped OSC RGB image and have already gotten through DBE, BackgroundNeutralization and ColorCalibration.

jeff

15
Hi:

I was watching the IP4AP tutorial on PixInsight deconvolution. I have OSC images; do I need to manually extract a pseudo-luminance component from the single OSC image to use deconvolution and other processes that primarily affect luminance? Or do I stay with the single image and let the process work on the luminance component?

jeff
