Author Topic: New Image Processing Question  (Read 4363 times)

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
New Image Processing Question
« on: 2010 August 08 08:59:57 »
As is usual for me, after I get about 3 twenty-minute sub-frames I start to "see" how the image is developing. Last night was no different, and although I'm still calibrating the images in MaxIm because I de-bloom there using Ron Wodaski's De-Bloomer plug-in and also remove hot/dead pixels, I tried to do the rest in PI. What I did was star alignment, image integration using Median (because Average left behind a lot of gamma ray hits and missed hot pixels), LRGB combine, dynamic crop, DBE, and then HST and Curves.

Now I was curious: what tool or process is used to remove hot/dead pixels? Also, using curves and levels in PS, I was always taught to do a curves adjustment, then a levels adjustment to just raise the black point, followed by another curves, and so on till I got the result I wanted. Is this typical of these two tools in PI, and if so, can both be open at the same time so I can simply switch between the two? Would they share the same preview window or use separate ones?

Thanks,
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline Jack Harvey

  • PTeam Member
  • PixInsight Padawan
  • ****
  • Posts: 975
    • PegasusAstronomy.com & Starshadows.com
Re: New Image Processing Question
« Reply #1 on: 2010 August 08 11:07:03 »
OK, easy part first. Yes, you can have Histo and Curves open at the same time and use them back and forth. Highlight the tool you wish to use, be sure you have enabled the real-time preview, and you will see your changes in real time.

I have found that the pixel rejection in ImageIntegration works well to get rid of both hot pixels and cosmic ray hits. It will take a short session for you to get your settings correct. I use Winsorized rejection and enable the settings for the clipping masks, as well as clip high and low pixels. Others like the sigma rejection settings so they can adjust the % clipping. The clipping masks can be evaluated using STF or Histo to see what you have clipped, which should let you tune your settings so you are not clipping good data. HTH
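
As a rough illustration of the idea (this is only a numpy sketch of generic Winsorized sigma clipping, not PixInsight's actual implementation - the function name, iteration count and Winsorization constant are just assumptions for the example):

Code:
import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iterations=3):
    """Per-pixel Winsorized sigma clipping over a stack of registered frames.

    stack : float array of shape (n_frames, height, width)
    Returns (mean of surviving pixels, boolean rejection mask).
    """
    data = np.asarray(stack, dtype=np.float64)
    rejected = np.zeros(data.shape, dtype=bool)

    for _ in range(iterations):
        kept = np.where(rejected, np.nan, data)
        centre = np.nanmedian(kept, axis=0)

        # Winsorize: clamp extreme values before estimating sigma, so that
        # outliers (hot pixels, cosmic rays) do not inflate the dispersion.
        sigma = np.nanstd(kept, axis=0)
        winsorized = np.clip(data, centre - 1.5 * sigma, centre + 1.5 * sigma)
        sigma = winsorized.std(axis=0)

        # Asymmetric clipping: reject low and high outliers independently.
        new_rejected = ((data < centre - sigma_low * sigma) |
                        (data > centre + sigma_high * sigma))
        if np.array_equal(new_rejected, rejected):
            break
        rejected = new_rejected

    # Average only the surviving pixels; the rejection mask is what the
    # clipping masks let you inspect.
    return np.nanmean(np.where(rejected, np.nan, data), axis=0), rejected

The rejection mask plays the same role as the clipping masks mentioned above: stretch it with STF and check that only hot pixels and ray hits are being thrown away, not real signal.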
Jack Harvey, PTeam Member
Team Leader, SSRO/PROMPT Imaging Team, CTIO

Offline Carlos Milovic

  • PTeam Member
  • PixInsight Jedi Master
  • ******
  • Posts: 2172
  • Join the dark side... we have cookies
    • http://www.astrophoto.cl
Re: New Image Processing Question
« Reply #2 on: 2010 August 09 14:43:58 »
Hi Steve

Quote
Now I was curious, what tool or process is used to remove hot/dead pixels?

You may use the DefectMap process that is part of the new release. Basically, you have to manipulate your image (or calibration frames, like the master dark) to create a "defect map" image, where bad pixels are signalled by a 0 value and good pixels by 1. Then you use this map to replace the bad pixels: the operations in that process exclude any bad pixels in the neighbourhood, thus yielding better results than a naive convolution (or morphological filter) and a mask.
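
To illustrate the idea (a minimal numpy sketch of the concept only, not the DefectMap code itself; the threshold used to build the map below is an arbitrary assumption), replacing each bad pixel with a statistic computed from its good neighbours only could look like this:

Code:
import numpy as np

def apply_defect_map(image, defect_map, radius=2):
    """Replace bad pixels (defect_map == 0) with the median of the good
    pixels in the surrounding (2*radius+1) x (2*radius+1) window."""
    fixed = image.copy()
    h, w = image.shape
    bad_rows, bad_cols = np.nonzero(defect_map == 0)

    for r, c in zip(bad_rows, bad_cols):
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        window = image[r0:r1, c0:c1]
        good = defect_map[r0:r1, c0:c1] != 0
        if np.any(good):
            # Only good neighbours vote, so a clump of hot pixels cannot
            # contaminate the replacement value (unlike a plain convolution).
            fixed[r, c] = np.median(window[good])
    return fixed

# One way to build the map from a master dark (thresholds are arbitrary here):
# hot = master_dark > np.median(master_dark) + 10.0 * np.std(master_dark)
# defect_map = np.where(hot, 0.0, 1.0)
# clean = apply_defect_map(light_frame, defect_map)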


Quote
Also, using curves and levels in PS I was always taught to do a curves adjustment and then a levels adjustment to just raise the black point followed by another curves and so on till I got the result I wanted. Is this typical of these two tools in PI and if so, can both be open at the same time and simply switch between the two? Would they share the same preview window or separate ones?

In my experience, it is better to set the black and white points as early as you can, and then leave them fixed (unless you apply some heavy noise reduction that creates unused dynamic range, especially on the dark side). Then go to curves.
Having said that, as a side note, there are a few "tricks" or "tips" that were common in PS, mainly to avoid gaps and posterization in the image due to the poor bit depth of the working image (8 or 16 bits). If you use 32-bit images, there is no need for such things. You may apply a very aggressive midtones transfer function (for example) to increase brightness, and then do the opposite, and there will be minimal deterioration due to rounding errors. This is part of the new paradigm that PixInsight has been trying to incorporate into our minds for a long time now <vbg>, and one of the reasons I say that we don't need processing layers ;)
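
To see the rounding-error point in numbers: the midtones transfer function is commonly written as MTF(x; m) = (m - 1)x / ((2m - 1)x - m), and applying it again with parameter 1 - m undoes it. The sketch below (illustrative only; the 16-bit path simply simulates quantisation to 65535 levels, and the test image is random data) compares a 32-bit float round trip with a 16-bit integer one:

Code:
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps m -> 0.5, keeps 0 and 1 fixed."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

rng = np.random.default_rng(0)
image = rng.random((1024, 1024)).astype(np.float32)  # stand-in linear image

m = 0.05  # very aggressive brightening, undone with the inverse (1 - m)

# 32-bit float round trip: forward stretch, then inverse stretch.
f32 = mtf(mtf(image, m), 1.0 - m)

# Simulated 16-bit integer round trip: quantise to 65535 levels at each step.
q = np.round(mtf(image, m) * 65535.0) / 65535.0
i16 = np.round(mtf(q, 1.0 - m) * 65535.0) / 65535.0

print("max error, 32-bit float :", np.max(np.abs(f32 - image)))
print("max error, 16-bit levels:", np.max(np.abs(i16 - image)))

The float round trip comes back essentially exact, while the quantised one loses noticeably more precision at each pass - which is exactly why those old PS workarounds existed in the first place.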
Now for a more direct answer: yes, you may open both processes at the same time. In fact, you may open any number of processes, modify them, and create icons without a single image opened. Processes and images are living objects; they interact but do not need each other. This is the object-oriented paradigm.
If you want to preview the results "live" with an RTP, make sure that the owner of the RTP is the process you are modifying (take a look at the icon in the bottom-right corner of the RTP window; to change ownership, activate the RTP button in the appropriate process window).
Regards,

Carlos Milovic F.
--------------------------------
PixInsight Project Developer
http://www.pixinsight.com

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: New Image Processing Question
« Reply #3 on: 2010 August 09 15:01:12 »
Hi Steve,

Quote
Also, using curves and levels in PS I was always taught to do a curves adjustment and then a levels adjustment to just raise the black point followed by another curves and so on till I got the result I wanted

It's not really necessary to do the PS 'back-and-forth dance' when working in PI. If you really need to use Curves (and, quite often, you do NOT need to - HDRWT will achieve 'more', for 'less'), then just use Curves as much as you need to, with as many 'tweaks' as you think your image needs, and then follow up with a Histo to sort out the black point (noting that PixInsight will tell you EXACTLY how many pixels you are 'clipping' off the bottom end, assuming you are willing to sacrifice ANY).
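
For what that black-point step amounts to numerically (just a sketch - the function name is made up, and PI's Histogram tool does all of this for you, including the clipped-pixel count):

Code:
import numpy as np

def set_black_point(image, black, white=1.0):
    """Clip at 'black', rescale [black, white] to [0, 1], and report how
    many pixels the chosen black point throws away."""
    clipped = np.count_nonzero(image < black)
    total = image.size
    print(f"clipping {clipped} of {total} pixels "
          f"({100.0 * clipped / total:.4f}%) below {black}")
    return np.clip((image - black) / (white - black), 0.0, 1.0)

# Example with a stand-in stretched image:
# rng = np.random.default_rng(1)
# img = rng.random((100, 100)) ** 4
# out = set_black_point(img, black=0.001)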

PI is just 'more powerful' than PS - and MANY of the 'fudge-arounds' that are needed in PS 'can be' continued in PI, but you soon realise that they are just not needed, and you adjust your processing methods to suit (and usually never look back, except - sometimes - to wonder "why?"!)

Because PI works, natively, in (at least) a 32-bit Floating-Point environment (compared to an 8- or 16-bit Integer environment in PS), you can make ONE bold Curves adjustment, followed by ONE bold Histo adjustment - and achieve the same as several iterations of 'Curves-Histo' in PS. Those 'bold' adjustments can have the precision of a scalpel, simply because you can expand the 'adjustment zoom' of the control sliders in PI - which you CANNOT do in PS.

Don't worry though - it took me almost a year to make the full transition away from the 'textbook' procedures provided for PS, a year in which I grew to appreciate just how MUCH control I had in PI.

HTH

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New Image Processing Question
« Reply #4 on: 2010 August 09 15:06:08 »
It seems my issue is old habits from "other" software working at a lower bit depth. I'm curious about HDRWavelets doing more for less than a curves adjustment. Can you expand on that, or suggest where in the processing order this might take place?

Thanks,
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Re: New Image Processing Question
« Reply #5 on: 2010 August 09 15:10:41 »
Hi Steve,

I typically do HDRWT after the non-linear stretch and ACDNR steps but before saturation curves.
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity

Offline Niall Saunders

  • PTeam Member
  • PixInsight Jedi Knight
  • *****
  • Posts: 1456
  • We have cookies? Where ?
Re: New Image Processing Question
« Reply #6 on: 2010 August 09 15:21:03 »
Hi Steve,

The one thing that 'helps' HDRWT is to have moved the basic image data OUT of a 'linear range' - in other words, you need to have applied at least a 'moderate' mid-tone transfer function to get the image data spread across the 0.0-1.0 dynamic range.

Usually, by then, I will have already combined into RGB (as necessary), cropped the crud off the image edges, and run one or more iterations of DBE. I may also have run ACDNR 'first' (or at least 'prior to' HDRWT), simply because doing things in 'that' order has 'worked for me' in the past.

I very rarely apply Curves now - certainly not at the early stages of processing, and certainly not for the same reasons as you are (and I was) probably used to in PS. I am mostly using Curves to help balance out gross colour imbalance, or to tweak overall image saturation. I do occasionally try using Curves for local contrast enhancement - but often reject the result because HDRWT has actually done as good a job as the image can tolerate anyway.

Then, as I have said so often in the past, I bin the whole damn lot, and wish that I had clearer skies / more data / more pixels / a cooler camera / no job to have to go to in the morning / etc., etc.  ::)

Cheers,
Niall Saunders
Clinterty Observatories
Aberdeen, UK

Altair Astro GSO 10" f/8 Ritchey Chrétien CF OTA on EQ8 mount with homebrew 3D Balance and Pier
Moonfish ED80 APO & Celestron Omni XLT 120
QHY10 CCD & QHY5L-II Colour
9mm TS-OAG and Meade DSI-IIC

Offline sreilly

  • PixInsight Padawan
  • ****
  • Posts: 791
    • Imaging at Dogwood Ridge Observatory
Re: New Image Processing Question
« Reply #7 on: 2010 August 09 15:47:07 »
So just to put this in perspective, what is considered a typical processing routine and do you typically run HDRWT on both nebula and galaxy images?
Steve
www.astral-imaging.com
AP1200
OGS 12.5" RC
Tak FSQ-106ED
ST10XME/CFW8/AO8
STL-11000M/FW8/AO-L
Pyxis 3" Rotator
Baader LRGBHa Filters
PixInsight/MaxIm/ACP/Registar/Mira AP/PS CS5

Offline Nocturnal

  • PixInsight Jedi Council Member
  • *******
  • Posts: 2727
    • http://www.carpephoton.com
Re: New Image Processing Question
« Reply #8 on: 2010 August 09 15:52:09 »
For me:

- DBE
- color calibration
- histogram stretch
- ACDNR
- hist. to reclaim dark range
- SCNR
- HDRWT
- L masked saturation curve (roughly sketched at the end of this post)

I generally don't run HDRWT on nebulae as they don't always have that much DR.

Of course this is all open for discussion and I often deviate as needed. I described most of this in my three AstroPhoto Insight articles.
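
As a rough sketch of that last "L masked saturation curve" step (illustrative only - the luminance weights and the simple push-away-from-luminance boost are assumptions; in PI this would typically be a saturation curve in CurvesTransformation applied through a luminance mask):

Code:
import numpy as np

def l_masked_saturation(rgb, boost=1.5):
    """Increase colour saturation through a luminance mask.

    rgb   : float array, shape (h, w, 3), values in [0, 1], already stretched
    boost : values > 1 push the channels away from the per-pixel luminance
    """
    # Simple luminance estimate; the exact RGB weights are an assumption here.
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    lum = lum[..., None]

    # Saturation boost: push each channel away from the luminance.
    saturated = np.clip(lum + boost * (rgb - lum), 0.0, 1.0)

    # Luminance mask: bright, high-signal regions get the full boost, while
    # the faint background (mostly noise) is left nearly untouched.
    mask = lum
    return rgb * (1.0 - mask) + saturated * mask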
Best,

    Sander
---
Edge HD 1100
QHY-8 for imaging, IMG0H mono for guiding, video cameras for occultations
ASI224, QHY5L-IIc
HyperStar3
WO-M110ED+FR-III/TRF-2008
Takahashi EM-400
PixInsight, DeepSkyStacker, PHD, Nebulosity