PixInsight Forum (historical)
PixInsight => Gallery => Topic started by: rodmichael on 2017 July 09 12:52:07
-
I am posting my first attempt at Astrophotography and photo processing with PixInsight 1.8 (Ripley). This represents only my first stretch with the Histogram Transformation tool (that's how far I am in the Tutorial #2, PI-6). All pre-processing and post-processing has been accomplished with PI. In spite of the rigor of the software, it is really a great piece of work. Thanks to the PI team. They truly have to be imagers to have developed such a useful tool.
I am eager for critical assessment of this first effort by all you experts. I began capturing light frames of various DSOs in March this year. Since that time I have been troubleshooting every aspect of equipment and software for capture of appropriate frames. The learning curve has been quite a bit steeper than I had imagined.
I began capturing light from NGC 6960 in mid-June, actually the Eastern Veil (NGC 6992-3) first. I am in the midst of post-processing those frames as well and I am starting to capture Pickering's Triangle to complete the Veil nebula for now.
-
That is truly a very nice image - for a beginner just starting out in the world of astroimaging. Sure, there are several steps that could be taken to improve the image, but the unprocessed data suggests that you have certainly worked hard to get this far.
Before anyone starts piling on advice, perhaps you could let us know what your raw image set consisted of, what your equipment was, which processes you have implemented thus far, and in which order (including your pre-processing, image calibration, steps).
Keep up the good work!
-
Thanks for the very nice comments. Yours is indeed high praise from a recognizable name in this activity.
I am glad to provide information:
Equipment:
Astrograph: Celestron RASA, 11", f/2.2
Imaging camera: QSI 683WS-5
Filters: Astrodon SHO 5nm
Mount: Celestron CGX
Guide Camera: SX Lodestar X2
Guidescope: Orion Shorttube 80mm
Guide software: Stark Labs PHD2
Acquisition Software: SGP
Pre-processing: PI using pre-processing batch script with default settings
Post-Processing: PI including script for multi-channel synthesis using the SHO-AIP process (SHO = RGB), dynamic crop, fast-rotation, ABE modelization, background neutralization, color calibration, and histogram transformation (for HT used auto STF and then fine tuned). That's where I am in the tutorial. I plan on starting PI-7 in Tutorial #2 to continue fine-tuning the image.
Exposure information:
Total exposure: 6.33h
Frames: Total = 38 x 600s each (about 7 different nights)
Light Frames: SII=12; Ha=13; OIII=13
Dark Frames: 36 x 600s each
Bias Frames: 35
Flat Frames: 75 (25 each filter) using Aurora flatfield device
Camera Temperature: -22C
Binning: 1x1
Is that what you were looking for?
Thanks again for the nice compliment.
-
Exactly - a well-documented summary of the image acquisition and processing phases. You learn as much from that as anything else! Apart from anything else, it means that you can go back over your efforts and see where 'new knowledge' might help improve matters at earlier points in the process.
Hopefully you will also find a method to store and document the Processes and Scripts that you are using (by saving Process Icons, Process History and full PixInsight sessions - all with meaningful names - and by using the user-comment fields that have been provided to let you add notes to the processes you have used, perhaps reminding you why you chose the parameters that you did). Don't forget to save interim images either - especially those that were used as masks (along with the processing history used to create them in the first place - a crucial step that can often be overlooked).
It can also be useful, when Cropping an image for example, to first use Dynamic Crop Process to get things where you want them to be, and to then transfer the 'dynamic' variables over to a 'Static' Crop Process which can be easier to re-implement at a later date.
Sometimes it is these little things that don't make it into the 'big tutorials' !
I am looking forward to seeing your results after you drop down the small-scale, low-SNR noise (most of which initially seems to be 'Luminance' noise rather than 'Chrominance' noise - but that can be difficult to analyse and identify from a compressed JPEG-type image aimed for display on the Web). Just don't try 'too hard' - less can often be more when it comes to noise removal.
Have fun :)
-
Great first attempt. Try using SCNR green set to Average Neutral to start--that should turn the green to gold--or ruddy. Also--a bit of TGVDenoise would suit this image well. Extract a lum and use that as local support. Set TGVDenoise to RGB and start with a setting of 2.0 (first slider) and 2.0 (second slider). Start with a -4 exponent for the second slider. You may not see much of an effect so you may have to switch to -3. It's trial and error. The goal is to use the minimum amount of noise control to achieve the desired effect. This will smooth out the background and the nebula will pop.
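For the curious, the core of SCNR's Average Neutral protection is simple: the green channel is capped at the average of red and blue. Here is a toy per-pixel Python sketch of that idea (a simplification for illustration, not PixInsight's actual code):

```python
def scnr_average_neutral(r, g, b, amount=1.0):
    """Simplified SCNR 'Average Neutral' green removal for one pixel
    with [0, 1] intensities.  Green is capped at the mean of red and
    blue; 'amount' blends the correction in (1.0 = full strength)."""
    g_capped = min(g, (r + b) / 2)
    return r, (1 - amount) * g + amount * g_capped, b

# A green-dominant pixel loses its cast; red and blue are untouched:
print(scnr_average_neutral(0.25, 0.8, 0.75))  # -> (0.25, 0.5, 0.75)
```

Because only pixels where green exceeds the red/blue average are touched, neutral backgrounds and gold/ruddy nebulosity pass through unchanged.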
Great image. Do you like the RASA? I have the C11Edge and love it.
Rodd
-
Great first attempt. Try using SCNR green set to Average Neutral to start--that should turn the green to gold--or ruddy. Also--a bit of TGVDenoise would suit this image well. Extract a lum and use that as local support. Set TGVDenoise to RGB and start with a setting of 2.0 (first slider) and 2.0 (second slider). Start with a -4 exponent for the second slider. You may not see much of an effect so you may have to switch to -3. It's trial and error. The goal is to use the minimum amount of noise control to achieve the desired effect. This will smooth out the background and the nebula will pop.
Great image. Do you like the RASA? I have the C11Edge and love it.
Rodd
rdryfoos, THANKS, but I'm afraid much of what you've suggested is falling on deaf ears (or blind eyes). I'm just starting the PI tutorial #2 which largely deals with NR in the non-linear image. But I'm not there yet, wherever "there" is. The processes you are referring to, I guess, must be for use in the non-linear image. I'll keep them at the top of my list as I "progress." I may come back to you in a later post. Off to Denver this weekend so probably won't be doing much with this until Sunday or Monday.
BTW, I'm not educated enough to comment too knowledgeably about the RASA. My comparative baseline is meager (a C8 with a wedge and Alt-Az mount x 38 years). So the whole new setup is quite an improvement. The RASA and CGX seem to work OK. But I'm still struggling with the whole complexity of the system, the pain of guiding and tracking error, and the jargon and rigor of PI. Wish me luck.
-
...I am looking forward to seeing your results after you drop down the small-scale, low-SNR noise (most of which initially seems to be 'Luminance' noise rather than 'Chrominance' noise - but that can be difficult to analyse and identify from a compressed JPEG-type image aimed for display on the Web). Just don't try 'too hard' - less can often be more when it comes to noise removal.
Have fun :)
Pardon my ignorance, Niall, but I don't have the slightest idea what you mean when you say "...after you drop down the small-scale, low-SNR noise...". Can you please explain? Thanks!!
-
Don't worry about it--I know exactly what you mean. It took me a LONG TIME to get to the portion of the learning curve that was flat enough for me to not walk on all fours! Between the mount and scope, and acquisition software, and guiding, and polar alignment, not to mention the weather, there is a lot besides processing that one must master. I have yet to learn autofocus, or some of the more sophisticated tools in MaxIm DL (I have SGP too--but I'd have to learn something new!). Learning new things at this point takes time away from imaging, which I just can't seem to accept.
Keep up the good work and don't hesitate to ask questions. Ask a lot of questions! As you progress, you will find that the devil is in the details. Often a so-so image will transform into a fabulous image with the slightest adjustment of a few settings.
Rodd
-
I don't have the slightest idea what you mean when you say "...after you drop down the small-scale, low-SNR noise...".
The 'classic' approach to astro-image processing is based on looking at the image as 'the sum of its parts' and, at the very simplest of levels, every single pixel on the image can be considered as not only having an x and y coordinate, and some form of intensity coordinate, but also to be part of 'another x-y world' where one axis is often used to denote 'scale' and the other is used to denote 'brightness'.
So, at diagonally opposite corners of this new x-y area we have very bright and very large objects - such as the cores of galaxies, etc. And in the other corner we might find the 'background' of the image - containing no stars, just other very dim 'noise' with little or no sense of 'object'.
Taking the analogy further, and moving along the 'brightness' or 'intensity' axis, we see the 'scale' of objects increase. This might be where we would expect to find dim gaseous nebulosity, before we finally 'turn the corner' and see the brightness of this kind of nebulosity increase as we head back to that first corner that I described.
Of course, if you 'turn the corner' again, you remain with high-intensity image data, dropping from very large objects down to very small objects - and these, of course, would be the 'stars' in an image.
And, making the last turn, you head back down the 'small scale' axis, reducing intensity as you go, until you finally get back down into the world of nondescript 'background noise'.
Hopefully, you might now be able to visualise where your image might be capable of some improvement - it is in this area of small-scale (i.e. no discernible 'structures'), low-level (i.e. 'faint') noise.
And the trick here is to either lower the intensity level even further, or to change the structure size so that you can't even really make it out in the first place. And, doing all of this without affecting the remainder of your image (remember, astro-imaging is just like juggling with running chainsaws, it becomes easier with practice - and plenty of bandages!!).
The noise that I was referring to is that 'speckle' that you see when you zoom right in on the fainter areas of the image - think about trying to knock any 'colour' out of this - or aim to leave the colour just on the blue side of a neutral grey (this is down to personal preference, of course - but at least try and mute the reds and greens, especially the greens - hence the SCNR Process). Also, imagine if you could 'blur' the whole of this noise section - smoothing it out, if you want to think of it that way. Your image might then look more 'glossy' - but don't overdo things; there is nothing quite as unpleasant as an 'artificially washed out' image, so try to keep things in proportion to how the foreground of your image appears.
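For the programmatically minded, the 'separate by scale' idea above can be tried in miniature: smooth the data, and the residual left over is the small-scale layer where that faint speckle lives. A toy 1-D Python sketch, with a simple 3-sample moving average standing in for the real wavelet filters that tools like MLT use:

```python
def small_scale_layer(signal):
    """Split a 1-D signal into a smoothed large-scale part and a
    small-scale residual -- the same idea, in miniature, as one level
    of a multiscale (wavelet) decomposition."""
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - 1), min(len(signal), i + 2)
        window = signal[lo:hi]
        smoothed.append(sum(window) / len(window))
    residual = [s - m for s, m in zip(signal, smoothed)]
    return smoothed, residual

# A flat 0.1 background with one noisy 0.4 spike: the spike lands
# almost entirely in the small-scale residual.  Attenuating that
# residual before recombining is, in spirit, what small-scale noise
# reduction does.
smooth, resid = small_scale_layer([0.1, 0.1, 0.4, 0.1, 0.1])
```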
I hope my explanation helps, and gives you some ideas to experiment with.
-
My imaging system has been down (CGX mount failed) for the past 3 weeks, so I have been spending some time with my first images and with PI to see if I can improve a bit. This is my latest rendition of NGC 6960, this time in CFHT (HOS = RGB) palette. I have used Warren Keller's recommended workflow for NB images and I hope I have furthered my abilities with PI. I believe this photo has more "pop" with greater color contrasts and better noise reduction. I think I like the colors better. Critique welcome.
-
The nebula pops better in the new version but the stars are also more prominent. Perhaps a star mask and MorphologicalTransformation/Erosion to dial them down a little? I'm not a huge fan of either colour scheme, especially the green, but that's just personal taste.
Cheers,
Rick.
-
The nebula pops better in the new version but the stars are also more prominent. Perhaps a star mask and MorphologicalTransformation/Erosion to dial them down a little? I'm not a huge fan of either colour scheme, especially the green, but that's just personal taste.
Cheers,
Rick.
Do you just mean the stars are brighter or do you mean they're somehow bloated or larger? They seem to be about the same in size when I compare the images.
You especially don't like the lime-green (CFHT palette [HOS]) or the blue-green (Hubble palette [SHO])?
I'll try the MT/E and see if it improves anything.
Thanks!
-
The stars are not bigger--but they are a lot brighter. NB is false color anyway, so the palette is personal choice. Personally I do not like green in my images (except for the rare PN that has greenish hues). In the Hubble palette I always knock the green down with SCNR (average neutral). But that's personal--the image looks pretty good in other respects.
Rodd
-
Thanks. I understand the personal choice element regarding color selection and assignment. But I'm a bit interested in the idea of perhaps some "natural" correlation of color assignment. It is said that the CFHT palette may have such a natural correlation, except for the assignment of SII to blue when SII radiation is really more red than Ha in the color spectrum. So I have presumed that others may think a bit the same way to some degree, i.e., perhaps preferring to arrive at some color scheme that has some relationship to reality and is not simply an aesthetic choice.
When I first tried the palette in this image, it almost seemed like a Quentin Tarantino comic book selection in terms of the bright colors. It seemed difficult to think of this color scheme as having some natural correlation. But then I started to notice that nebular structure (e.g., Ha (red) vs OIII (green) and the colors in between) seems better defined, contrasted, and more visible than in the SHO palette. I like the better definition and contrast. I'm still not sure I'm sold on the bright colors, but I like the better contrast and definition.
I did some experimenting with morphological transformation to reduce "prominence" of stars. There is a difference, but the difference seems only slight to minimal in magnitude. I had to go to some extreme to get this result, using "Morphological Selection" and 10 iterations with an "amount" of 1.0 and a "selection" factor of 0.1. Just to see if the tool was doing anything at all I tried a selection factor of 1.0 and got great big bloated stars.
-
Somewhere on here there was a pixel math formula that is supposed to mimic the visual palette quite closely. I can never remember it though--it is something like
Red: 85% SII and 15% Ha
Green: 85% OIII and 15% Ha
Blue: OIII
Something like that--there are only 3 filters and 3 colors and I still can never remember the percentages! The above is wrong--but it is similar. The use of Ha is restricted. But then if you use Ha for a luminance you get the structure. Anyway--in the tutorials and in images I have seen, it does look quite natural.
Rodd
-
BTW, perhaps somewhat off-topic, is there some way to have color selections other than just RGB? I'm not smart enough to really understand color and color possibilities, but I often find myself wishing I had more selection choice than just RGB. Maybe it's a programming/digital nightmare.
-
There are--but you need additional filters I think.
Rodd
-
Somewhere on here there was a pixel math formula that is supposed to mimic the visual palette quite closely. I can never remember it though--it is something like
Red: 85% SII and 15% Ha
Green: 85% OIII and 15% Ha
Blue: OIII
Something like that--there are only 3 filters and 3 colors and I still can never remember the percentages! The above is wrong--but it is similar. The use of Ha is restricted. But then if you use Ha for a luminance you get the structure. Anyway--in the tutorials and in images I have seen, it does look quite natural.
Rodd
That's said to be a favored palette of Juan Conejero's. But I hadn't heard that it is felt to be more natural. I've tried it on these images and have just not chosen to go very far with it. Maybe I'll go back and give it another try.
BTW, it's actually:
SII 50% + Ha 50% = Red
OIII 85% + Ha 15% = Green
OIII 100% = Blue
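For anyone who wants to try that, it is a straightforward PixelMath-style per-pixel blend. Here is a pure-Python sketch of the same arithmetic (the function name and the assumption of [0, 1] normalized inputs are mine, not from the thread):

```python
def shhoo_to_rgb(s2, ha, o3):
    """Blend narrowband intensities into RGB using the percentages
    quoted above (the so-called (SH)(HO)O palette)."""
    r = 0.5 * s2 + 0.5 * ha     # SII 50% + Ha 50%
    g = 0.85 * o3 + 0.15 * ha   # OIII 85% + Ha 15%
    b = o3                      # OIII 100%
    return r, g, b

# A pixel strong in Ha but weak in SII and OIII comes out reddish,
# as you'd expect for Ha-dominated nebulosity:
r, g, b = shhoo_to_rgb(0.2, 0.8, 0.1)
```

In PixInsight the same three expressions would go into the R, G and B slots of a single PixelMath instance applied to the combined image.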
-
It's target specific I think--different targets respond differently to various approaches. The Veil is most commonly depicted in bi-color (no SII), making Juan's palette impossible. I do like the bi-color Veil a lot.
Rodd
-
BTW, perhaps somewhat off-topic, is there some way to have color selections other than just RGB?
It is probably best to 'think backwards', starting from the point of view ( :P ) of the human eye. This is basically sensitive to three 'colours' - the Rd, Gn and Bu that are so familiar to us. So, in order to 'stimulate' the receptors in our eyes, we use the likes of a PC monitor to 'emit' those wavelengths of light, in varying intensities to each other, at different locations on the image. So, our monitors (nowadays) have three LEDs (one each of R, G and B) at every picture element (or 'pixel'). In days of old, we didn't have these LEDs, so we used 'plasma' and even 'electro-luminescent phosphors' to achieve the same result. And, in fact, a printed picture behaves in very much the same way.
So, we need to have a means of recording or storing all of this colour data - and, in PixInsight (like other software) we do this by using three 'arrays of numbers'. These arrays don't store colour information at all (!!), rather, they store 'intensity' information, for each pixel in the X-Y array of pixels that represent our desired image. But - very importantly - the three individual arrays (or 'colour planes' as they are often known) are each assigned to one of the three primary colours: Rd, Gn and Bu.
So - in our colour images (FITS, XISF, TIFF, PNG, JPG, BMP, GIF, etc.) we can ONLY define intensities for these three primary colours (R, G and B) - nothing else. We cannot directly define Luminance information, nor can we define Narrow-Band data. Remember that critical point - we can ONLY define intensities of Red, Green and Blue.
But, that is not the end of the story - far from it, in fact!
It is entirely up to 'us' to decide what intensity level we want to store at any given pixel, and in any given primary colour channel. In the simplest of cases we often strive for a 'perfect RGB colour match' in our channels, such that the displayed image is a 'true representation' of the colours we would perceive if we could look at the scene 'live' (and, commonly, the scene as it would appear if it was illuminated by 'white light', which is what we define our local star - Sol, the Sun - to emit).
But, this guy :police: will not come and kick down your door if you choose to, for example, swap the Gn and Bu channels for some personal 'artistic effect'. You can do what you want, you can choose to emphasize one colour, or range of colours, over others - you can even choose to 'de-saturate' your image completely, removing all colour, leaving you with a simple monochrome, or Luminance, image.
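That 'three arrays of intensities' model - and the freedom to reassign them - can be sketched in a few lines of Python (a toy 2x2 image, purely illustrative):

```python
# A colour image is just three X-Y arrays of intensities, one array
# assigned to each primary.  The arrays themselves know nothing
# about colour.
red   = [[0.9, 0.1], [0.1, 0.1]]
green = [[0.1, 0.9], [0.1, 0.1]]
blue  = [[0.1, 0.1], [0.9, 0.1]]
image = {"R": red, "G": green, "B": blue}

# Swapping the Green and Blue planes changes only which intensities
# the monitor's green and blue emitters receive -- the stored
# numbers themselves are untouched:
image["G"], image["B"] = image["B"], image["G"]
```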
Which finally brings us to the issue of NB imaging! It would be great if we had, say, another three channels of intensities in our (e.g.) FITS image - we could just dump in the H, S and O intensities and off we would go - BUT (and it is a massive BUT) whilst we can add as many channels as we want into a FITS file, we have no means of representing that intensity information on our three-channel monitors - which isn't really a problem given that our three-channel eyes wouldn't be able to decode that information anyway :-\
Instead, what every image processor ('us', including those who might work in a chemical-filled darkroom, or on other software packages aimed more at 'brightly-illuminated' image processing) must do is to take all of their source information (WB and NB, perhaps) and then 'mix' this into the three available channels.
And, once again, 'how' this is achieved is not governed by any rules. Even 'guidelines' can be too strict a term. The 'mix' or 'blend' that a user finally chooses will always be based on what 'they' feel they want to achieve. And these desires can be defined by 'science' as well as 'art' - where certain areas of an image might be enhanced by using certain blending techniques (a 'scientific' approach), or where an overall image is bestowed with some 'aesthetic appeal' (an 'artistic' approach) to make the image 'look nice'.
So, until such time as Juan releases PixEyeball v1.0.1 (that uses bionic optical implants to link directly to PixInsight), we have to make the best of those three channels, and figure out our own methods for blending the data together.
-
I did some experimenting with morphological transformation to reduce "prominence" of stars. There is a difference, but the difference seems only slight to minimal in magnitude. I had to go to some extreme to get this result, using "Morphological Selection" and 10 iterations with an "amount" of 1.0 and a "selection" factor of 0.1. Just to see if the tool was doing anything at all I tried a selection factor of 1.0 and got great big bloated stars.
I suggested Erosion but Selection should work if you pick a Selection value < 0.5. One or two iterations is usually enough to make a noticeable difference. Check your mask looks OK (I can't see how you could get huge bloated stars if the mask is protecting everything except the stars - maybe it is inverted?) Also check that the Structuring Element looks reasonable (e.g. try 5x5 circular.)
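The Selection parameter's behaviour is easy to see in miniature: a selection filter sorts each pixel's neighbourhood and picks an order statistic, so values below 0.5 lean towards the minimum (erosion, dimming stars) and values near 1.0 towards the maximum (dilation - the bloated stars described above). A simplified 1-D Python sketch of that idea (not PixInsight's actual implementation):

```python
def selection_filter(pixels, selection):
    """3-sample 'morphological selection': sort each neighbourhood
    and pick the order statistic chosen by 'selection' in [0, 1]:
    0 -> minimum (erosion), 0.5 -> median, 1 -> maximum (dilation)."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - 1), min(len(pixels), i + 2)
        window = sorted(pixels[lo:hi])
        out.append(window[round(selection * (len(window) - 1))])
    return out

star = [0.1, 0.9, 0.1]              # a bright 'star' on a dim sky
print(selection_filter(star, 0.1))  # erosion-like: [0.1, 0.1, 0.1]
print(selection_filter(star, 1.0))  # dilation: [0.9, 0.9, 0.9]
```

With an inverted (or missing) mask, the filter acts on everything except the stars, which is one way the "huge bloated stars" result can happen.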
BTW, perhaps somewhat off-topic, is there some way to have color selections other than just RGB? I'm not smart enough to really understand color and color possibilities, but I often find myself wishing I had more selection choice than just RGB. Maybe it's a programming/digital nightmare.
RGB makes a lot of sense, especially if you're imaging with RGB filters. There are alternatives, like the subtractive primaries cyan/magenta/yellow, CIELAB a* and b*, etc. but I don't think you'd find any of them easier to work with ;)
I'm working on an image of the Eastern Veil at present. I started with a HOO bi-colour and then used Sii as a mask to tweak and add some variation to the colours. I'm fairly happy with the colour scheme both looking good and also representing the data in a meaningful way. I'd be happy to share a preview if you're interested.
Cheers,
Rick.
-
I'm working on an image of the Eastern Veil at present. I started with a HOO bi-colour and then used Sii as a mask to tweak and add some variation to the colours. I'm fairly happy with the colour scheme both looking good and also representing the data in a meaningful way. I'd be happy to share a preview if you're interested.
I'm working on the Eastern Veil also. To coin a phrase: "I'll show you mine if you'll show me yours." Actually, I'll show you mine whether or not you show me yours. Except, mine's not a "preview."
I've taken it about as far as I know how at this point. I'm guessing the issues in this image are very similar to what I have exhibited in the Western Veil.
I'm not sure what you mean that you "used Sii as a mask to tweak and add some variation to the colours."
-
I have rendered this image in HOO (no SII filter). I actually believe this may be my best post-processing to date. I did some experimentation and eliminated Deconvolution and Multiscale Linear Transformation from linear processing. I also eliminated MLT and MT from post processing. In all cases with those processes in place I saw image degradation (fuzziness with MT and MLT and star halos [not exactly ringing] with Deconvolution).
In the case of the halos, they were not apparent after Deconvolution alone but only after MLT following Deconvolution. They were not black halos but rather more like a mixed color to gray. When present, they would cause stars close to each other to have a combined halo, and with several stars in close proximity the image would appear to have holes in the background surrounded by groups of stars with combined halos, the "holes" having been denoised with TGVDenoise. The holes were subtle when the photo was not zoomed in upon, but became more and more apparent at higher levels of zoom.
Perhaps most of you have seen these phenomena before and can clue me in.
BTW, I'm not too sure I like the HOO palette as well as some of the other possibilities. Not bad, but not my favorite.
-
Hi Rod,
Attached is a crop showing the first approximation of the colouring for my Eastern Veil. You'll probably notice the general lack of stars :) I remove the stars when I'm working on NB colour - this idea is borrowed from JP Metsavainio's tone mapping process. I also create a synthetic luminance, which does include stars, which I combine with the starless colour later.
The colouring here is based on a HOO combine, Hue curve to push cyan towards blue, Sii mask with curves to boost green and reduce blue. The Sii mask was created by stretching the Sii and then clipping the blacks (so the mask doesn't include the background.)
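In rough per-pixel terms, that mask recipe (stretch the SII, then clip the blacks) might look like the following - a hedged sketch in which the simple power-law stretch and the black/gamma values are stand-ins, not the actual HistogramTransformation settings used:

```python
def sii_mask(value, black=0.2, gamma=0.5):
    """Turn a linear SII pixel into a mask value: a power-law
    stretch brightens the faint signal, then everything below the
    black point is clipped to zero and the rest rescaled to [0, 1]."""
    stretched = value ** gamma
    if stretched <= black:
        return 0.0            # background drops out of the mask
    return (stretched - black) / (1 - black)

# A faint background pixel stays out of the mask entirely:
print(sii_mask(0.01))   # -> 0.0
```

Applying curves through such a mask then boosts green and cuts blue only where there is real SII signal, which is the "variation in the colours" effect described.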
Your HOO combine looks good and represents the actual colours of the Veil fairly realistically. I tend not to go for realistic so much with narrowband as you might have noticed.
Did you use a deringing mask with decon? I normally apply decon only to the luminance. I generally find that processing lum and colour separately keeps the amount of weird & unexpected stuff that happens to a minimum :D
Cheers,
Rick.
-
Great Image. I guess "gurus" can do those things. I do appreciate the tips, though. Gives me something to work on. I haven't been doing anything with curves yet. Coming up.
To answer your question: For the HOO image of the Eastern Veil, I didn't use Deconvolution or MLT during linear processing. And I didn't do MT or MLT in non-linear processing.
My workflow was as follows:
1. Dynamic crop
2. ABE of each channel
3. SHO-AIP multi-channel synthesis (HOO)
4. Fast rotation (horizontal flip)
5. Luminance channel extraction
6. HT of pseudo-luminance clone
7. Generate range mask from pseudo-luminance
8. Generate PSF image for Deconvolution
9. HT of combined chrominance image
10. TGVDenoise
I experimented with Deconvolution and MLT during linear processing and I experimented with MT and MLT during non-linear processing but I wasn't satisfied with the results, so, in the end, I left them out.
Attached is an image processed in (SH)(HO)O, said to be a preferred palette of Juan Conejero.
-
Rick,
I apologize for copying your image, but I cropped one of mine (in (HO(H))) similarly. It looks like an owl, in case you hadn't noticed. Head up, wings outstretched, feet trailing. You can see the "ear" feathers and at least one eye.
I really like your image. My colors aren't very owl-like, but I still like it.
-
Great Image. I guess "gurus" can do those things. I do appreciate the tips, though. Gives me something to work on. I haven't been doing anything with curves yet. Coming up.
To answer your question: For the HOO image of the Eastern Veil, I didn't use Deconvolution or MLT during linear processing. And I didn't do MT or MLT in non-linear processing.
I experimented with Deconvolution and MLT during linear processing and I experimented with MT and MLT during non-linear processing but I wasn't satisfied with the results, so, in the end, I left them out.
Attached is an image processed in (SH)(HO)O, said to be a preferred palette of Juan Conejero.
Thanks, Rod. CurvesTransformation is pretty useful so have a play with it when you get a chance. The R, G & B curves are pretty obvious but also take a look at the Saturation and Hue curves. CIELAB L*, a* and b* are useful but perhaps that's a more advanced topic for later.
You mentioned some artefacts in stars after decon. That's probably fixable with a deringing mask. Both MLT and Decon can work well but require careful, measured application. Too much of either can get ugly.
The (SH)(HO)O palette image is nice but I don't know that a simple percentage combine will work equally well for all objects. The variability in NB targets seems very large. For me, at least, every one seems to need different treatment.
I apologize for copying your image, but I cropped one of mine (in (HO(H))) similarly. It looks like an owl, in case you hadn't noticed. Head up, wings outstretched, feet trailing. You can see the "ear" feathers and at least one eye.
I really like your image. My colors aren't very owl-like, but I still like it.
No problem at all, Rod! This area is sometimes called the Bat or the Fangs. It's certainly a very cool region of the Veil.
-
I must say that the filamentary structure of your owl (bat) is much better defined and sharp. Is yours a cropped image or a whole frame? I'm thinking it must be a cropped image, which makes its sharpness all the more remarkable. I'll have to improve my focus!!
I feel pretty happy though. This is my first attempt at AP and PI processing (2nd image, 1st was of the Western Veil). While I'm waiting to get things up and operational again, I'm going to go back and work on the Western Veil image again.
-
Yes, it's a crop, Rod. The complete FOV is roughly the same width as yours, but square.
I think you're doing very well for a new imager!
Cheers,
Rick.
-
I'm pretty sure I have done the rest of it but can only find the mosaic in Ha at the moment
https://www.flickr.com/photos/chrisjbaileyuk/11887299705/in/album-72157639685882563/ (https://www.flickr.com/photos/chrisjbaileyuk/11887299705/in/album-72157639685882563/)
Chris
ps found it
https://www.flickr.com/photos/chrisjbaileyuk/21487469275/in/datetaken/ (https://www.flickr.com/photos/chrisjbaileyuk/21487469275/in/datetaken/)