Posted: July 12th, 2012
I'll be giving a PixInsight workshop this coming July 29th in San Jose (California).
For those in the area or nearby who may be interested, here are the details:

PIXINSIGHT WORKSHOP
SPEAKER: Rogelio Bernal Andreo
VENUE: Houge Park, San Jose (http://maps.google.com/maps?q=37.256997,-121.941941&num=1&t=h&z=20)
DATE: July 29th, 2012
TIME: 10am to 1pm
REGISTRATION: $50 (drinks and snacks will be provided)
HOW TO PAY: PayPal recommended. Send payment to rba (..a..) elistas.com. Please write your name in the PayPal message. If you can't use PayPal, send me an email at the same email address.
IMPORTANT: Unfortunately, registration fees can NOT be collected at the venue. Payment needs to be made in advance. This is a requirement from the City of San Jose. So please, no drop-bys unless you register in advance.
WHAT YOU NEED TO BRING:
It's recommended to bring your own laptop with PixInsight installed (there are power outlets, but it's better if you bring it with the battery fully charged). This is not a requirement, but I'd expect some of you will want to follow along while we all work on the same set of images. If time allows, we can do a quick processing of some of your data, but I expect time will be tight. I'll be available after the workshop for more informal Q&A's.
WHAT WILL BE COVERED:
There'll be a bit for everyone, and I'll make sure nobody feels like they're wasting their time - that includes those at an advanced level while I explain the basics, and those just starting out while we cover the advanced topics. This is just a tentative summary, at this time, of the topics that will be covered:

For those who are starting:
- How to take advantage of PixInsight's unique interface
- Image calibration, registration and integration (stacking)
- LRGB combination and debayering
- In-depth explanation of the must-know processes (histogram, curves, saturation, DBE, color calibration, etc.)

For those who are at an intermediate level:
- Star masks: how to build successful masks and when (this is in fact a huge topic)
- Pulling faint data from the noise floor
- Advanced noise reduction techniques

For those who are at an advanced level:
- Tricks using morphological transformations
- The high dynamic range problem
- The perfect gradient removal
- Multiscale techniques: when, how and why
- Building seamless mosaics
There'll be time for Q&A as well, so come prepared with your questions!!
Any questions, let me know.
Posted: June 5th, 2012
The other night, my 9-year-old girl - who knows more about Photoshop than she should ;-) and has actually spent dozens of nights with me during my photo outings, even at parties like GSSP (twice) or Calstar (three times) - took the Nikon S6 we have for family photos and such, pointed it at the Moon, zoomed in, captured it, and after some Photoshop, this is what she delivered... Well, I later did an 800x600 crop at 100%, no further touchups.
I do guarantee that the photo improved a bit from the original... That nice color balance, the sharpening free of Gibbs artifacts... Pretty cool!
Okay, ignore me if you'd rather, but... how could I not post this photo here, for goodness' sake!! Sure, she could have done this a year or two earlier had she thought of it, but what excites me is that this was her first, and that she went and got it all by herself. Maybe some other day I'll show her how to put the camera - preferably the Canon 40D, not the Coolpix S6 - on a tripod and get an even cooler Moon or whatever :-)
Posted: April 25th, 2012
For AIC 2011, a group of dedicated imagers (Bob Caton, Eric Zbinden, Al Howard and myself) worked on a fine display of The Clouds of Perseus. What most people don't know is that, since producing that image was becoming increasingly difficult and we were almost out of time, we actually had a backup in case we couldn't complete the image in time for the AIC. The backup was also used as a test, to make sure things would look okay on the huge 14-foot display.
We did finally finish on time, not without considerable imaging and post-processing effort, and we were all happy. But I figured that by now it would be okay to unveil what that backup project was. Well, see it for yourself!
After AIC, Bob Caton - who is the one who paid for that print - was kind enough to give it to me as a gift. Still, so far I haven't found a wall in my house that would fit such a gigantic image. Oh, but I will eventually, even if it ends up on the ceiling! :-)
Posted: April 6th, 2012
My favorite location in southeast Spain, Pinar de Araceli, being a private business (rental of cabins high in the mountains), depended on its economic success to survive. And sadly, it didn't survive. The place had been closed "for good" by the time I arrived in Spain earlier this past January. Suddenly I had no clue where to go to do my astro thing...
Some research via satellite imagery, light pollution maps, etc. led me to pinpoint a site, away from a barely traveled road, behind an old house in ruins, in the "Bortle gray" area, at about 5,000 feet, near Revolcadores (loose translation: rollers), the highest point in the Murcia region at over 6,600 feet.
Best of all, it's just a 65-70 minute drive from home!! Mostly freeway, too, with the road in excellent condition (I can't emphasize that enough; the freeways here in the Bay Area are an "undeveloped" nightmare in comparison!), sensible speed limits, and barely any traffic (on the way back, usually around 4-5am, for 90% of the drive you'd be lucky to count more than 10 vehicles going the opposite direction). Quite a pleasure to drive, the total opposite of driving here in the Bay Area! And since driving to a dark site is a fundamental part of the "dark skies experience" (you do spend many hours driving), this alone was really enjoyable.
After a first visit exploring the area, I "signed it up" as my new observing/imaging spot. Good horizons, plenty of space, very secluded area... And of course, very nice dark skies!
I liked it on my very first visit, but 15 visits later I can confirm it's a really good spot with very nice dark skies indeed, after getting many 21.7+ readings and reaching 21.6+ virtually every night without a problem. It's not the Nevada desert, but when you're under a 21.7 sky (or even 21.6), you don't _need_ anything darker. Yes, even darker skies would be nice if you have them, but they're not really needed in order to have a ball or get excellent data. I couldn't ask for more at just a 65-minute drive.
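For readers who haven't played with a Sky Quality Meter: those 21.x numbers are magnitudes per square arcsecond, a logarithmic scale, so the gap between a 21.6 and a 21.7 sky is smaller than it may look. A quick sketch (the 20.0 "suburban-ish" comparison value is just my own illustration):

```python
def sky_ratio(sqm_brighter, sqm_darker):
    """Linear sky-glow ratio between two SQM readings (mag/arcsec^2).
    Each magnitude is a factor of 10**0.4 ~ 2.512 in brightness."""
    return 10 ** (0.4 * (sqm_darker - sqm_brighter))

r_site = sky_ratio(21.6, 21.7)   # a 21.6 sky is only ~10% brighter than a 21.7 one
r_burb = sky_ratio(20.0, 21.7)   # a 20.0 sky is roughly 4.8x brighter
```

That ~10% difference in sky glow is why chasing anything past 21.6-21.7 buys you very little in practice.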
Here's a circumpolar showing "the old house":
On 2 or 3 nights the wind was a problem, but that's like everywhere else. Another unexpected problem a couple of nights was snow that stopped me from reaching the observing spot. After driving a road in this condition for a bit:
and even (dangerously, aka stupidly) making it through some spots like this one (I was driving a low sedan with rather worn tires, not a nice 4x4 or SUV):
I found that, as expected, the 100 feet of gravel road to the spot was under more than a foot of snow - in other words, undrivable with a "regular" sedan. Still, I managed to find other spots, at least for those two nights. One of those nights I forgot my snow pants, and with temps below 15F (-10C), I was literally freezing!! Fortunately the sky was excellent, so I put up with the cold and got the darn data ;-)
The other night, well, let's just say that... yes, I got skunked!! :-) Again away from the regular spot (this was the other night when snow was a problem), and just by the road. I NEVER set up next to a road no matter how remote the location, but that night I just knew nobody was going to drive by - not only because it's an almost never traveled road, but mainly because, given the condition of the road, it was nearly impossible anyone would make it through the mountain pass. Em... Like I said, I got skunked and set up all my gear for nothing:
Other than that, all the other nights were truly enjoyable (most of them without any snow, by the way). Some of those nights I even "invited" a few local astro friends who also seemed to enjoy the newly found location very much - so much so that they all came back :-)
Speaking of friends and company, a big thank you goes to all of them for the excellent company, and particular kudos to Onofre for bringing the corner store with him (snacks, hot coffee, good Spanish wine, even some high-grade spirits and whatnot). Onofre is probably the only person I know who would set up a tent for a one-night, pack-before-sunrise session:
During all those 15 nights I pretty much only worked on my 54-frame macromosaic of Leo:
I really didn't do anything else (could I possibly have had time??? we're talking about 54 subframes here!), although one of the nights, during a "break", I did take some quick exposures of the most fascinating comets of the season:
Now I'm spending a couple of weeks back in the Bay Area, but personal matters demand that I go back to Spain shortly for at least a couple more months, so as long as the weather keeps cooperating, I'm definitely looking forward to visiting this spot - now commonly referred to as "Revolcadores" by the local amateurs - very, very soon!
Posted: October 29th, 2011
Clouds of Perseus is a collaborative project between Bob Caton, Eric Zbinden, Al Howard and myself. This article describes how this project came to be, from beginning to end.
Part I mainly talks about the planning and data acquisition.
Part II will talk about the post-processing of the data into the final image.
Part III will talk about the making and testing of the light box.
Early in the summer of 2011, Bob Caton approached Eric and me with an idea he had: build a huge lightbox displaying an astrophoto at the following AIC (Advanced Imaging Conference). Of course, the idea was to capture the image ourselves.
Eric and I agreed to work on capturing the data (Al would join later). We also had to decide what to photograph. Originally Bob had the idea of doing a large mosaic in the Cygnus area. I hesitated, arguing that the Cygnus area had been photographed plenty of times, and although it would certainly look amazing in the lightbox, we should try to go for something less "ordinary".
I suggested something in the region of Perseus and Taurus that would include the California nebula, or the Pleiades, or both. This suggestion came from knowing well that this area is crowded with interstellar dust. Eric immediately agreed, and so did Bob.
Timing is everything
We had one problem, and not a small one. In order to have the light box ready for the AIC, we only had until September's New Moon period to finish capturing the data, as the printing of the negative had to be done by mid-October.
A quick look at any planetarium software shows that this area of the sky is impossible to photograph early in the summer, and that if we were to use the months of August and September, we would have to wait until at least 1-2am in August to start imaging, and not much earlier than that in September. This limited our imaging time considerably, to the point that the idea was almost rejected on the grounds that we might just not have enough imaging time.
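For the curious, here's a back-of-the-envelope way to see that timing constraint, without planetarium software. This is a rough sketch, good to maybe an hour either way; the target RA and the rules of thumb below are my own approximations, not anything we actually computed this way:

```python
from datetime import date

# Rules of thumb: local sidereal time at solar midnight is roughly the
# Sun's RA + 12h, and an object culminates when LST equals its RA.

def sun_ra_hours(d):
    """Very rough solar RA: 0h at the March equinox (~day 80 of the year),
    advancing ~24h per year. Ignores the equation of time, longitude, DST."""
    doy = d.timetuple().tm_yday
    return ((doy - 80) % 365) / 365.0 * 24.0

def transit_after_midnight(target_ra_h, d):
    """Hours after local solar midnight at which the target culminates."""
    lst_midnight = (sun_ra_hours(d) + 12.0) % 24.0
    return (target_ra_h - lst_midnight) % 24.0

# The California nebula region sits near RA ~ 4h.
t = transit_after_midnight(4.0, date(2011, 8, 27))
# ~5.5h, i.e. culmination around 5:30am in late August - so useful imaging
# only starts a few hours before that, around 1-2am.
```

By late September the same calculation lands the transit a couple of hours earlier, which matches the "not much earlier than that" above.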
It is worth mentioning at this point that neither Al, Eric nor I have permanent or remote observatories, so all the data would have to be captured "in the field" - that is, driving to dark sites every night we needed data. Also, because we wanted to go deep, moderately dark skies were not sufficient - more on that later.
Because the light box was going to have very specific dimensions, we had to define our mosaic according to its proportions. Although the idea of an image including both the California nebula and M45 was very appealing (I would know, as I captured this area back in 2009), this constraint forced us to choose one or the other, and then define a field of view that would "make sense". We settled on the California nebula, and extended the FOV towards IC 348 and NGC 1333. It took a few tries... We first agreed we didn't want NGC 1499 too far to the left of the image; with that in mind, Bob presented a very well balanced framing, I made a small adjustment, and we agreed to go with it. Bob punched in the coordinates so we would all get the very same numbers, and we were finally in agreement. Although Eric and Al were going to use a FLI PL 16803 camera, we agreed to define the mosaic using the FOV of the STL 11000 - basically, the smaller of the two FOVs that would be used.
We agreed that Bob would capture H-alpha data from his observatory in Modesto for the area of the California nebula, and then Eric, Al and I would go for the LRGB data. I pointed out that the best approach would be to assign each of us certain panes and have each of us do the complete LRGB for those panes - as opposed to, say, having Eric capture the red data, Al the green, and me the blue.
As the New Moon period for August approached, and since nobody else was "making the decision", I quickly assigned panes this way:
Bob: Panes 2 and 7, H-Alpha only
Eric: Panes 5, 9, 10 (LRGB)
Al: Panes 3, 4, 8 (LRGB)
Rogelio: Panes 1, 2, 7 (LRGB)
This selection wasn't completely random. Eric's and Al's areas were, for the most part, adjacent, which made sense since they were going to use the same type of camera. And the area of the California nebula was assigned to the STL 11000 (me), since it is a high-signal area compared to the dustier regions. Yes, pane 1 (also assigned to me) only contains extremely faint dust, but in order to meet the other conditions, it was either pane 1 or 6 for me.
You may have noticed that pane 6 is missing from the list above. With three panes per person (fair and square), I suggested that whoever finished his three panes first would go for pane 6. Ultimately, it was Eric who captured it.
We also agreed on capturing a minimum of 4 hours of luminance per pane, as well as 2 hours per color channel - all bin 1x1, of course, in 15-minute subs.
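Just to put those minimums in perspective, here's the quick tally (the 10-pane count comes from the assignment above; these numbers are the agreed floor, not what was actually captured):

```python
# Agreed minimums per LRGB pane: 4h luminance + 2h each of R, G, B,
# captured in 15-minute subexposures.
sub_min = 15
lum_h, chan_h, channels = 4, 2, 3

per_pane_h = lum_h + chan_h * channels        # 10 hours per pane
subs_per_pane = per_pane_h * 60 // sub_min    # 40 fifteen-minute subs per pane
lrgb_panes = 10
total_lrgb_h = per_pane_h * lrgb_panes        # 100 hours minimum, LRGB alone
```

Add Bob's H-alpha on top of that 100-hour floor and you can see why the schedule felt so tight.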
When doing the numbers, counting the nights, considering the target wasn't well positioned in the sky until late at night, etc. we knew that the weather had to cooperate 100% every single "New Moon night" we could go out to image, or we would just not make it. Stressful? Just a bit :-)
Capturing the data
Al, Eric and I started collecting data on August 27th, 2011, at the DARC Observatory, while Bob also started capturing H-Alpha data at his home-based observatory in Modesto.
Eric and Al used very similar setups, with AP900 mounts, Takahashi FSQ106 scopes and the FLI Proline 16803 camera. I used a Takahashi EM400, another Takahashi FSQ106 scope, and the SBIG STL11000 camera. Bob also used an SBIG STL11000 camera and yet another Takahashi FSQ106 telescope, all on a Paramount ME mount. The fact that we were all using the same type of scope and that all the cameras had the same pixel size made everything a bit easier, as no resampling would be needed during mosaic assembly or post-processing.
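Why did matching gear matter so much? Image scale depends only on pixel size and focal length, and - going by the published specs as I recall them (9 um pixels on both the KAI-11002 and KAF-16803 chips, ~530 mm focal length on the FSQ106) - every rig in the project worked at the same scale:

```python
def plate_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Assumed published specs: 9 um pixels, ~530 mm focal length.
scale = plate_scale(9.0, 530.0)   # ~3.5 arcsec/px for every rig in the project
```

Identical scales mean the panes register against each other pixel-for-pixel, with no resampling step to soften the data.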
A few words about the DARC Observatory. It is actually Bob's property, sitting in a blue zone (Bortle scale), about 120 miles away from where Eric and I live, and about 110 miles from Al's home.
This means that every time we went to DARC to capture data, we had to drive 120 miles to the site, usually fighting the evening rush hour, and then 120 miles back in the wee hours of the night - often fighting the morning rush hour! That's not just 240 miles for every single session, but 240 miles including rather heavy traffic for about half the trip. And of course, taking everything out of the car and setting it all up at the beginning of the session, then tearing it down and packing it back into the car at the end, every... single... night.
Here's an image of the three scopes, set up in the open dome at DARC, capturing data for the project:
Left to right: Eric's, Al's and my scope. The distortion you see in the image is due to a 10mm lens being used to take this photo. If you pay attention you might even see Eric's "ghost" sitting at his table :-)
I believe Eric imaged almost 14 days in a row (!!!). I know I did 8 days in a period of 14, and although I don't remember Al's schedule, it was just a tad shorter than mine. Some simple math then tells us that if you add up the miles driven by Eric, Al and me during these 14 days of the New Moon period to the DARC Observatory, we're talking about approximately 7,000 miles and over 110 hours at the wheel. I've lost track of the time spent at the site, but the numbers are probably just as crazy. The fun part? We were not even nearly done, with another Herculean... I mean, Perseulean effort coming up for the next New Moon.
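If you want to check that "simple math" yourself, here's the envelope I scribbled on. Al's night count and the average speed are my rough assumptions:

```python
# Every imaging night meant a full round trip to DARC and back.
nights = {"Eric": 14, "Rogelio": 8, "Al": 7}   # Al's ~7 is a guess
round_trip_mi = 240
avg_mph = 60                                   # assumed: mixed freeway + rush hour

total_mi = sum(nights.values()) * round_trip_mi   # 29 trips * 240 = 6,960 miles
total_h = total_mi / avg_mph                      # ~116 hours at the wheel
```

6,960 miles and ~116 hours - right in line with the "approximately 7,000 miles and over 110 hours" above.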
Besides everything I've mentioned, driving 2 hours to a dark site on weekdays is no easy business either, as we all have to, well, WORK the next day. So although none of us was using automated software to run our sessions, whenever we had a chance we would set up a cot inside the observatory and try to catch a few zzz's... just not too many, and certainly never more than a few in a row... Here's my cot waiting for me:
By the way, don't be fooled :-) Although the image is all bright and the room looks nice and lively, during our imaging sessions we keep the building under minimal red lighting only, so the atmosphere in the room is always rather gloomy when we're there doing our thing. Don't get me wrong, I love DARC; I just don't want you to get the impression that during our imaging sessions we enjoy "normal" lighting and activities :-)
The new moon during late August and early September wasn't nearly enough to complete the mosaic. We needed great weather during the next New Moon, and so we waited patiently for it. And when it came, although the forecast for the first few days wasn't so great, Mother Nature gave us a break, just long enough.
This time around, Al and Eric kept going to DARC, but I headed to the Central Nevada Star Party (CNSP), seeking some of the darkest skies in the country. Rather moody weather didn't give me the great skies the CNSP is famous for, but I was able to finish my part under fairly good skies after all - just not nearly as dark as the CNSP site can get. Al and Eric also managed to finish their part, not without some sacrifice along the lines of "I wish I were done, I could use some rest, but I need to go again tomorrow" a couple of times.
Take away one night and we would not have been able to finish. It was THAT close!
But the weather cooperated and gave us all the nights we needed, and finally, after another intense New Moon period, we were ... done!!
Of course, I had this little incident I wrote about the other day, but the outcome from that night was good, and so we finally had all the data we wanted.
With all the data, what was left for us to do was to calibrate our subs, generate our own master L, R, G and B - and H-Alpha for Bob - and share the masters with the team.
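For those unfamiliar with that step: "calibrate the subs and generate the masters" boils down to removing the camera's signature from every subexposure and then stacking them with outlier rejection. Here's a toy numpy illustration of the idea - not PixInsight code, and grossly simplified (tiny fake frames, a simple 3-sigma clip):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 4x4 fake frames stand in for real ~11-16 megapixel ones.
dark = np.full((4, 4), 100.0)                 # master dark: bias + thermal signal
flat = np.full((4, 4), 2.0)
flat[0, 0] = 1.0                              # pretend vignetting in one corner
subs = [dark + flat * (500.0 + rng.normal(0, 5, (4, 4))) for _ in range(12)]
subs[3][2, 2] += 5000.0                       # a cosmic-ray hit in one sub

# 1. Calibration: subtract the dark, divide by the normalized flat.
flat_norm = flat / flat.mean()
cal = np.array([(s - dark) / flat_norm for s in subs])

# 2. Integration: a mean stack with 3-sigma rejection kills outliers
#    (cosmic rays, satellite trails) that a plain average would smear in.
med = np.median(cal, axis=0)
sig = cal.std(axis=0)
keep = np.abs(cal - med) < 3.0 * sig
master = np.nanmean(np.where(keep, cal, np.nan), axis=0)
```

Each of us ran the real version of this on our own subs (in PixInsight, with far more sophisticated rejection), producing one master per filter to share with the team.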
There was no one particular person assigned to do the processing. The idea was basically to share the data, and whoever wanted to have a go would just do it. In the end, only Eric and I decided to go for it, and he ended up conceding the work to me, as he got extremely busy at work and didn't have enough time to finish his processing on time. In the next part of this article I will talk about how I processed the data, from mosaic construction to final presentation, and everything in between.
Posted: October 6th, 2011
I don't know if this is "extreme"... To some, this story may just sound nuts. Others, I think, understand perfectly what I'm talking about and have their own stories far more extreme than this one... Yeah... I think sometimes I've gone to extremes way beyond what I'm about to relate, but the title seemed catchy enough, so it stays :-)
Well, it's no secret anymore that a few colleagues (Bob Caton, Al Howard, Eric Zbinden and myself) have invested over 125 hours of imaging (and probably even more of combined driving) in producing a macro-mosaic that will be displayed at this year's Advanced Imaging Conference 2011.
Bob Caton standing in front of the huge light box
Although we haven't started to process the data just yet, what has been done so far is quite a feat. Still, two days ago I realized that the green data for one of the panes assigned to me was just rubbish (I'll spare you the details), so here I am, Thursday October 6th, loading up my car, because tonight - with a big bright Moon that sets at 3am - I have the very last chance to head to a dark location and capture the minimum 2 hours of green data that I need, from 3am to 5am. Otherwise the project will not see the light in time for AIC.
Yes, I work tomorrow. And yes, I'm tired and, sure enough, ready to go to bed... And although the forecast is for clear skies, temperatures in the mid 30s are expected (hey, those of you up north: this is California and it's only early October... ok? ;-), along with extremely high humidity. I would never go out to capture data for a 3-5am session alone, but if I don't do it, all the hours already invested will have been for nothing, meaning the image will never make it in time for the AIC event.
Where to go? I could go to Montebello, an easy 30-minute drive from home, but its skies have little in common with the skies under which we captured all the data. Sure, it's "just" color data, but still. It would also mean breaking the Montebello OSP rules, which dictate that we can be there for astronomy only until 2am and no later!
I could go to Henry Coe... The target will be at the zenith between 3-5am and Coe's skies would likely be sufficient for green data, but let's face it, Coe can feel spooky when you're up there all alone (or in the company of mountain lions), and the thought of arriving at Coe around 2am, under near-freezing temps and over 95% humidity, just is NOT my idea of fun. Or I could go to the DARC Observatory, a rather safe and not-spooky-at-all site, but the 2-hour drive to get there - or, better said, to get back home - would conflict with my schedule: I wouldn't get out of there before 5:30am, meaning getting home - after fighting morning rush hour - around 7:30 or later, and my kids would be late for school...
So the choice is between Spooky Coe and Bright Sky Montebello (risking my access to the site if caught there after 2am). Since I don't want to break any rules that would compromise my access - or anyone else's - to Montebello, I guess the choice is clear, and in about 2 hours I'll leave for Henry Coe State Park and deal with the spookiness, the cold and the humidity all night, pretty much until sunrise. That is, of course, assuming my access to the overflow parking lot isn't stopped by a gate I cannot (legally) unlock, as I haven't been there in months!
Now, I mentioned earlier that having to leave at midnight for a far-away dark location and fight the cold, humidity and, yeah, spookiness, is anything but fun, but here's the thing... Although it would be very, very nice to simply hit a few buttons from home and capture the data remotely while I sleep, as many people do nowadays, nomadic imaging is not only about the data, the processing or the presentation, but also about the adventure and a million other things. And despite the difficulties, the inconveniences, the lack of sleep, the expense (gas is not cheap!) and everything else that comes with it... in the end, at least to me, it makes it all much more worthwhile, and I wouldn't change that for the world.
And trust me when I tell you that Eric and Al also had their share of issues, such as driving 2 hours to DARC only to capture barely one hour of data due to clouds, then driving 2 hours back home late at night, tired and all...
Maybe to some people - not everyone! - astrophotography does taste better when you actually have to sweat for it, I don't know... I still dream about a remote observatory, don't get me wrong. But that won't stop me from taking trips to dark sites. The best image cannot compare to being under the stars.
And in the end, if everything goes according to plan, I'll get that green data and on November 4th, there'll be a giant lightbox at the AIC for everyone to, hopefully, enjoy.
Well, everyone but myself. Things being what they are, and even though I haven't missed a single AIC since I got into this hobby, this year my "astro budget" has reached its limit and I cannot afford the registration fee. Either way, if you go to AIC this year, I hope you enjoy the exhibit, and maybe remember that getting all the data wasn't all that easy :-) ...
Posted: September 12th, 2011
Can an astrophoto represent reality of what is out there? Can an aesthetically-driven astroimage have scientific interest? Can we talk about science versus art, when comparing astroimages that have been minimally processed with images that have gone through some more complex post-processing? Do minimally processed astroimages have more value than those with a more involved post-processing?
These being recurring topics in the astroimaging community, I've decided to post my thoughts here - it will make things easier next time someone brings up these issues, once again, somewhere... :-)
(Throughout this article I use the term "minimally processed" and similar to refer to images whose post-processing includes only a small set of operations such as deconvolution, DDP, some non-linear histogram transform and little more. It is not meant as a derogatory term in any way.)
And here's what I think...
Can an astrophoto represent reality of what is out there?
I believe that in astrophotography there's no such thing as a natural or realistic appearance. Reality in an image is simply impossible to depict, and even more so in astrophotography. The reasons why I strongly believe this would take some writing, and there are other points I'd like to cover before you fall asleep, so I'll probably come back to this topic at a future date. For now, just think for a second: we're trying to represent objects and structures that are thousands or millions of light years away, often larger than what our minds can even conceive... and we're doing that right in front of our eyes, on a monitor that at most is just a few inches wide (not to mention the extremely poor dynamic range monitors can reproduce). How's that for real?
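To make that dynamic range point concrete: before you ever see an astroimage, its data has typically gone through a drastic non-linear remap just to fit on a screen at all. Here's a minimal sketch of a midtones transfer function of the kind histogram-stretch tools use (values in [0, 1]; the sample numbers are just my illustration):

```python
def mtf(x, m):
    """Midtones transfer function: a non-linear stretch that maps the
    midtones balance m to 0.5 while leaving 0 and 1 fixed."""
    if x in (0.0, 1.0):
        return x
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A faint pixel at 1% of full scale, stretched with a low midtones point,
# lands around 16% - that's how faint nebulosity becomes visible at all.
faint = mtf(0.01, 0.05)
```

So even the most "minimal" presented image already involves an operator-chosen, aggressively non-linear mapping of the original data.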
Can an aesthetically-driven astroimage have scientific interest?
Actually, I don't think scientific interest is something that needs to pass the "is it minimally post-processed?" test.
The way I see it, there will be aesthetics-driven images that ignite some scientific interest, and likewise, there will be minimally processed images that never attract the interest of scientists at all. It's quite simple. For example, when some astronomers saw this image I took of the Virgo galaxy cluster (overprocessed to some, of course), they contacted me to ask for a non-linear stretch of the raw data - which I provided, and it proved to be quite interesting (I cannot say more than that at this time, sorry). Had I not pushed the post-processing with techniques such as HDRWT, wavelets, morphological transformations, etc., the image likely wouldn't have ignited any "scientific interest" at all. Need I say more?
Yes, if your post-processing has introduced artifacts that haven't been seen before, you might ignite scientific interest for the wrong reasons - which is exactly why you must be careful not to introduce such artifacts! But other than that, this is quite simple; there shouldn't even be a debate.
Can we talk about science versus art, when comparing astroimages that have been minimally processed with images that have gone through some more complex post-processing?
This is another topic that keeps coming up, and I don't quite know why. It's as if there were some sort of consensus that astrophotography needs to be separated into "science-approved" images and "astro art" or something, when the way I see it, it's neither one nor the other...
I don't consider an astrophoto to be pure "science" once the image is no longer linear. So, unless the post-processing involved blatant inventions, I don't usually make the "science vs art" distinction for images that are non-linear when presented to the viewer, regardless of the amount of post-processing.
So when these topics come up on mailing lists, web forums or even in conversation, I don't think it's correct to talk about "science vs art" as if those folks who do minimal processing to their images were producing science-approved images while everyone else is doing just "astro art".
Of course, some of these folks will tell you otherwise, but the way I see it, in most cases both groups are producing something that inherits qualities from both disciplines - art and science. And that is what astrophotography really is, as far as I'm concerned. In simple terms, if it's only science, you're doing astronomy; if it's about aesthetics and nothing else, it's likely just art. To me, astrophotography is a bit of both - yet not a whole lot of either - and if one of them is missing, it's something else.
Are we not respecting the data when we apply post-processing techniques such as the star reduction method described here? Are we being unethical?
I disagree that techniques such as the "star reduction" method (link above) and many others show no respect for the data (more on that later) or are unethical. But regardless of what you think, to me, bringing up that question is, once again, missing the point completely about what the value of astrophotography is. In the next paragraphs you'll probably understand - whether you agree or not - what I mean by that.
Do minimally processed astroimages have more value than those with a more involved post-processing?
What I believe is that an image can have documentary value whether the pixels around a star have been dimmed, have inherited values from the surrounding pixels, or have been left intact after some operator-chosen non-linear histogram stretch. And that documentary value can be just as valid regardless of which of those operations was performed.
What matters is the intent of the operator, and there's a lot one can say about that. Of course, one can start "inventing features" to the point where the image loses its documentary value. This is not to say that doing such things is necessarily "wrong", because astrophotography can also have worthwhile emotional and aesthetic value, even if some people who claim to be science-driven may ridicule the idea... When you go that far, it simply means the image no longer has documentary value, and it should be treated, viewed and analyzed as such.
The issues of "respecting the data" and "ethics", when brought up as a matter of keeping post-processing to a minimal set of operations, besides being quite recurrent, don't mean much to me because, as I said, they often miss the mark.
As I said earlier, generally speaking, if you like to analyze data, you should stay in linear-land; once you cross that line, what matters is whether your image has documentary value. Whether you are content with a couple of post-processing operations such as deconvolution, a non-linear stretch and a few more, or take advantage of many other post-processing techniques that can indeed enhance the documentary value of your data, is a personal choice.
And within that personal choice, in some cases, and depending on the goals, a minimalist processing may in fact be a very good choice for bringing up some very good documentary value in an astroimage (some people in fact favor the look of such images, but we're not talking about the look of astroimages here, so comments in that regard aren't needed in this discussion).
What I believe, however, is that by limiting yourself during post-processing to a restricted set of techniques "because I want to respect the data", laudable as it might be, you might also be missing an opportunity to increase the documentary value of your image, while still respecting your data. And here's the thing... As long as you are increasing the documentary value of your image, you are respecting the data, simply because you're utilizing the data to present an image that not only holds value beyond pure aesthetics, but also maximizes what it is really worth. It is only when you depart from that documentary value that your respect for the data decreases.
So, if maximizing the possibilities of your data with the purpose of increasing the documentary value of your image is - according to some people - a "disrespect" for your data, what exactly is it to NOT maximize it and produce an image with likely less documentary value?
Of course, some people may not buy this explanation about "documentary value" or may view it differently. If they did, I wouldn't have a reason to write this, now would I? :-)
In any case, if you want your image to possibly miss out on that increased documentary value because of the way you value your data or because of your beliefs about what is ethical or not, that's fine. As I said, that can be a valid option. However, I have no desire to show complacency toward any statement that regards astrophotography as a discipline that should limit itself to a couple of simple techniques during post-processing in order to have value or to be considered ethical, because, as stated, IMHO, where those who think that way believe the value ends, some of us take over and keep adding the value they ignore, disregard, or are simply too shortsighted to recognize.
From this point of view, I don't think minimally processed astroimages have more value than those with a more involved post-processing - to me, often times it's quite the opposite, in fact. And this is without getting into the topic of aesthetic value, because that's another deal, not to be disregarded.
An image is worth a thousand words
Recently I took an image of the Great Square of Pegasus, a 20-pane mosaic. After all the work of putting the mosaic together seamlessly, my first post-processing steps were basic non-linear histogram adjustments. Right before I started to utilize more advanced techniques, the image looked pretty much like this:
That is the equivalent of what some would describe as "minimalistic processing". And I could have stopped there. And that image has some undeniable value. But a strong (linear or not) inverted stretch revealed a lot of faint structures, and I wanted to visually document those structures. Not measure, not analyze, simply trying to produce an image that would be able to show the shape, position and relative surface brightness of those structures, hopefully without destroying the appeal of this starry area of the sky. For that task I knew I had an arsenal of techniques - not tricks - that could aid me in reaching that goal (and for those interested, no, such techniques don't involve the use of the brush, lasso or similar tools). So there was my choice. Should I stop here and present an image of lots of stars, or go further in the post-processing? Well, to me it wasn't even a choice. I knew I wasn't going to stop there... A reduced version of the final image is here:
Now, when you end up with an image like the one above, you have to expect that some people are going to say - or think - that the image has been overprocessed, perhaps even say things like "those clouds of dust look like they're made out of plastic" and other nasty stuff. Um... How is it possible that supposedly smart people can in fact react with such ignorant comments? Let me tell you upfront that the dust clouds you see above not only exist and are up there, but their shape and position match exactly what you see in the image, at least to the point I was able to capture (not post-process) their signal. All that stuff was in my data, but the only way to make it surface was by using post-processing techniques that those who defend minimally processed images either don't know or, at best, don't want to use (more often than not, they really don't know - after all, why learn about something you're not interested in anyway?).
Is this all about beauty? If I cared about just beauty, why would I want "my" dust clouds to look like plastic? (that's assuming that's how they really look)... Now I ask you... which photograph better documents what's going on up there? Why should I limit the processing on this image, due to whatever some ethics dictate, and show a patch of the sky with nothing but stars and a few tiny galaxies, when I could greatly increase its value and show all that really is going on, even if that means pushing the data to its very limits? Maybe ethical in this case means we'd rather not see what's behind all those stars? Well, I do.
Final words
Everything you've read so far is not meant to justify aesthetics, documentary-driven astrophotography or advanced astroimage processing techniques. To me, they're plentifully justified and need no exculpation. This article is simply an attempt to share my views on a much discussed topic, on which I think some people, for whatever reason, tend to disregard or downplay astroimages that include more than a simple non-linear stretch, while, in my very humble opinion, as stated, advanced post-processing techniques can be used to increase the documentary value of your data.
Last, let me add... While it's true that some people resort to "easy Photoshop tricks" to post-process their astroimages, advanced image processing techniques aren't what I'd call "easy" tasks, and calling anything that goes beyond a non-linear stretch a "trick" often simply denotes either ignorance or arrogance - usually both. Advanced post-processing techniques require study, learning, experimentation, patience and sometimes frustration, unlike minimalist processing, which oftentimes doesn't require any of that. You can choose to use them or not, but be respectful with your peers when you state your opinions; otherwise, the only one who may look clueless will be you - although of course, you will never ever think that's the case (back to arrogance and ignorance).
Of course, learning and experimenting with new post-processing techniques and paradigms can also be challenging, rewarding and fun. And who is to tell others how they should have fun? Aren't those some of the most valuable reasons we embarked on this journey after all?
Posted: September 11th, 2011
If you've been following my whereabouts when I go to Spain to do astrophotography (usually during summer time and Christmas), you probably know that Pinar de Araceli is my favorite location. I actually feel privileged because, this being one of the darkest sites in Spain and at a decent altitude (1,680 meters / 5,511 feet), it's only a "short" 1:45 drive from my home here in Spain (Murcia) without speeding too much (the speed limit on Spain's highways is 120 km/h or 75 mph, so that helps a bit).
This past summer I visited Pinar de Araceli 6-7 nights, and although I didn't take many images of the scenery, I did manage to take a few, and I decided to write this small photo-documentary, hoping to convey a bit of the experience, although of course, as with any photograph, nothing beats actually being there.
Most of the nights, my SQM easily reached 21.7 which I consider pretty good and average for the site. Anyway, here's the report... Enjoy!!
The road to Pinar de Araceli from Murcia - where I live when I'm in Spain - is quite comfortable for the most part. Lots of highway, and only for the last 20 minutes, climbing up the mountain, does it get narrower and windier, but nothing we're not used to, right? This photo below is actually from the easier part of the drive to the top, but I like the stone blocks at the side of the road.
By taking a quick detour, you go through narrow roads such as this one...
...that'd take you to a nice vista point of La Sagra, a 2,383 meter (7,811 feet) mountain peak formed mainly of limestone and loam. Beautiful in winter when it's all covered in snow; it can't quite match that look during the hot summer months. It looks like a peaceful, nice mountain top, but it's not always like that (check this video on YouTube from a few guys at the summit during less than ideal conditions).
Following the detour to La Sagra, in the middle of absolutely nowhere, this house appears. It has a name, but I forgot it... No, it's not a Mission. This is Spain, not California :-)
Only 4km from El Pinar, you find a sign telling you that you're 27km from Nerpio. Why do I mention that? Well, if you've ever used any of the GRAS telescopes, you probably know that the scopes they have in Spain are actually in Nerpio. Guess it's also kind of like "you're in dark skies country" zone or something ;-)
Once at the Pinar, you see some of the famous cabins. They have about 20 of them...
It's getting late, so we'd better start setting up!
So here's my gear in front of the cabin, one of the nights I was up there all by myself.
And here's another photo of my setup capturing photons!
I couldn't leave without taking a photo of the majestic Milky Way and the cabin. This was actually on a night when there were several of us doing our thing...
As the Sun starts to rise, some amazing colors make you feel the night was good, and worth the trip.
Here are a few photos I took going back home one of the times I went there. This one is at one of the peaks, at around 1,600 meters high (5,250 feet), right before sunrise:
And what do you know... Once you're out of the sierra, on the way from La Puebla to Caravaca... Fog!! Yeah, we get that here too ;-)
If you're like me, and tend to head home around sunrise, on the way back you may get rewarded with some nice vistas such as this one...
Or this one...
Then... Back to reality! It was good to be there. Hope to be back soon!
Posted: February 7th, 2011
If you live somewhere around the San Francisco Bay Area, these weather links may be helpful to you. They are to me!
Clear Sky Charts
NOAA Weather Forecast
Satellite Images and Models
When a star party approaches, I make sure the weather's going to be good before taking the hike! Here's a bunch of weather links for the star parties I usually attend...
Lake San Antonio (Calstar)
GSSP (Golden State Star Party)
CNSP (Central Nevada Star party)
We don't want these to happen, but...
Unfortunately, sometimes they do happen, and while clear skies are often the least of everybody's concerns, it's not a bad thing to be informed...
Very local stuff is sometimes important, for me at least.
Posted: January 16th, 2011
A few days ago I wrote an article about my thoughts on HDR compositions for astro images, and why I felt that astrophotographers should take advantage of HDR composition tools when confronting certain dynamic range problems, rather than relying on hand-drawn selective overlays, so I figured a good complement to that article would be a few examples of using these techniques in action.
We can find dynamic range problems in all kinds of images, but we tend to associate dynamic range problems with images that contain areas that "burn" easily. For that reason, for this article I will use the most emblematic object in the sky that comes to mind when one has to solve this particular problem: M42.
To keep things as simple as possible, I will use only two images: one whose exposure time was 5 minutes, and another one with just 10 seconds of exposure. Of course, each image has been constructed from a number of subframes (6 each to be exact) and has previously been calibrated and gradient-corrected. This is not high quality data - to be honest I quickly acquired it during one session just for the purpose of writing this article - but it should be good enough to illustrate these examples. For better results, I would recommend using at least three different subsets - the exposure time would vary depending on your camera, optics and sky conditions, but a good base would be a subset of 10-15 minutes exposures, another subset of 3-5 minutes, and a third set of just 10 to 30 seconds.
Back to our set of two images, by doing a basic non-linear stretch to each image, we can reveal what's in each of the images:
As you can see, the 5-minute image contains a lot more information in the outer areas of the nebula, while the 10-second image barely has any information in the same area. Likewise, the core of the M42 nebula appears completely saturated in the 5-minute exposure due to a limitation in the available dynamic range, while the 10-second exposure does show most of the information in the same area.
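By the way, before any blending can happen, the two frames have to be put on a common linear scale, and with exposures like these, that scale factor is essentially the exposure-time ratio. Here's a minimal numpy sketch of that scaling step - the pixel values are made up for illustration, not taken from my actual data:

```python
import numpy as np

# Hypothetical linear pixel values, normalized to [0, 1].
# In the long exposure the core clips at 1.0; in the short one it doesn't.
long_exp  = np.array([0.0210, 0.3990, 1.0000, 1.0000])  # 300 s (5 min) frame
short_exp = np.array([0.0007, 0.0133, 0.0210, 0.0295])  # 10 s frame

ratio = 300.0 / 10.0          # exposure-time ratio = 30
short_scaled = short_exp * ratio

# Where the long exposure is unsaturated, the scaled short exposure
# agrees with it; where the long exposure clipped at 1.0, the scaled
# short exposure recovers the true relative brightness of the core.
```

In practice the scale factor is fitted from the unsaturated pixels both frames share (sky conditions can drift between exposures), but the exposure-time ratio shows the idea.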
Since I have shrunk the images in order to make them fit on this page, here's a closeup of the core of M42:
Now I will use these two images and perform an HDR composition using three different packages:
1) Photoshop, if anything because today it is still the most widely used image processing software for astroimages.
2) Photomatix, mainly to see how an application specifically designed to perform HDR compositions fares with astronomical images.
3) PixInsight, for being possibly the best software today developed entirely for astroimage processing.
Note that the purpose of this article is not to see which of these packages does a better job, though I will talk about that later.
The latest versions of Photoshop come with a "Merge to HDR" tool. Although I admit that this tool is not my favorite to do an HDR composition for a number of reasons, it is rather easy to use.
One of the most important limitations of Photoshop's HDR tool is that it only accepts 16 bit images, so the first step is to reduce the bit depth of our images from 32 bit float to 16 bits. Of course this is not ideal, but we have no choice.
Now I execute the "Merge to HDR" tool, which in CS5 is accessible by selecting the File menu option, and then the Automate submenu. Usually you will be asked to set the EV (exposure value) manually. You can either accept the default calculated values, or enter your own. Usually an EV spacing of 1 to 6 should suffice.
Once you OK the "Manually Set EV" dialog box, you'll be presented with the "Merge to HDR" tool. Here's where you do most of the work of making sure your image looks the way you want. Make sure the Mode is set to Local Adaptation and adjust the parameters. For a composition where the problem is mainly in the highlights, you will want to compress the highlights (set a small value for the Highlight option), leave the Gamma and Exposure values alone (or make very minor adjustments), and depending on your preferences, compress or leave alone the Shadows value. All remaining values (Details and Edge Glow parameters) can be adjusted to your own liking.
If you're still not quite satisfied with the results, you can adjust the histogram curves (notice the Curve tab). If the merged image just doesn't look right no matter what you do, you may want to go back and use different EV values for each image.
When you're happy with what you see in the preview, hit OK. Here's the resulting image I obtained, without any further processing. As you can see in the resulting image, I did not work the background at all, but that was a personal choice. Notice that the "structural" detail that you see in the image is not caused by the HDR composition per se, but by the edge enhancement and sharpening tools conveniently included in the Merge to HDR dialog box. Of course, at this point, you could (should) continue processing the image...
Photomatix is a more versatile tool than Photoshop for HDR compositions - it is after all a software designed for this task. It is also a software I'm not particularly familiar with, and I'm certainly not an expert in using it. One thing is clear: it's not a software with astroimage processing in mind, not even astroimage HDR composition. I'm including it here because it's probably one of the most popular HDR composition applications out there, and I do think it's beneficial to see how such programs fare when it comes to using them with astroimages.
Photomatix seems to accept 32 bit images, although it takes a really unusual amount of time to load them, and in the tests I've run it could never interpret them well, so generally speaking you'd probably be feeding Photomatix 16-bit images only, just like in Photoshop, which is what I had to do in this case, again not an ideal situation. Photomatix does actually create intermediate 32 bit images, which is cool, except for the fact that once you're done, you can only save them as 16 bit images (!!).
Just like with Photoshop, once you select the images you'd like to combine, Photomatix will ask you for the EV value, and again, you just have to make an educated guess. A value of 1 to 3 would work in a case like the one at hand. Do not ask Photomatix to show you the intermediate 32-bit HDR image.
After entering the EV value, Photomatix will offer you a few "processing options". I wouldn't use any of them, except for perhaps the noise reduction. Then, you're ready to adjust the HDR combination parameters...
Photomatix offers a number of presets, each of them with their own set of distinct parameters. My recommendation is to try the Exposure Fusion method first, and only if you don't get results that you like, try the Tone Mapping method. Since it's very easy to preview the different presets, just click away and adjust the parameters each method offers.
A word of caution: Photomatix can do really fancy stuff to your images. Although it does a good job creating a HDR composition, I suggest using it very gently. Keep your eye on the ball - you're not using Photomatix to process your image, but just to combine the different exposures. Get that done, and go fancy later with your usual image processing software. Here's just one of the many possible results that can be obtained with Photomatix with just a few clicks and a few slider adjustments:
The above image appears - to me at least - rather soft, so during post-processing I would probably apply some edge enhancement and other features to push the contrast of the image a bit further.
PixInsight is an astroimage processing application, so it usually lets us work the way we want with our images. To begin with, it allows us to work not only with 32 bit (int or float) images, but even with 64 bit float images. The HDRComposition tool in PixInsight can also work on linear images, and when doing so, it too will return a linear image, which is ideal to continue processing the image linearly after the HDR composition. It is, in fact, recommended to use it with linear images. However, for this article I will use the same set of two already stretched images, mainly to use the very same data on the three applications I'm using in this article, and later I'll write another article showing how to do the integration of these same images linearly with PixInsight's HDRComposition tool.
The HDRComposition tool in PixInsight is under Process > ImageRegistration. Please note that there's also an old script using the same name under Script > Utilities, but I strongly recommend using the module under ImageRegistration. Besides implementing a better scaling algorithm, the new module is much more robust, accurate and fast, and it does a lot of the thinking for you.
Just as with the previous examples, I will not go into detail about what each of the parameters and options does. Instead, I will simply comment on the adjustments I made for this particular case.
The HDRComposition tool in PixInsight doesn't ask us for exposure values; the tool itself does all the calculations to determine the weights.
Leaving the highlight/lowlight limit parameters with their default values, I may adjust the binarizing threshold to an amount that seems to cover the overexposed areas well (I used the 0.80 default value this time), and increase the mask smoothness (15) and maybe the mask growth (not this time), to generate smoother transitions in the final composition. Leaving the "Generate a 64-bit HDR image" option activated is nice, although you'd probably also obtain good results by producing a 32-bit image. In the end, as you see, the only parameter I've adjusted is the mask smoothness, and even the default value would probably yield good results. Bottom line: once you've added the images that will be combined, doing a HDR composition in PixInsight can often be a one-click operation.
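To give you an intuition of what those parameters are doing, here's my own simplified sketch of the idea in Python - this is NOT PixInsight's actual implementation (which uses proper 2-D kernels and fits the scaling from the data), just an illustration of the binarize-smooth-blend concept, shown in 1-D to keep it short:

```python
import numpy as np

def hdr_blend(long_img, short_scaled, threshold=0.8, smooth=3):
    """Blend a clipped long exposure with a scaled short exposure.

    A binary mask marks pixels above `threshold` in the long exposure
    (the binarizing threshold), then a box blur (standing in for the
    mask smoothness parameter) softens the transition between frames.
    """
    mask = (long_img > threshold).astype(float)
    kernel = np.ones(smooth) / smooth
    mask = np.convolve(mask, kernel, mode="same")
    return long_img * (1.0 - mask) + short_scaled * mask

# A tiny 1-D "scan line" crossing a saturated core (made-up values):
long_img = np.array([0.10, 0.20, 0.95, 1.00, 1.00, 0.30, 0.10])
short_scaled = np.array([0.10, 0.20, 0.60, 0.70, 0.65, 0.30, 0.10])
result = hdr_blend(long_img, short_scaled)
# Fully-masked core pixels come entirely from the short exposure,
# pixels far from the core keep the long exposure's values, and the
# smoothed mask produces a gradual transition in between.
```

Growing the mask (the mask growth parameter) would simply dilate the binary mask before smoothing it, pushing the transition a bit further out from the saturated region.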
Right after the HDR combination, which produced the image in the picture above, I would usually run one HDRWT pass (HDR Wavelets Transform) to enhance the local contrast of structures in a multiscale fashion. The old script I mentioned at the beginning included this option in the HDRComposition dialog box, but this option was dropped in the newer module and needs to be run separately. Needless to say, if you combined linear images, you should first do a non-linear histogram adjustment. Here I also used the default values - ok, now it's a two-click operation :-) This is the "final" result I obtained from running HDRComposition and HDRWT:
As in the other two examples, you should probably continue processing the data to craft the final rendition of the image.
First of all, as I said earlier, I would like to make clear that this article is NOT a comparison between the HDR tools of these three applications. The results that can be achieved with either tool can be very different by simply adjusting one or two parameters. Your experience with each package, and even your personal preferences will play a role that will determine the effectiveness and quality of the results. Also remember, the "final" images I present here are not really final. All you see here is what comes out of the HDR composition without any further processing, and as I said at the end of each example, usually you would perform further processing to the image before it becomes really final.
Now... Those who know me know I don't have a Pixel Police attitude when it comes to personal processing preferences, and in a way, I understand why some imagers continue to rely on the hand-drawn lasso tool or mask-painted selective overlays even though they know about these and other HDR combination tools. For that reason, the object of this article is simply to show that doing a HDR combination using tools designed for the task produces excellent results and need not be intimidating - quite the opposite, these tools are very easy to use, and as hopefully I have shown, in some cases with just a few clicks you can produce even better results than the old manual approaches.
I personally don't see anything wrong if you choose to continue using "old tricks" techniques, but having said that, I believe that getting to know your favorite HDR composition tool - and using it - is going to help you in the long run to deal with these situations in a methodical, productive and more efficient manner.
You have seen above how, in barely a couple of clicks, I was able to produce a perfectly acceptable HDR composition with PixInsight, which is a far cry from layering two images, selecting an area or painting a mask, blurring it, feathering it, blending it, readjusting the histograms, perhaps doing a touch-up here and there, etc.
Now... If you decide to use HDR composition tools to build HDR compositions (sounds logical, doesn't it?) then great! If not, that's fine, but should you ever find yourself giving advice to a novice, my recommendation would be to do your part and not limit yourself to perpetuating the "old tricks" when you know there are sophisticated tools for the job; at the very least, let newcomers to this discipline know that nowadays there are tools designed for this task that not only unleash the imager's creativity in more efficient and productive ways, but oftentimes also produce better results.
Of course, if you do that by pointing them to this article, even better :-)
Previous posts in Blog
- Sleepy astrophotography (January 2nd, 2011)
- Orion, head to toe - portrait and landscape posters available (December 20th, 2010)
- What is astrophotography... (November 20th, 2010)
- First DeepSkyColors Poster now Available (October 7th, 2010)
- Milestones and Accolades (September 20th, 2010)
- Pinar de Araceli Star Party 2010 (August 21st, 2010)
- CreativeCommons Required Attribution line (May 27th, 2010)
- 2010 Dark Sky Times Calendar (May 19th, 2010)
- Bay Area Local Clear Sky Charts (May 8th, 2010)
- Thank you, Sunnyvale... not! (April 23rd, 2010)
- Color and Saturation (April 21st, 2010)
- Pretty pictures (February 9th, 2010)
- The photographer of the dust (January 25th, 2010)
- Astrophotography and conventional photography (January 16th, 2010)
- New area: The DeepSkyColors Blog (January 9th, 2010)