RBA & AF Astrophotography

Weather links

Posted: February 7th, 2011

If you live somewhere around the San Francisco Bay Area, these weather links may be helpful to you. They are to me!

Satellite Images and Models

Clear Sky Charts

NOAA Weather Forecast

When a star party approaches, I make sure the weather is going to be good before taking the hike! Here's a bunch of weather links for the star parties I usually attend...

Lake San Antonio (Calstar)

GSSP (Golden State Star Party)

CNSP (Central Nevada Star Party)

We don't want these to happen, but...
Unfortunately, sometimes they do happen, and while clear skies are often the least of everybody's concerns, it's not a bad thing to be informed...

Sunnyvale weather
Very local stuff is sometimes important, for me at least.

NGC 2170 widefield

Posted: February 1st, 2011

Click here for a larger version

This field in the constellation Monoceros, featuring the bright nebula NGC 2170 in the middle, is an incredible mix of nebula types: reflection nebulae (the blue areas), emission nebulae (reddish) and dark nebulae (black), all bathed in dust scattered around the area.

One challenge when imaging this field is that the geostationary superhighway crosses right through the middle, which in this case produced an average of 4 to 6 satellite strikes on each of the subframes I took. Fortunately, stacking the subframes with a sigma clipping combine got rid of each and every strike in the final image.
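For the curious, here is a minimal sketch in Python/NumPy of what a sigma clipping combine does. This is a robust median/MAD variant, not PixInsight's exact algorithm, and the function name and `kappa` parameter are illustrative:

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0):
    """Average a stack of registered subframes, rejecting any pixel
    that deviates from the per-pixel median by more than `kappa`
    robust standard deviations. A satellite trail lives in only one
    subframe, so its pixels are outliers there and get rejected."""
    stack = np.asarray(frames, dtype=np.float64)       # (n_frames, h, w)
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)       # robust scatter
    sigma = 1.4826 * mad + 1e-12                       # MAD -> std estimate
    keep = np.abs(stack - med) <= kappa * sigma        # rejection mask
    n = keep.sum(axis=0)
    mean = np.where(keep, stack, 0.0).sum(axis=0) / np.maximum(n, 1)
    return np.where(n > 0, mean, med)                  # fallback: median

# Six fake 100-ADU subframes, one of them crossed by a "satellite":
frames = [np.full((2, 2), 100.0) for _ in range(6)]
frames[3][0, 0] = 1000.0
print(sigma_clip_stack(frames)[0, 0])   # the strike is gone: 100.0
```

Note that a plain mean/std clip often fails with only 6 frames (a single outlier inflates the standard deviation enough to hide itself), which is why the median/MAD estimate is used here.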

Get a poster, t-shirt, mug, mousepad... with this image!

HDR Composition with PixInsight

Posted: January 19th, 2011

As promised in my previous article, I'm going to explain one easy way to generate an HDR composition with the HDRComposition tool in PixInsight.

The data

The data is the same I used in my previous article, plus some color data to make it "pretty":
  • Luminance: 6x5 minutes + 6x 30 seconds (33 minutes)
  • RGB: 3x3 minutes each channel, binned 2x2 (27 minutes)
The two luminance sets are where we'll be doing the HDR composition. As I also mentioned in my last article, this is clearly NOT high quality data. I only spent about one hour capturing all of it (both luminance sets and all the RGB data) in the middle of one imaging session, just for the purpose of writing these articles.

Preparing the luminance images

Of course, before we can integrate the two luminance images, all the subframes for each image need to be calibrated/registered/stacked, and once we have the two master luminance images, we should remove gradients and register them so they align nicely. The calibration/stacking process can be done with the ImageCalibration and ImageIntegration modules in PixInsight, the registration can easily be done with the StarAlignment tool, and the gradient removal (don't forget to first crop "bad" edges left by dithering or misalignment) with the DBE tool.
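The calibration step itself is conceptually simple. Here is an illustrative Python/NumPy sketch of the per-subframe arithmetic that tools like ImageCalibration automate (function and variable names are mine, and real calibration also handles bias frames, hot pixels, etc.):

```python
import numpy as np

def calibrate_subframe(light, master_dark, master_flat):
    """Basic CCD calibration: subtract the dark (thermal signal plus
    bias), then divide by the flat, normalized to unit mean, to undo
    vignetting and dust shadows."""
    light = np.asarray(light, dtype=np.float64)
    dark_subtracted = light - master_dark
    flat_norm = master_flat / master_flat.mean()
    return dark_subtracted / flat_norm

# Synthetic example: a uniform 60 ADU scene seen through uneven optics.
dark = np.full((2, 2), 10.0)
flat = np.array([[1.0, 2.0], [1.0, 2.0]])       # right side more sensitive
scene = dark + (flat / flat.mean()) * 60.0      # what the camera records
print(calibrate_subframe(scene, dark, flat))    # a uniform scene again
```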

Doing the HDR composition

Now that we have our two master luminance images nicely aligned, let's get down to business. HDRComposition works really well with linear images. In fact, if you feed it linear images, it will also return a linear image - a very useful feature, as you can create the HDR composition and then process the result as if the already composed HDR image were what came out of your calibration steps. The first step then, once we have our set of registered images with different exposures (just two in this example), is to add them to the list of Input Images:
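To see why linearity matters, here is a toy Python/NumPy sketch of a linear HDR combination. This is a deliberately crude hard-threshold stand-in for HDRComposition (which fits the scale from the data itself and blends through a smooth mask); the names, threshold and exposure times are illustrative:

```python
import numpy as np

def hdr_compose_linear(long_exp, short_exp, t_long, t_short, sat=0.9):
    """Combine two registered *linear* masters of the same field.
    Because both images are linear, the short exposure can be put on
    the long exposure's flux scale simply by multiplying by the
    exposure-time ratio; it then replaces the pixels that are
    saturated (>= `sat`) in the long exposure. The result is still
    linear, ready for deconvolution or other linear-stage work."""
    scale = t_long / t_short               # e.g. 300 s / 30 s = 10
    short_scaled = short_exp * scale
    return np.where(long_exp >= sat, short_scaled, long_exp)

# A burned-out core pixel (1.0) next to a well-exposed one (0.5):
long_m = np.array([[1.0, 0.5]])
short_m = np.array([[0.06, 0.05]])         # the core is not saturated here
print(hdr_compose_linear(long_m, short_m, 300.0, 30.0))
```

The saturated pixel is replaced by the scaled short-exposure value while the well-exposed pixel is left alone, and the whole result stays on one consistent linear flux scale.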

With that done, we simply apply (click on the blue sphere), and we're done creating the HDR composition. Well, not exactly, but almost. Considering all we've done is open the HDRComposition tool, feed it the files and click "Apply", that's pretty amazing!

You could tweak the parameters, but really, the only things to adjust would be the parameters that define the mask creation: threshold, smoothness and growth, and as we shall see, the default values already work pretty well.

Of course, since we've integrated two linear images, our resulting image is also linear, which can be very handy if we'd like to apply processes that work much better on linear data, such as deconvolution. Still being linear, though, the image appears really dark on our screen.

Now, with data this marginal, the SNR of this image doesn't warrant a successful deconvolution. We can examine the noise in PixInsight in different ways, but in this case a simple histogram stretch (or the ScreenTransferFunction) is already enough to "see" it... Therefore, we'll skip the deconvolution and just do a basic histogram stretch, so we can see what's in there:
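For reference, the kind of non-linear stretch used here can be written down compactly. This is the standard midtones transfer function, the curve behind the midtones slider of tools like HistogramTransformation and ScreenTransferFunction (the Python wrapper is just a sketch):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, 1 -> 1 and the
    midtones balance `m` -> 0.5. A small `m` lifts the faint end of
    a linear image so it becomes visible on screen."""
    x = np.asarray(x, dtype=np.float64)
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

print(mtf(0.25, 0.25))   # the midtones balance itself maps to 0.5
```

Black and white points stay fixed while everything between them is lifted, which is exactly what makes the faint nebulosity (and the noise) show up.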

Horror! The core of M42 is, again, saturated!! Didn't we just do the HDRComposition to avoid this very problem??

Not so fast... The HDRComposition tool has actually created a 64-bit float image (we can change that, but it's the default, and as you should remember, we used the default values). That's a huge dynamic range - we're talking about 2^64, or roughly 18,000,000,000,000,000,000 possible discrete values! - almost impossible to comprehend (some might argue it's so huge it's unnecessary), and definitely quite hard to represent on a monitor that likely cannot display data at a depth larger than 8 bits (if this last sentence confuses you, please search the net for how monitors handle color depth - discussing that here is clearly out of the scope of this article). So the data is actually there; it's just that the dynamic range of the image is defined over such a large range of values that our screen cannot show the difference between values that are "relatively close" - that is, unless we work on it a bit more.
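If you want to put numbers on that, the count of available levels grows as 2^bits (strictly speaking, for a float image this counts bit patterns rather than distinct representable values, but it conveys the scale of the gap):

```python
# Discrete levels per bit depth - the gap between what a 64-bit image
# can store and what an 8-bit display path can show:
for bits in (8, 16, 32, 64):
    print(f"{bits:>2}-bit: {2 ** bits:,} levels")
```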

The HDRWT tool (HDR Wavelets Transform) comes to the rescue! What HDRWT does is apply dynamic range compression in a multi-scale fashion: it compresses only the structures at selectable scales, leaving the data at other scales untouched.
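The multi-scale idea can be sketched in a few lines of Python/NumPy. This toy version uses box blurs instead of real wavelet scaling functions, and the function and parameter names are mine, but it shows the principle: split the image into small-scale detail layers plus a large-scale residual, and compress only the residual:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur with edge replication (a crude stand-in
    for a wavelet scaling function)."""
    k = np.full(2 * radius + 1, 1.0 / (2 * radius + 1))
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, rows)

def multiscale_compress(img, n_scales=3, residual_gain=0.5):
    """Keep the small-scale detail layers untouched, attenuate only
    the large-scale residual, then rebuild the image."""
    details, current = [], np.asarray(img, dtype=np.float64)
    for s in range(n_scales):
        blurred = box_blur(current, 2 ** s)
        details.append(current - blurred)   # structures at this scale
        current = blurred                   # what remains: larger scales
    return sum(details) + residual_gain * current
```

With `residual_gain=1.0` the layers reassemble into the original image exactly, which makes for a handy sanity check; with a gain below 1, the large-scale brightness differences are compressed while fine detail survives untouched.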

So, again using the default parameters, this time of the HDRWT tool (it just so happens that they work really well for this image), we apply it to the image. Now the little details in the core of M42 finally become visible (don't worry if it's hard to see in the screenshot below, we'll zoom in later):

That actually completes the HDR composition proper. Yup, we're done with that. There's really nothing more to it.

Notice how the actual HDR composition was made by three incredibly simple steps:
  • Selecting the two images (or as many as we have) we want to combine and applying the default parameters of the HDRComposition.
  • Doing a basic histogram stretch.
  • Applying the default parameters of the HDRWT tool.
Compare this with the myriad of scenarios one would need to preview in HDR-specific programs such as Photomatix, or the rather unnerving tweaking the "Merge to HDR" tool in Photoshop may require. Or compare it with the "old trick" of doing things manually: layering up the two (or three, or four) images, making selections, feathering them (or painting a blurred mask), fixing and touching up the edge transitions between frames, setting the blending options, and readjusting the histogram (levels) to make things match nicely... You can see how much we have simplified the process, yet obtained a pretty good result!

Why would you want to do it any other way? ;-)

Taking a closer look

Once we've reached this stage, we can try to improve the results even more, depending on our particular goals. We're going to do this with a new histogram stretch, then another pass of the HDRWT tool. Here's our image after the histogram stretch:

Now, let's zoom in on the core of M42 to better see what's going on (of course, if you were doing this processing session, you could zoom in and out anytime). Here we can see that the core is fairly well resolved. Some people like it this way: bright "as it should be", they'd say. And you can't argue with that! (NOTE: there's some posterization in the screenshot below - the detail in the core actually showed up nicely on my screen).

So yeah... We could be very happy with that, but let's go for an even more dramatic look by compressing the dynamic range once again, multi-scale style, with the HDRWT tool. We could select a smaller number of layers for an even stronger effect, but the default parameters continue to produce a very nice result, and since we want to keep the processing simple, we'll use them again. This is what we get:

You may or may not like this result more than the previous one. Again, personal style, preferences and goals are what dictate our steps at this point, now that we have resolved the dynamic range problem.

The noise!

This is a good time to apply some noise reduction. We could also have done it earlier, before applying the last HDRWT. A quick noise analysis tells us that noise seems strong at small and very small scales, so we apply the ACDNR tool first to attack the very small scale noise (a StdDev value of 1.0), then do a second pass for the rest, say with a StdDev value of 3.0, and then, why not, readjust the histogram once again if we like.

Adding color

Although the exercise of solving the HDR composition was already covered in the very first section, since we've gone this far, let's just go and complete the image, adding some pretty colors to it.

For this purpose, I also acquired some very marginal color data, but it should suffice for this example. The data is just 3 subexposures of 3 minutes each for each color channel (R, G and B), all binned 2x2.

I'm going to skip the step-by-step details in preparing the RGB image, but just for the record, this is what was done:
After all this was done, we end up with the gray-scale image we have been working on so far, and a nice color image. We're now ready to do the LRGB integration:

Although PixInsight offers a nice tool (LinearFit) to adjust the lightness prior to integrating it with the color data, in this case the difference between both images is rather large, so I opted to skip the LinearFit step and manually adjust the Lightness parameter in the LRGBCombination tool. With that done and the LRGB already combined, I added a bit of gradual saturation with the ColorSaturation tool:
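For the curious, the essence of an LRGB combination can be sketched like this. This is a naive Python/NumPy version: the real LRGBCombination works in a CIE color space with adjustable lightness and saturation, so the function name and the simple channel rescaling here are mine:

```python
import numpy as np

def lrgb_naive(lum, rgb, eps=1e-6):
    """Impose a deep luminance image on a colorful (but shallower,
    binned) RGB image: rescale each pixel's channels so its
    brightness matches `lum` while its color ratios are preserved."""
    current_l = rgb.mean(axis=2)                 # crude per-pixel luminance
    gain = lum / np.maximum(current_l, eps)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

# One pixel whose color came out too dim in the RGB data:
rgb = np.array([[[0.1, 0.2, 0.3]]])
lum = np.array([[0.4]])
print(lrgb_naive(lum, rgb))   # same hue, brightness taken from the luminance
```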

We're almost done. The image looks very nice - especially considering how marginal the data is - but my personal taste tells me it could use a bit more contrast. For that, I use the DarkStructureEnhance script along with another histogram adjustment, and with that done, I call it a day. Here's the image, reduced in size so that it fits on this page:

And here's a closeup of the core of M42 at the original scale:

As stated earlier, this is rather marginal data, so don't expect Hubble quality here! And of course, the resolution isn't astonishing either because I used a FSQ106EDX telescope with a focal reducer (385mm focal length) and a STL11000, which combined, give an approximate resolution of 4.8 arcsec/pixel. Then again, the purpose of this article is not to produce a great image, but to highlight how easy it is to deal with this particular high dynamic range problem with the HDRComposition tool in PixInsight.
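The quoted image scale follows from the usual plate-scale formula (206,265 arcseconds per radian), assuming the STL-11000's 9 µm native pixels:

```python
def plate_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel: 206.265 times the pixel
    size in microns, divided by the focal length in millimeters."""
    return 206.265 * pixel_um / focal_mm

# STL-11000 (9 um pixels) behind an FSQ-106EDX reduced to 385 mm:
print(round(plate_scale(9.0, 385.0), 1))   # -> 4.8 arcsec/pixel
```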

If you've been dealing with this problem by using the "old trick" of layering the images and "uncovering" one over the other via lasso selections or painted masks, I don't know if this brief article will persuade you to try doing it differently next time, but if you decide to stick to the "old ways", I hope you at least remember that, well, there is a much, much easier way ;-)

Hope you liked it!

HDR Composition for astronomical images

Posted: January 16th, 2011

A few days ago I wrote an article about my thoughts on HDR compositions for astro images, and why I felt that astrophotographers should take advantage of HDR composition tools when confronting certain dynamic range problems, rather than relying on hand-drawn selective overlays, so I figured a good complement to that article would be a few examples of using these techniques in action.

We can find dynamic range problems in all kinds of images, but we tend to associate them with images that contain areas that "burn" easily. For that reason, in this article I will use the most emblematic object in the sky that comes to mind when one has to solve this particular problem: M42.

The data

To keep things as simple as possible, I will use only two images: one whose exposure time was 5 minutes, and another one with just 10 seconds of exposure. Of course, each image has been constructed from a number of subframes (6 each to be exact) and has previously been calibrated and gradient-corrected. This is not high quality data - to be honest I quickly acquired it during one session just for the purpose of writing this article - but it should be good enough to illustrate these examples. For better results, I would recommend using at least three different subsets - the exposure time would vary depending on your camera, optics and sky conditions, but a good base would be a subset of 10-15 minutes exposures, another subset of 3-5 minutes, and a third set of just 10 to 30 seconds.

Back to our set of two images, by doing a basic non-linear stretch to each image, we can reveal what's in each of the images:

As you can see, the 5 minute image contains a lot more information in the outer areas of the nebula, while the 10 second image barely has any information in the same area. Likewise, the core of the M42 nebula appears completely saturated in the 5 minute exposure due to a limitation in the available dynamic range, while the 10 second exposure retains most of the information in the same area.

Since I have shrunk the images to make them fit on this page, here's a closeup of the core of M42:

Now I will use these two images and perform an HDR composition using three different packages:

1) Photoshop, if anything because today it is still the most widely used image processing software for astroimages.
2) Photomatix, mainly to see how an application specifically designed to perform HDR compositions fares with astronomical images.
3) PixInsight, for being possibly the best software today developed entirely for astroimage processing.

Note that the purpose of this article is not to see which of these packages does a better job, but I will talk about that later.


The latest versions of Photoshop come with a "Merge to HDR" tool. Although I admit that this tool is not my favorite to do an HDR composition for a number of reasons, it is rather easy to use.

One of the most important limitations of Photoshop's HDR tool is that it only accepts 16 bit images, so the first step is to reduce the bit depth of our images from 32 bit float to 16 bits. Of course this is not ideal, but we have no choice.

Now I execute the "Merge to HDR" tool, which in CS5 is accessible under File > Automate. Usually you will be asked to set the EV (exposure value) for each image manually. You can either accept the default calculated values or enter your own. Usually an EV spacing of 1 to 6 should suffice.
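If you'd rather compute the spacing than guess it, EV differences are just stops: one EV per doubling of exposure time (a sketch under the assumption that aperture and ISO are fixed, which is the case for our subframes; Photoshop only needs the relative spacing between frames):

```python
import math

def ev_spacing(t_long_s, t_short_s):
    """Exposure-value difference between two exposures taken at the
    same aperture and gain: one EV per doubling of exposure time."""
    return math.log2(t_long_s / t_short_s)

# 5-minute vs 10-second subframes, as in this article's data:
print(round(ev_spacing(300.0, 10.0), 1))   # -> 4.9 stops apart
```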

Once you OK the "Manually Set EV" dialog box, you'll be presented with the "Merge to HDR" tool. Here's where you do most of the work of making sure your image looks the way you want. Make sure the Mode is set to Local Adaptation and adjust the parameters. For a composition where the problem is mainly in the highlights, you will want to compress the highlights (set a small value for the Highlight option), leave the Gamma and Exposure values alone (or make very minor adjustments), and depending on your preferences, compress or leave alone the Shadows value. All remaining values (Details and Edge Glow parameters) can be adjusted to your own liking.

If you're still not quite satisfied with the results, you can adjust the histogram curves (notice the Curve tab). If the merged image just doesn't look right no matter what you do, you may want to go back and use different EV values for each image.

When you're happy with what you see in the preview, hit OK. Here's the resulting image I obtained, without any further processing. As you can see, I did not work the background at all, but that was a personal choice. Notice that the "structural" detail you see in the image is not caused by the HDR composition per se, but by the edge enhancement and sharpening tools conveniently included in the Merge to HDR dialog box. Of course, at this point, you could (should) continue processing the image...


Photomatix is a more versatile tool than Photoshop for HDR compositions - it is, after all, software designed for this task. It is also software I'm not particularly familiar with, and I'm certainly not an expert in using it. One thing is clear: it's not software with astroimage processing in mind, not even astroimage HDR composition. I'm including it here because it's probably one of the most popular HDR composition applications out there, and I do think it's beneficial to see how such programs fare when used with astroimages.

Photomatix seems to accept 32 bit images, although it takes an unusually long time to load them, and in the tests I've run it could never interpret them well, so generally speaking you'd probably be feeding Photomatix 16-bit images only, just like in Photoshop - which is what I had to do in this case, again not an ideal situation. Photomatix does actually create intermediate 32 bit images, which is cool, except for the fact that once you're done, you can only save them as 16 bit images (!!).

Just like with Photoshop, once you select the images you'd like to combine, Photomatix will ask you for the EV values, and again, you just have to make an educated guess. A value of 1 to 3 would work in a case like the one at hand. Do not ask Photomatix to show you the intermediate 32-bit HDR image.

After entering the EV value, Photomatix will offer you a few "processing options". I wouldn't use any of them, except for perhaps the noise reduction. Then, you're ready to adjust the HDR combination parameters...

Photomatix offers a number of presets, each of them with their own set of distinct parameters. My recommendation is to try the Exposure Fusion method first, and only if you don't get results that you like, try the Tone Mapping method. Since it's very easy to preview the different presets, just click away and adjust the parameters each method offers.

A word of caution: Photomatix can do really fancy stuff to your images. Although it does a good job creating a HDR composition, I suggest using it very gently. Keep your eye on the ball - you're not using Photomatix to process your image, but just to combine the different exposures. Get that done, and go fancy later with your usual image processing software. Here's just one of the many possible results that can be obtained with Photomatix with just a few clicks and a few slider adjustments:

The above image appears - to me at least - rather soft, so during post-processing I would probably apply some edge enhancement and other features to push the contrast of the image a bit further.


PixInsight is an astroimage processing application, so it usually lets us work the way we want with our images. To begin with, it allows us to work not only with 32 bit (integer or float) images, but even with 64 bit float images. The HDRComposition tool in PixInsight can also work on linear images, and when doing so, it too will return a linear image, which is ideal for continuing to process the image linearly after the HDR composition. It is, in fact, recommended to use it with linear images. However, for this article I will use the same set of two already stretched images, mainly to use the very same data in all three applications; later I'll write another article showing how to do the integration of these same images linearly with PixInsight's HDRComposition tool.

The HDRComposition tool in PixInsight is under Process > ImageRegistration. Please note that there's also an old script with the same name under Script > Utilities, but I strongly recommend using the module under ImageRegistration. Besides implementing a better scaling algorithm, the new module is much more robust, accurate and fast, and it does a lot of the thinking for you.

Just as with the previous examples, I will not go into detail about what each of the parameters and options does. Instead, I will simply comment on the adjustments I made for this particular case.

The HDRComposition tool in PixInsight doesn't ask us for exposure values. The tool itself does all the calculations needed to determine the weights.

Leaving the highlight/lowlight limit parameters at their default values, I may adjust the binarizing threshold to an amount that seems to cover the overexposed areas well (I used the 0.80 default value this time), and increase the mask smoothness (15) and maybe the mask growth (not this time) to generate smoother transitions in the final composition. Leaving the "Generate a 64-bit HDR image" option activated is nice, although you'd probably also obtain good results by producing a 32-bit image. In the end, as you can see, the only parameter I've adjusted is the mask smoothness, and even the default value would probably have yielded good results. Bottom line: once you've added the images to be combined, doing an HDR composition in PixInsight can often be a one-click operation.
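To make those mask parameters concrete, here is a toy Python/NumPy sketch of the kind of blend they control: binarize, soften, blend. The box smoothing and the names are mine, not PixInsight's actual algorithm:

```python
import numpy as np

def mask_blend(long_exp, short_scaled, threshold=0.8, smoothness=3):
    """Binarize the long exposure at `threshold` to find overexposed
    areas, soften the mask edges over roughly `smoothness` pixels,
    then use the mask to blend in the (registered, flux-matched)
    short exposure."""
    mask = (long_exp >= threshold).astype(np.float64)
    k = np.full(2 * smoothness + 1, 1.0 / (2 * smoothness + 1))
    pad = np.pad(mask, smoothness, mode='edge')
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    mask = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, rows)
    return mask * short_scaled + (1.0 - mask) * long_exp
```

Increasing `smoothness` widens the transition zone between the two exposures, while mask growth would amount to dilating the binary mask before the smoothing step.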

Right after the HDR combination, which produced the image above, I would usually run one pass of HDRWT (HDR Wavelets Transform) to enhance the local contrast of structures in a multiscale fashion. The old script I mentioned at the beginning included this option in the HDRComposition dialog box, but it was dropped in the newer module and needs to be run separately. Needless to say, if you combined linear images, you should first do a non-linear histogram adjustment. Here I also used the default values - ok, now it's a two-click operation :-) This is the "final" result I obtained from running HDRComposition and HDRWT:

As in the other two examples, you should probably continue processing the data to craft the final rendition of the image.


First of all, as I said earlier, I would like to make clear that this article is NOT a comparison between the HDR tools of these three applications. The results that can be achieved with any of them can change dramatically by adjusting just one or two parameters. Your experience with each package, and even your personal preferences, will also determine the effectiveness and quality of the results. Also remember that the "final" images I present here are not really final. All you see is what comes out of the HDR composition without any further processing, and as I said at the end of each example, you would usually continue processing the image before calling it final.

Now... Those who know me know I don't have a Pixel Police attitude when it comes to personal processing preferences, and in a way, I understand why some imagers still choose to rely on hand-drawn lasso selections or mask-painted selective overlays even though they know about these and other HDR combination tools. For that reason, the object of this article is simply to show that doing an HDR combination with tools designed for the task produces excellent results and need not be intimidating - quite the opposite: these tools are very easy to use, and as I hopefully have shown, in some cases just a few clicks can produce even better results than the old manual approaches.

I personally don't see anything wrong if you choose to continue using "old tricks" techniques, but having said that, I believe that getting to know your favorite HDR composition tool - and using it - is going to help you in the long run to deal with these situations in a methodical, productive and more efficient manner.

You have seen above how, in barely a couple of clicks, I was able to produce a perfectly acceptable HDR composition with PixInsight - a far cry from layering two images, selecting an area or painting a mask, blurring it, feathering it, blending it, readjusting the histograms, perhaps doing a touch-up here and there, etc.

Now... If you decide to use HDR composition tools to build HDR compositions (sounds logical, doesn't it?), great! If not, that's fine too - but should you ever find yourself giving advice to a novice, do your part by not simply perpetuating the "old tricks" when there are sophisticated tools for the job. At the very least, let newcomers to this discipline know that nowadays there are tools designed for this task that not only unleash the imager's creativity in more efficient and productive ways, but often do so while producing better results.

Of course, if you do that by pointing them to this article, even better :-)

Sleepy astrophotography

Posted: January 2nd, 2011

Every now and then we see new images of M42... It's such an amazing - and difficult to image - nebula!

And more often than not, I've noticed that the famous dynamic range problem is solved by taking the short exposure that retains some detail in the trapezium area and blending it, with a painted mask, into the longer exposure that reveals everything around a burned and saturated trapezium.

And I wonder... Why are they still doing it like that?

A while ago, when there weren't many (any?) HDR combination tools available, the trick of masking the short exposures of the trapezium and blending it with the longer exposure was slick, and it worked well.

Today, however, there are plenty of HDR tools out there. Photoshop itself comes with one built in, there are plenty of plug-ins and standalone apps, and some of our favorite astroimage processing apps, like PixInsight and others, also include them. And they do a pretty good job at resolving this well-known dynamic range problem.

Daylight photographers use HDR tools all the time. Even casual point-and-shoot photographers do - sometimes horribly but sometimes in really amazing ways. Why are they using them but we're not? I see astrophotography as one of the most complicated areas in photography, yet many astrophotographers still resort to the "old blending trick" just because "it works", instead of using these "new" techniques that now are available and that, I may add, are extremely easy to use.

Are we falling so far behind?? Have we fallen asleep? In our field we really don't have many chances to apply these techniques, because we seldom run into such strong dynamic range issues. Really, we have M42 and just a few more objects that are ideal targets for this. But for that very reason we should be eager to use these tools on the few chances we have, and show "them" (daylight photographers) that we know how to deal with extreme high dynamic range problems "properly".

Well, of course, this is not about showing anything to anyone. That's just a wake-up expression if you will.

In a way I think this is more than just a debate between using an HDR tool and resorting to copy/paste/blend. We shouldn't think everything has already been invented when it comes to astroimage processing, or that the only way this hobby can advance is by means of the optics and especially CCD technology. Image processing can evolve as well, but if we're this slow in adopting something as simple as an HDR tool (as I said earlier, it really doesn't take a lot of skill - the tool does most of the job), then it won't.

I won't tell experienced imagers that they need to change their ways. They have set the ground, and pioneered in very amazing ways. Whether they choose to adopt using "new" tools or different ways to process their images is a personal decision that must be respected. But at the very least, younger imagers (whether in age or simply new to the hobby) should look around, see the tools they now have at their disposal, and also try to be creative, just like the previous imagers were, back not too long ago with the tools they then had. Let's not just cook the recipes that have been written, cooked and eaten a hundred times. Let's write our own! Then someday, others too will pick up from where we left and continue pushing this discipline even further. That's exciting! Anything else is probably just dull and repetitive, and... what is fun about that?

Orion, head to toe - portrait and landscape posters available

Posted: December 20th, 2010

Due to popular demand, you can now purchase two different versions of the "Orion, Head to Toes" poster/print. One in landscape and one in portrait orientation!

To preview/purchase the landscape version, click here or on the landscape image above.
To preview/purchase the portrait version, click here or on the portrait image above.

NOTE: When you land on the preview/purchase page, do NOT panic when you see the $137 price tag!
Notice that this price is for the 78x51 inches poster (which is HUGE) and that smaller sizes cost a lot less - just the next smaller size at 52x34 inches (still pretty big) is already $100 LESS, at $37, and it only gets cheaper.

M78 versus LDN 1622

Posted: December 6th, 2010

From the APOD: Bright stars, clouds of dust and glowing nebulae decorate this cosmic scene, a skyscape just north of Orion's belt. Close to the plane of our Milky Way Galaxy, the wide field view spans about 5.5 degrees. Striking bluish M78, a reflection nebula, is at the left. M78's tint is due to dust preferentially reflecting the blue light of hot, young stars. In colorful contrast, the red sash of glowing hydrogen gas sweeping through the center is part of the region's faint but extensive emission nebula known as Barnard's Loop. At right, a dark dust cloud forms a prominent silhouette cataloged as LDN 1622. While M78 and the complex Barnard's Loop are some 1,500 light-years away, LDN 1622 is likely to be much closer, only about 500 light-years distant from our fair planet Earth.

Personal notes about the image will be added shortly...

What is astrophotography...

Posted: November 20th, 2010

When I see people lecturing about little tidbits, or simply voicing their opinion about this or that, regarding what astrophotography is and what is not, often times I feel as if they forgot (or simply do not know), that astrophotography is a lot more than taking pictures of celestial objects, and certainly more than a set of rules, regulations or ethics that one must or must not follow. At least for me, it is. I've said before that in astrophotography there are as many schools of thought as there are astrophotographers. Leaving aside all those regulatory considerations, this is also what astrophotography is for me today - and hopefully for a long time...

Astrophotography for me is...

...knowing weeks ahead when the Moon won't be up and bright during the wee hours of the night.
...checking the weather days ahead during the 10-14 days of the month when there's no Moon.
...checking it (the weather) 3-4 times a day in the morning when I'm hoping to go out that evening.
...chatting with others online about what sites they (and I) will be going tonight, tomorrow, or the day after.
...the excitement that builds up as I start to load all my equipment in my car any day that I'm going out.
...the drive to a dark site, whether it takes me 50 or 300 miles to go there.
...arriving at the dark site, getting out of the car and stretching after the long drive. Yes, astrophotography is also that, for me.
...saying hello to other friends, when the night will be spent in good company, or just thinking "ok, another night alone" when I know nobody else will be coming.
...starting to set up all the gear, paying attention to all details.
...turning everything on - we are about to start!
...looking up. Enjoying the night sky in all its glory, feeling small, and getting ready to steal a bit of the Universe and take it home.
...sometimes (fortunately not too many), waiting for some high clouds to go away.
...slewing the scope to the target, framing, focusing, finding a guide star.
...and of course, starting to capture data.
...talking "shop" with other friends when there's company, during those long nights.
...sometimes, stealing a view here and there from someone doing visual, while my equipment is capturing photons.
...checking that first frame, and the one after, and maybe all of them, as they come.
...making adjustments during a session, refocusing, readjusting guiding... So many things, and so many can go wrong!
...maybe trying to take a short nap inside the car, especially if I'm very tired and I know I should rest a bit in order to get back home safely once I'm done.
...drinking a lot of coffee during a session. Or maybe hot cocoa during the winter nights.
...taking my flats at the end of the session, when all I really want at that time is to pack and go home.
...the tired drive back home, often around 4-5am.
...once at home, the next day, or a few days later, starting to calibrate, register and stack all the data.
...taking that first look at the data once it's been calibrated.
...thinking about the processing strategies to follow, in order to make the best out of the data.
...the entire "image processing" ordeal.
...hitting SAVE (and "Save as JPEG") on that final image.
...sharing the image with other friends, listening to their feedback, appreciating their comments.
...doing the same with images from other friends.

All of that and a lot more is what astrophotography is for me today. Of course, I'm not implying that anything less (or more) than that doesn't count as astrophotography. This was my story, and everyone has theirs. Mine is just one more way to live astrophotography, not just do it.

Polaris and the North Celestial Pole

Posted: November 10th, 2010

Click here for a larger version

Here's a 2x2 mosaic wide field of the North Celestial Pole, featuring one of the best friends of astrophotographers in the Northern Hemisphere: Polaris. In fact, for us nomadic imagers, Polaris is not only our friend; at the beginning of each session we get on our knees, and what may seem like an imager doing polar alignment is in fact us PRAYING to the Northern Star that the session goes well!

The image also features a copious amount of galactic cirrus (some of it displaying some very cool structures), one of the oldest known open clusters (NGC 188, at the bottom-middle), and Delta UMi (middle right, the second star in Ursa Minor's tail).

If you'd like to see where the North Celestial Pole actually is, you can see it here:

The data was captured over the course of two nights next to the DARC Observatory under 21.3 mag skies (measured at the zenith), average transparency and bad seeing, and the processing was roughly 75% PixInsight and 25% CS5. DARC is around 120 miles from my home, so that makes this a 480-mile image ;-) Not a lot of data (1h of luminance and 18m per color filter for each frame), as I started the project when the Moon was already getting big and setting late.

The image is also a testament to how good the polar scope of the EM400 mount is, as that's the only method I used on both nights to polar align (no drift alignment, etc). As many of you know, imaging near the pole requires a good polar alignment, and of course, this image is not just near the pole but on the pole itself! The forgiving resolution of the FSQ does help, but still, not bad at all.

As a friend said, in this image "north is not up", "north is IN"! :-)

As always, I identified a number of "I shouldn't have done that" or "I should have done this some other way" moments during the processing, but overall, and considering how seldom this area has been photographed, I think the image does the area some justice as a display of what it looks like, and I'm happy with the results.

Get a poster, t-shirt, mug, mousepad... with this image!

Cassiopeia, The W

Posted: November 6th, 2010

Click here for a larger version

Get a poster, t-shirt, mug, mousepad... with this image!

When I was a kid, the first constellation that caught my attention wasn't Orion or the Big Dipper. It was Cassiopeia, the "W", and I would immediately go look for it and recognize it. Cassiopeia wasn't my early call into astronomy, but for a while it was the only reason for me to look up at the night sky from a light-polluted city in southern Spain: "Look, there's Cassiopeia!"... Well, maybe it was some sort of an early call after all...

This past week, during four different outings at three different sites and around 550 more miles in my SUV, I managed to capture this beautiful "starscape".

There's no better way to (hopefully) enjoy this image than at the largest resolution possible. And while the large image linked above is over 5600 pixels wide, it is still 1/2 of its original resolution; I felt I had to reduce its size to avoid producing a JPEG over 12 MB even at 55% quality (which is already quite degraded). The large image linked above weighs almost 6 MB (that's at 60% quality), so if you have a slow connection, be aware of that.

It's not a picture of some gorgeous and prominent celestial structures such as nebulae, galaxies, etc. but it's a very special image for me. I hope you enjoy it!

It may seem like a simple image to capture and process, but processing was a bit challenging indeed. First, it's a 3x2 mosaic, so all the challenges associated with mosaics apply here - resolved with varying degrees of success. Also, getting the subtle - but real - changes in background illumination took some work. Except for the darker areas, which are more prominent in part because of the "lack" of stars, you'll notice that areas with a brighter background don't really have more or fewer stars than areas with a slightly darker background, and pulling out these background illumination differences with a swarm of stars in front can be tricky.

I find it rather interesting to surf around the image looking for star clusters, and of course, there are plenty of them. Some people may feel that the Gamma Cas and Pacman nebulae could have been selectively processed to become more prominent, or perhaps more detailed. It's true that a field swarmed by stars can get in the way of other features, and often our goal is to give way to the dust or gas rather than the stars, but in this image the stars and nothing else are the protagonists. Why let anything else steal the show?

Here's a small version showing the famous W asterism: