Posted: January 19th, 2011
As I anticipated in my previous article, I'm going to explain one easy way to generate an HDR composition with the HDRComposition tool in PixInsight.
The data
The data is the same as in my previous article, plus some color data to make it "pretty":
- Luminance: 6x5 minutes + 6x30 seconds (33 minutes)
- RGB: 3x3 minutes each channel, binned 2x2 (27 minutes)
The two luminance sets are where we'll be doing the HDR composition. As I also mentioned in my last article, this is clearly NOT high quality data. I only spent about one hour capturing all the data (both luminance sets and all the RGB data) in the middle of one imaging session, just for the purpose of writing these articles.
Preparing the luminance images
Of course, before we can integrate the two luminance images, all the subframes for each image need to be calibrated, registered and stacked; once we have the two master luminance images, we should remove gradients and register them so they align nicely. The calibration/stacking process can be done with the ImageCalibration and ImageIntegration modules in PixInsight, the registration can easily be done with the StarAlignment tool, and the gradient removal (don't forget to crop "bad" edges first, due to dithering or misalignment) with the DBE tool.
Doing the HDR composition
Now that we have our two master luminance images nicely aligned, let's get to the bottom of it. HDRComposition works really well with linear images. In fact, if you feed it linear images, it will also return a linear image - a very useful feature, as you can create the HDR composition and then start
processing the image as if the already composed HDR image is what came out of your calibration steps. The first step, then, once we have our set of images with different exposures nicely registered (just two in this example), is to add them to the list of Input Images.
With that done, we simply apply (click on the round blue sphere), and we're done creating the HDR composition. Well, not exactly, but almost. Considering all we've done is to open the HDRComposition tool, feed it the files and click "Apply", that's pretty amazing!
You could tweak the parameters but, really, the only things to adjust would be the parameters that define the mask creation (such as the binarizing threshold), and as we shall see, the default values already work pretty well.
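For intuition, here's a toy numpy sketch of what a mask-based linear HDR combine does conceptually: detect near-saturated pixels in the long exposure, build a smoothed mask from them, estimate the exposure ratio from pixels that are well exposed in both frames, and fill the core with rescaled short-exposure data. The function names and parameters are hypothetical illustrations, not PixInsight's actual algorithm:

```python
import numpy as np

def box_blur(a, size):
    """Crude separable box blur, standing in for proper mask smoothing."""
    k = np.ones(size) / size
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

def hdr_combine(long_img, short_img, threshold=0.8, smooth=7):
    """Toy linear HDR combine of two registered, linear frames in [0, 1].

    Replaces near-saturated pixels of the long exposure with data from
    the short exposure, rescaled to the long exposure's flux scale."""
    # Binarize: 1 where the long exposure is (nearly) saturated
    mask = box_blur((long_img >= threshold).astype(float), smooth)
    # Estimate the exposure ratio from pixels valid in both frames
    valid = (long_img > 0.1) & (long_img < threshold) & (short_img > 0)
    scale = np.median(long_img[valid] / short_img[valid])
    # Blend: the scaled short exposure fills the saturated core
    return (1.0 - mask) * long_img + mask * short_img * scale
```

Because the short exposure is rescaled into the long exposure's flux scale, the result stays linear, which is exactly why an HDR-composed image can be processed as if it came straight out of calibration.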
Of course, since we've integrated two linear images, our resulting image is also linear, which can be very handy if we'd like to apply processes that work much better on linear data, such as deconvolution. Being still linear, though, the image appears really dark on our screen.
Now, with such marginal data, the SNR of this image doesn't allow for a successful deconvolution. We can examine the noise in PixInsight in different ways, but in this case a simple histogram stretch (or the ScreenTransferFunction) is already good enough to "see" it... Therefore, we will skip the deconvolution and just do a basic histogram stretch, so we can see what's in there:
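For reference, the basic histogram stretch (and the ScreenTransferFunction's automatic stretch) is built around a midtones transfer function: it maps black to black, white to white, and the chosen midtones balance m to 0.5, lifting faint signal without clipping. A minimal sketch, assuming data normalized to [0, 1]:

```python
import numpy as np

def midtones_stretch(x, m=0.05):
    """Midtones transfer function: maps 0 to 0, 1 to 1, and m to 0.5.

    Small values of m produce a strong nonlinear stretch that reveals
    faint nebulosity while preserving the black and white points."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)
```

A value like m = 0.05 is a reasonable first guess for a dark, linear deep-sky image; in practice you would read it off the histogram.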
Horror! The core of M42 is, again, saturated!! Didn't we just do the HDRComposition to avoid this very problem??
Not so fast... The HDRComposition tool has actually created a 64-bit floating point image (we can change that, but it's the default and, as you should remember, we used the default values). That's a dynamic range so huge - we're talking about 2^64, or roughly 18,400,000,000,000,000,000 possible discrete values! - that it's almost impossible to comprehend (some might argue it's so huge it's unnecessary), and definitely quite hard to represent on a monitor that likely cannot display more than 8 bits per channel (if this last sentence confuses you, please do a search on the net to find out how monitors handle color depth - discussing that here is clearly out of the scope of this article). So the data is actually there; it's just that the dynamic range of the image is defined over such a large range of values that our screen cannot tell the difference between values that are "relatively close" - that is, unless we work on it a bit more.
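A quick back-of-the-envelope check of those numbers (2^64 counts the bit patterns in a 64-bit sample; a float format doesn't use every pattern as a distinct magnitude, but the order of magnitude stands):

```python
levels_64 = 2 ** 64   # discrete bit patterns in a 64-bit sample
levels_8 = 2 ** 8     # levels per channel on a typical 8-bit display
print(levels_64)              # 18446744073709551616
print(levels_64 // levels_8)  # distinct values sharing one display level
```

That ratio is why nearby HDR values collapse onto the same on-screen level until we stretch or compress the data.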
The HDRWT tool (HDR Wavelets Transform) comes to the rescue! What the HDRWT tool does is apply dynamic range compression in a multiscale fashion, applying it only to structures at selectable scales while leaving the data at other scales untouched.
So, again using default parameters - this time the HDRWT tool's (it just so happens that they work really well for this image) - we apply it to the image. Now the little details in the core of M42 finally become visible (don't worry if it's hard to see in the screenshot below; we'll zoom in later):
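Conceptually, that multiscale compression can be sketched like this: decompose the image into detail layers of increasing scale plus a large-scale residual, compress only the residual (the broad brightness ramp), and reassemble. This toy numpy version uses box blurs instead of proper wavelet scaling functions, and all names are illustrative, not HDRWT's implementation:

```python
import numpy as np

def blur(a, size):
    """Separable box blur used to build the multiscale decomposition."""
    k = np.ones(size) / size
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

def hdr_multiscale(img, n_layers=4, compress=0.5):
    """Toy multiscale dynamic-range compression in the spirit of HDRWT.

    Splits the image into detail layers plus a large-scale residual,
    compresses only the residual, and reassembles. Small-scale detail
    (stars, the Trapezium) passes through untouched."""
    residual, details = img.astype(float), []
    for i in range(n_layers):
        smoothed = blur(residual, 2 * (2 ** i) + 1)
        details.append(residual - smoothed)  # detail at this scale
        residual = smoothed                  # what remains is larger-scale
    residual = residual ** compress          # compress the brightness ramp
    return residual + sum(details)
```

With compress=1.0 the decomposition reassembles the original exactly, which is the sanity check that only the residual is being altered.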
This actually concludes what the HDR composition proper would be. Yup, we're done with that. There's really nothing more to it.
Notice how the actual HDR composition was made in three incredibly simple steps:
- Selecting the two images (or as many as we have) we want to combine and applying the default parameters of the HDRComposition.
- Doing a basic histogram stretch.
- Applying the default parameters of the HDRWT tool.
If we were to compare this with the myriad of possible scenarios one would need to preview if we were using HDR-specific programs such as Photomatix, or the rather unnerving tweaking we might be doing if we were using the "Merge to HDR" tool in Photoshop, or if you were to use the "old trick" of actually doing things manually, having to layer up the two (or three or four) images, making selections, feathering them (or painting a blurred mask), fixing/touching-up edge transitions between the frames, setting the blending option and readjusting the histogram (levels) to make things match nicely... you can see how much we have simplified the process, yet obtained a pretty good result!
Why would you want to do it any other way? ;-)
Taking a closer look
Once we've reached this stage, we can try to improve the results even more, depending on our particular goals. We're going to do this with a new histogram stretch, then by applying the HDRWT tool once again. Here's our image after the histogram stretch:
Now, let's zoom in on the core of M42 to better see what's going on (of course, if you were doing this processing session, you could zoom in and out at any time). Here we can see that the core is fairly well resolved. Some people like it this way: bright "as it should be", they'd say. And you can't argue with that! (NOTE: there's some posterization in the screenshot below - the detail in the core actually showed up nicely on my screen.)
So yeah... We could be very happy with that, but let's go for an even more dramatic look by compressing the dynamic range once again, multi-scale style, with the HDRWT tool. We could try selecting a smaller number of layers for an even more dramatic look, but the default parameters continue to produce a very nice result, and since we want to keep the processing simple, that's what we'll do, and use the default parameters again. This is what we get:
You may or may not like this result more than the previous one. Again, personal style, preferences and goals are what dictate our steps at this point, now that we have resolved the dynamic range problem.
The noise!
This is a good time to apply some noise reduction. We could also have done it earlier, before applying the last HDRWT. A quick noise analysis tells us that the noise is strongest at small and very small scales, so we apply the ACDNR tool twice: a first pass attacking the very small scale noise (StdDev value of 1.0), then a second pass to go for the rest, say with a StdDev value of 3.0, and then, why not, we readjust the histogram once again if we like.
Adding color
Although the exercise of solving the HDR composition was already covered in the very first section, since we've gone this far, let's just go and complete the image, adding some pretty colors to it.
For this purpose, I also acquired some very marginal color data, but it should suffice for this example. The data is just 3 subexposures of 3 minutes each for each color channel (R, G and B), all binned 2x2.
I'm going to skip the step-by-step details in preparing the RGB image, but just for the record, this is what was done:
After all this was done, we end up with the gray-scale image we have been working on so far, and a nice color image. We're now ready to do the LRGB integration:
Although PixInsight offers a nice tool to adjust the lightness prior to integrating it with color data (LinearFit), in this case the difference between the two images is rather large, so I opted to skip the LinearFit step and manually adjust the Lightness parameter in the LRGBCombination tool. With that done and the LRGB combined, I added a bit of gradual saturation with the ColorSaturation tool:
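The idea behind an LRGB combination can be sketched in a few lines of numpy: keep the color ratios from the RGB data but impose the brightness of the processed luminance. This is a simplification - LRGBCombination works in a proper color space and exposes lightness and saturation controls - and the function name and parameters here are illustrative:

```python
import numpy as np

def lrgb_combine(L, rgb, eps=1e-6):
    """Toy LRGB combination: rescale the color data so its brightness
    matches the processed grayscale image L, keeping the RGB ratios
    (i.e. the color) intact. rgb has shape (h, w, 3); L has shape (h, w)."""
    lum = rgb.mean(axis=2)            # crude luminance estimate of the RGB
    scale = L / np.maximum(lum, eps)  # per-pixel brightness ratio
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

The design point is that the deep, well-processed luminance carries all the detail, while the binned, shallow RGB only needs to supply believable color ratios.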
We're almost done. The image looks very nice - especially considering how marginal the data is - but my personal taste tells me it could use a bit more contrast. For that, I use the DarkStructureEnhance script along with another histogram adjustment, and with that done, I call it a day. Here's the image, reduced in size so that it fits on this page:
And here's a closeup of the core of M42 at the original scale:
As stated earlier, this is rather marginal data, so don't expect Hubble quality here! And of course, the resolution isn't astonishing either, because I used an FSQ-106EDX telescope with a focal reducer (385mm focal length) and an STL11000, which combined give an approximate resolution of 4.8 arcsec/pixel. Then again, the purpose of this article is not to produce a great image, but to highlight how easy it is to deal with this particular high dynamic range problem with the HDRComposition tool in PixInsight.
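That plate scale follows from the usual formula, assuming the STL-11000's 9 µm pixels and the reduced 385 mm focal length:

```python
# Plate scale: arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm
pixel_um, focal_mm = 9.0, 385.0   # STL-11000 pixel size; reduced FSQ-106EDX
scale = 206.265 * pixel_um / focal_mm
print(round(scale, 1))  # ~4.8 arcsec/pixel, matching the figure in the text
```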
If you've been dealing with this problem by using the "old trick" of layering the images and "uncovering" one over the other via lasso selections or painted masks, I don't know if this brief article will persuade you to try doing it differently next time, but if you decide to stick to the "old ways", I hope you at least remember that, well, there is a much, much easier way ;-)
Hope you liked it!
Posted: January 16th, 2011
A few days ago I wrote an article about my thoughts on HDR compositions for astro images, and why I felt that astrophotographers should take advantage of HDR composition tools when confronting certain dynamic range problems, rather than relying on hand-drawn selective overlays, so I figured a good complement to that article would be a few examples of using these techniques in action.
We can find dynamic range problems in all kinds of images, but we tend to associate them with images that contain areas that "burn" easily. For that reason, in this article I will use the most emblematic object in the sky that comes to mind when one has to solve this particular problem: M42.
To keep things as simple as possible, I will use only two images: one whose exposure time was 5 minutes, and another one with just 10 seconds of exposure. Of course, each image has been constructed from a number of subframes (6 each to be exact) and has previously been calibrated and gradient-corrected. This is not high quality data - to be honest I quickly acquired it during one session just for the purpose of writing this article - but it should be good enough to illustrate these examples. For better results, I would recommend using at least three different subsets - the exposure time would vary depending on your camera, optics and sky conditions, but a good base would be a subset of 10-15 minutes exposures, another subset of 3-5 minutes, and a third set of just 10 to 30 seconds.
Back to our set of two images, by doing a basic non-linear stretch to each image, we can reveal what's in each of the images:
As you can see, the 5 minute image contains a lot more information in the outer areas of the nebula, while the 10 second image barely has any information in the same area. Conversely, the core of the M42 nebula appears completely saturated in the 5 minute exposure due to a limitation in the available dynamic range, while the 10 second exposure does show most of the information in the same area.
Since I have shrunk the images in order to make them fit on this page, here's a closeup of the core of M42:
Now I will use these two images and perform an HDR composition using three different packages:
1) Photoshop, if anything because today it is still the most widely used image processing software for astroimages.
2) Photomatix, mainly to see how an application designed especially for HDR composition fares with astronomical images.
3) PixInsight, for being possibly the best software today developed entirely for astroimage processing.
Note that the purpose of this article is not to see which of these packages does a better job, but I will talk about that later.
The latest versions of Photoshop come with a "Merge to HDR" tool. Although I admit that this tool is not my favorite to do an HDR composition for a number of reasons, it is rather easy to use.
One of the most important limitations of Photoshop's HDR tool is that it only accepts 16 bit images, so the first step is to reduce the bit depth of our images from 32 bit float to 16 bits. Of course this is not ideal, but we have no choice.
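The bit-depth reduction itself is straightforward; a numpy sketch, assuming the 32-bit data is normalized to [0, 1]:

```python
import numpy as np

def to_uint16(img32):
    """Reduce a 32-bit float image (normalized to [0, 1]) to 16-bit
    integers. Values are clipped and quantized to 65536 levels, which
    is exactly the precision given up before Merge to HDR can run."""
    return np.round(np.clip(img32, 0.0, 1.0) * 65535.0).astype(np.uint16)
```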
Now I execute the "Merge to HDR" tool, which in CS5 is accessible from the File menu, under the Automate submenu. Usually you will be asked to set the EV (exposure value) manually. You can either accept the default calculated values, or enter your own. Usually an EV spacing of 1 to 6 should suffice.
Once you OK the "Manually Set EV" dialog box, you'll be presented with the "Merge to HDR" tool. Here's where you do most of the work of making sure your image looks the way you want. Make sure the Mode is set to Local Adaptation and adjust the parameters. For a composition where the problem is mainly in the highlights, you will want to compress the highlights (set a small value for the Highlight option), leave the Gamma and Exposure values alone (or make very minor adjustments), and depending on your preferences, compress or leave alone the Shadows value. All remaining values (Details and Edge Glow parameters) can be adjusted to your own liking.
If you're still not quite satisfied with the results, you can adjust the histogram curves (notice the Curve tab). If the merged image just doesn't look right no matter what you do, you may want to go back and use different EV values for each image.
When you're happy with what you see in the preview, hit OK. Here's the resulting image I obtained, without any further processing. As you can see, I did not work the background at all, but that was a personal choice. Notice that the "structural" detail you see in the image is not caused by the HDR composition per se, but by the edge enhancement and sharpening tools conveniently included in the Merge to HDR dialog box. Of course, at this point, you could (should) continue processing the image...
Photomatix is a more versatile tool than Photoshop for HDR compositions - it is, after all, software designed for this task. It is also software I'm not particularly familiar with, and I'm certainly not an expert in using it. One thing is clear: it was not written with astroimage processing in mind, not even astroimage HDR composition. I'm including it here because it's probably one of the most popular HDR composition applications out there, and I do think it's beneficial to see how such programs fare when used with astroimages.
Photomatix seems to accept 32 bit images, although it takes a really unusual amount of time to load them, and in the tests I've run it could never interpret them well, so generally speaking you'd probably be feeding Photomatix 16-bit images only, just like with Photoshop - which is what I had to do in this case, again not ideal. Photomatix does actually create intermediate 32 bit images, which is cool, except for the fact that once you're done, you can only save them as 16 bit images (!!).
Just like with Photoshop, once you select the images you'd like to combine, Photomatix will ask you for the EV value, and again, you just have to make an educated guess. A value of 1 to 3 would work in a case like the one at hand. Do not ask Photomatix to show you the intermediate 32-bit HDR image.
After entering the EV value, Photomatix will offer you a few "processing options". I wouldn't use any of them, except for perhaps the noise reduction. Then, you're ready to adjust the HDR combination parameters...
Photomatix offers a number of presets, each of them with their own set of distinct parameters. My recommendation is to try the Exposure Fusion method first, and only if you don't get results that you like, try the Tone Mapping method. Since it's very easy to preview the different presets, just click away and adjust the parameters each method offers.
A word of caution: Photomatix can do really fancy stuff to your images. Although it does a good job creating a HDR composition, I suggest using it very gently. Keep your eye on the ball - you're not using Photomatix to process your image, but just to combine the different exposures. Get that done, and go fancy later with your usual image processing software. Here's just one of the many possible results that can be obtained with Photomatix with just a few clicks and a few slider adjustments:
The above image appears - to me at least - rather soft, so during post-processing I would probably apply some edge enhancement and other features to push the contrast of the image a bit further.
PixInsight is an astroimage processing application, so it usually lets us work the way we want with our images. To begin with, it allows us to work not only with 32 bit (integer or float) images, but even with 64 bit float images. The HDRComposition tool in PixInsight can also work on linear images, and when doing so, it too will return a linear image, which is ideal for continuing to process the image linearly after the HDR composition. It is, in fact, recommended to use it with linear images. However, for this article I will use the same set of two already stretched images, mainly to use the very same data in all three applications, and later I'll write another article showing how to do the integration of these same images linearly with PixInsight's HDRComposition tool.
The HDRComposition tool in PixInsight is under Process > ImageRegistration. Please note that there's also an old script using the same name under Script > Utilities, but I strongly recommend using the module under ImageRegistration. Besides implementing a better scaling algorithm, the new module is much more robust, accurate, fast, and it does a lot of the thinking for you.
Just as with the previous examples, I will not go into detail about what each of the parameters and options does. Instead, I will simply comment on the adjustments I made for this particular case.
The HDRComposition tool in PixInsight doesn't ask us for exposure values. The tool itself does all the calculations needed to determine the weights.
Leaving the highlight/lowlight limit parameters with their default values, I may adjust the binarizing threshold to an amount that seems to cover the overexposed areas well (I used the 0.80 default value this time), and increase the mask smoothness (15) and maybe the mask growth (not this time), to generate smoother transitions in the final composition. Leaving the "Generate a 64-bit HDR image" option activated is nice, although you'd probably also obtain good results by producing a 32-bit image. In the end, as you see, the only parameter I've adjusted is the mask smoothness, and even the default value would probably yield good results. Bottom line: once you've added the images that will be combined, doing a HDR composition in PixInsight can often be a one-click operation.
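To make those three mask parameters concrete, here's a toy numpy sketch of the kind of mask they describe: binarize the image at the threshold, grow (dilate) the mask a few pixels, then smooth it so the two exposures blend without visible seams. The parameter names mirror the dialog, but the code is an illustration, not the tool's actual implementation:

```python
import numpy as np

def hdr_mask(img, threshold=0.80, smoothness=15, growth=0):
    """Toy HDR blending mask: binarize, grow, then smooth.

    threshold  - pixels at or above this value count as overexposed
    growth     - number of one-pixel dilation passes
    smoothness - size of the box smoothing applied to the binary mask"""
    mask = (img >= threshold).astype(float)
    for _ in range(growth):  # dilation via a 4-neighborhood max filter
        p = np.pad(mask, 1, mode="edge")
        mask = np.maximum.reduce([
            p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
            p[1:-1, :-2], p[1:-1, 2:],
        ])
    k = np.ones(smoothness) / smoothness  # separable box smoothing
    mask = np.apply_along_axis(np.convolve, 0, mask, k, mode="same")
    mask = np.apply_along_axis(np.convolve, 1, mask, k, mode="same")
    return np.clip(mask, 0.0, 1.0)
```

The smoothing is what produces the seamless transition: instead of a hard cut between exposures, pixels near the saturation boundary get a weighted blend of both.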
Right after the HDR combination, which produced the image in the picture above, I would usually run one HDRWT pass (HDR Wavelets Transform) to enhance the local contrast of structures in a multiscale fashion. The old script I mentioned at the beginning included this option in the HDRComposition dialog box, but it was dropped in the newer module and needs to be run separately. Needless to say, if you combined linear images, you should first do a non-linear histogram adjustment. Here I also used the default values - ok, now it's a two click operation :-) This is the "final" result I obtained from running HDRComposition and HDRWT:
As in the other two examples, you should probably continue processing the data to craft the final rendition of the image.
First of all, as I said earlier, I would like to make clear that this article is NOT a comparison between the HDR tools of these three applications. The results that can be achieved with either tool can be very different by simply adjusting one or two parameters. Your experience with each package, and even your personal preferences will play a role that will determine the effectiveness and quality of the results. Also remember, the "final" images I present here are not really final. All you see here is what comes out of the HDR composition without any further processing, and as I said at the end of each example, usually you would perform further processing to the image before it becomes really final.
Now... Those who know me know I don't have a Pixel Police attitude when it comes to personal processing preferences, and in a way, I understand why some imagers continue to rely on the hand-drawn lasso tool or mask-painted selective overlays even though they know about these and other HDR combination tools. For that reason, the object of this article is simply to show that doing an HDR combination using tools designed for the task produces excellent results and need not be intimidating - quite the opposite: these tools are very easy to use, and as I hope I have shown, in some cases with just a few clicks you can produce even better results than the old manual approaches.
I personally don't see anything wrong if you choose to continue using "old tricks" techniques, but having said that, I believe that getting to know your favorite HDR composition tool - and using it - is going to help you in the long run to deal with these situations in a methodical, productive and more efficient manner.
You have seen above how, in barely a couple of clicks, I was able to produce a perfectly acceptable HDR composition with PixInsight - a far cry from layering two images, selecting an area or painting a mask, blurring it, feathering it, blending it, readjusting the histograms, perhaps doing a touch-up here and there, etc.
Now... If you decide to use HDR composition tools to build HDR compositions (sounds logical, doesn't it?) then great! If not, that's fine too, but should you ever find yourself giving advice to a novice, my recommendation would be to do your part and not just perpetuate the "old tricks" when there are sophisticated tools for the job. At the very least, let newcomers to this discipline know that nowadays there are tools designed for this task that not only unleash the imager's creativity in more efficient and productive ways, but often do so while producing better results.
Of course, if you do that by pointing them to this article, even better :-)
Posted: January 2nd, 2011
Every now and then we see new images of M42... It's such an amazing - and difficult to image - nebula!
And more often than not, I've noticed that the famous dynamic range problem is solved by taking the short exposure - which contains some detail in the Trapezium area - and pasting and blending it, with a (painted) mask, over the longer exposure that reveals everything around it but a burned, saturated Trapezium.
And I wonder... Why are they still doing it like that?
A while ago, when there weren't many (any?) HDR combination tools available, the trick of masking the short exposures of the trapezium and blending it with the longer exposure was slick, and it worked well.
Today, however, there are plenty of HDR tools out there. Photoshop itself comes with one built in, there are plenty of plug-ins and standalone apps, and some of our favorite astroimage processing applications, like PixInsight, also come with them. And they do a pretty good job at resolving this well-known dynamic range problem.
Daylight photographers use HDR tools all the time. Even casual point-and-shoot photographers do - sometimes horribly but sometimes in really amazing ways. Why are they using them but we're not? I see astrophotography as one of the most complicated areas in photography, yet many astrophotographers still resort to the "old blending trick" just because "it works", instead of using these "new" techniques that now are available and that, I may add, are extremely easy to use.
Are we so far behind?? Have we fallen asleep? In our field we really don't have many chances to apply these techniques, because we seldom run into such strong dynamic range issues. Really, we have M42 and just a few more objects that are ideal targets for this. But for that very reason we should be eager to use these tools on the few chances we have, and show "them" (daylight photographers) that we know how to deal with extreme high dynamic range problems "properly".
Well, of course, this is not about showing anything to anyone. That's just a wake-up expression if you will.
In a way I think this is more than just a debate between using an HDR tool or resorting to copy/paste/blend. We shouldn't think everything has already been invented when it comes to astroimage processing, or that the only way this hobby can advance is by means of the optics and especially CCD technology. Image processing can evolve as well, but if we're so slow in adopting something as simple as an HDR tool (as I said earlier, it really doesn't take a lot of skill to use one; the tool does most of the job), then it won't.
I won't tell experienced imagers that they need to change their ways. They have laid the groundwork, and pioneered in amazing ways. Whether they choose to adopt "new" tools or different ways to process their images is a personal decision that must be respected. But at the very least, younger imagers (whether in age or simply new to the hobby) should look around, see the tools they now have at their disposal, and also try to be creative, just like the previous imagers were, back not too long ago, with the tools they then had. Let's not just cook the recipes that have been written, cooked and eaten a hundred times. Let's write our own! Then someday, others will pick up from where we left off and continue pushing this discipline even further. That's exciting! Anything else is probably just dull and repetitive, and... what's fun about that?