
Star size reduction via Morphological Transformations

Posted: September 8th, 2011

Intro

Every once in a while, when we are processing our data in order to craft an image aimed at communicating or displaying something specific, we run into a "problem": the stars in the field are so conspicuous that they either distract our attention from the structures behind them, or simply don't allow us to display those structures clearly.

When that happens, one of the solutions is what in simple terms is referred to as "star size reduction".

Reducing the size of the stars in our images may sound a bit dramatic. Some people have even declared that such procedures are a sure way to produce fake images.

The truth is that, if we are trying to produce an image of aesthetic and/or documentary value, as long as we apply this star size reduction homogeneously and follow a well-established criterion - that is, if we "dim" all the stars that share certain characteristics, and apply the reduction uniformly to all of them - what we are doing is perfectly acceptable. And while these debates often ignite endless, repetitive discussions, my aim here is not to justify these methods but to show you one way to apply them. For those of us who find these methods perfectly acceptable as a means to attain the goals we have set beforehand, there is nothing fake about applying star size reduction techniques in post-processing, and they can sometimes enhance our images while preserving, and sometimes even increasing, their documentary value.

Strategy

The choice we need to make is whether star size reduction is what we want for any given image, in order to achieve our goals, and if so, which type of stars needs to be dimmed down: the ones that appear very large in our image, the very dim ones, mid-sized stars...

Usually, very large stars don't need to be reduced in size. At most, they create a large glow around them, and if our goal is to show what's behind that glow, other techniques such as dynamic range compression may be more suitable for the task.

Mid-sized stars may be a target, although most commonly, when "stars get in the way" these are small to tiny stars in fields packed with thousands of them.

This tutorial will show you one way to reduce the size of small stars. Unfortunately, I started this exercise with an image that by itself really didn't need any star size reduction. Although this may sound counterintuitive and may not show the benefits of these techniques very clearly, the concepts utilized are perfectly applicable and the image serves the purpose of showing how it can be done. Just for the sake of argument, at the end of the tutorial I will also present a before/after example of an image that does benefit from star size reduction techniques in order to achieve the goal of not letting the stars block what's behind them.

The Data

The tutorial is based on a set of 4 exposures of 30 minutes each of the Andromeda galaxy, at -10C, with an SBIG STL11k camera and a Takahashi FSQ106EDX telescope. The data was captured at the DARC Observatory in California on August 28th, 2011. We will be processing this data with PixInsight v1.7.

1 - Building the Star Mask

The very first step in star size reduction via Morphological Transformations (MT for short) is to build a proper star mask. This is to prevent the MT process from being applied to non-stellar structures. Building the mask is crucial, as it will determine what stars - and in this particular example, also what areas of the stars - will be affected.

Here's a screen shot of part of the M31 image we're going to be working with:



As mentioned, this image does not really call for star size reduction, but it nonetheless presents an interesting situation. We're going to build our star mask with PixInsight's StarMask tool, and the brightness of the core of the galaxy and its surrounding areas may cause a lack of star detection around those areas. Further, we're attacking mid-sized to small stars in the field, but we do not want the young, blue stars that sparkle in the disk of the galaxy to be reduced at all - yet such structures could easily be mistaken for what visually appear as "small, tiny stars".

To solve the first problem, we need to create a duplicate of our image, and "dim" the bright areas of the galaxy without dimming the stars. This can be achieved in different ways, and in this case I have chosen to use the HDRWT tool in PixInsight, which effectively applies a dynamic range compression. Here's the result of applying a rather aggressive HDRWT to the duplicate image:



Now, the above image is better prepared to produce a suitable star mask. Depending on the case, one can apply an even more aggressive HDRWT by either increasing the number of iterations or even reapplying it several times.
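For readers who like to see the idea in code, here is a minimal sketch of the kind of dynamic range compression that HDRWT performs, assuming a 2-D floating-point image normalized to [0, 1]. It is only a conceptual stand-in - HDRWT operates on wavelet layers and offers many more controls - and the function name and parameters (sigma, strength) are illustrative, not PixInsight's:

import numpy as np
from scipy.ndimage import gaussian_filter

def compress_large_scales(img, sigma=64, strength=0.7):
    # Split the image into a large-scale base and the remaining detail.
    base = gaussian_filter(img, sigma)        # structures larger than ~sigma pixels
    detail = img - base                       # everything smaller (stars, small knots)
    # Compress only the base with a gamma-like curve, then restore the detail.
    compressed = base ** (1.0 - strength)
    out = compressed + detail
    # Rescale back into [0, 1] so the result stays a well-behaved image.
    return np.clip((out - out.min()) / (out.max() - out.min() + 1e-12), 0.0, 1.0)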

It's time to adjust all parameters in the StarMask tool to produce a star mask the way we want it in all aspects. In this case I've chosen the following parameters:



First, I've assigned a value of 0.15 to the Threshold parameter. The threshold parameter is meant to separate noise from valid structures, and because a higher value discriminates against smaller structures, raising the 0.1 default a bit may help us not only avoid noise, but also exclude the tiniest stars from our mask.

The Scale and Small parameters help us define the type of stars we're after: small to mid-sized stars, but not the very tiny ones. The Growth parameter - which determines how much to enlarge the masking area - is often quite useful, but because we've checked the Contours option in order to obtain a well-defined contour (more on that in a bit), it's better not to grow the masking area. The same goes for the Smoothness parameter: we don't want to smooth the masking areas too much, just enough, so I reduced the default value of 16 all the way down to 5.

Now, the reason I've checked the Contour option is because, whenever possible, I want the mask to leave unprotected only the contour of the stars, which is effectively the area where the "reduction" really takes place. While going for a standard mask is often just as good, I have found that going for the contour exclusively gives me better results overall.

Last, note that the Midtones parameter, which has a default value of 0.5, has been reduced to 0.25. This simply helps the structure detection by "stretching up" the image - equivalent to moving the midtones in the histogram to the left - which usually results in more stars being detected and in slightly thicker masks, since the structures are brighter after pushing the midtones to the left.

NOTE: You should not take these StarMask parameters as a cooking recipe! The StarMask tool uses a multiscale algorithm to isolate significant image structures during the structure detection phase, and that detection is strongly dependent on large-scale features of the whole image. In plain English, this means that for a given set of parameters, the results you obtain with StarMask on one image may be quite different from what you obtain with the very same set of parameters on a different image. So the key is to understand the effects that modifying these parameters can achieve, and to adjust them until the resulting mask is what you were after, or at least close enough.
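To make the idea concrete, here is a minimal, hedged sketch of this kind of star mask in Python. It is not StarMask's multiscale algorithm - just a difference-of-Gaussians detection, a threshold, and a "contours" step built from a dilated-minus-eroded rim - and every parameter value below is illustrative:

import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation, binary_erosion

def rough_star_mask(img, threshold=0.15, smooth_sigma=1.5):
    # Structure detection: small-scale peaks stand out in a difference of Gaussians.
    dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 4.0)
    stars = dog > threshold * dog.max()              # keep only significant structures
    # "Contours": keep just the rim of each detected star (dilated minus eroded).
    rim = binary_dilation(stars) & ~binary_erosion(stars)
    # A little smoothing, playing the role of the Smoothness parameter.
    mask = gaussian_filter(rim.astype(float), smooth_sigma)
    return np.clip(mask / (mask.max() + 1e-12), 0.0, 1.0)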

Once applied, here's the star mask we produced:



Since it's hard to see the details in the above image, here's a close-up at a 1:1 scale:



So we apply the mask, and here's how it looks (3:1 scale) when using a red overlay to show what is being protected (red) and what is not (transparent):



2 - Defining and applying the Morphological Transformation

With the mask in place, we now can invoke and apply the MorphologicalTransformation tool. Here's a screen shot with the image after applying MT, and the MT dialog box displaying the parameters we used (I'll explain them right after the image):



In the MT tool, I first chose the Morphological Selection operator. Most people use the Erosion operator, and it is in fact a perfectly valid option, but I like Morphological Selection because it acts as a blend between erosion and dilation: the Selection parameter defines how much of each you apply (more erosion the closer you are to a value of 0, and more dilation as you get closer to a value of 1). And while a blend of erosion and dilation on stars usually isn't necessary - nor are other operations that combine them, such as opening and closing - I like the smoother results it often generates compared to applying erosion and nothing else.

Last, a round structuring element makes a lot of sense of course, and in this case I chose a 5x5 kernel size because the stars being targeted fit well within that kernel.
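In code, the erosion/dilation blend can be approximated with a rank (percentile) filter over a round footprint: percentile 0 is a local minimum (erosion), percentile 100 a local maximum (dilation), and values in between blend the two. The sketch below is my own approximation of that behaviour, applied through a star mask where 1 means "affected" and 0 means "protected"; it is not PixInsight's implementation, and the names and defaults are illustrative:

import numpy as np
from scipy.ndimage import percentile_filter

def disk_footprint(radius):
    # Round structuring element; radius=2 gives a 5x5 circular kernel.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def morph_selection(img, mask, selection=0.25, radius=2, amount=1.0):
    # selection=0 behaves like erosion, selection=1 like dilation.
    filtered = percentile_filter(img, percentile=selection * 100,
                                 footprint=disk_footprint(radius))
    blended = (1.0 - amount) * img + amount * filtered   # MT's "amount" idea
    # Only unprotected (mask=1) pixels receive the transformation.
    return (1.0 - mask) * img + mask * blended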

Once applied, our stars are now less "in your face" than they were before. As mentioned earlier, this particular image doesn't really benefit from the procedure, but leaving that aside, hopefully you can see the difference.

The results we've obtained can be seen more clearly in this x5 zoomed-in before/after animation:



You can see that the very small stars are not nearly as discernible as they were, the small-to-midsized stars are not only "reduced" in size but also their profile is more round in appearance, and the very tiny, almost invisible stars, are untouched.

3 - Sharpening the results

We may consider that the Morphological Transformation process we've applied has been successful and stop right here. In practice, however, I often like to bring back some of the "life" in the stars that are now dimmer and reduced - without, of course, bringing them back to their previous state, otherwise I might as well not have applied any MT at all!

This can be achieved in several different ways, and one that I use quite often is to define another star mask that protects everything but the very small scales in the image, and then use wavelets to slightly increase the bias at those scales.
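Conceptually, that second step boils down to something like the sketch below: take the finest-scale detail layer, boost it by some bias, and let the mask decide where the boost lands. This is a rough stand-in for the ATWT bias increase described next - sigma, bias and the mask polarity (1 = sharpen, 0 = protect) are my own illustrative choices:

import numpy as np
from scipy.ndimage import gaussian_filter

def masked_small_scale_sharpen(img, mask, bias=0.5, sigma=1.0):
    # Finest-scale detail layer: the original minus a lightly blurred copy.
    scale1 = img - gaussian_filter(img, sigma)
    sharpened = img + bias * scale1           # a positive bias strengthens that layer
    # Apply the sharpened version only where the mask leaves the image unprotected.
    out = (1.0 - mask) * img + mask * sharpened
    return np.clip(out, 0.0, 1.0)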

Building the mask this time is very easy. We simply use the ATrousWaveletTransform tool in PixInsight, deselect all scales except scale 1, increase the bias just a bit, and apply this to the duplicate image to which we applied the HDRWT earlier. Here's a screen shot doing just that:



And here's the mask we just created:



We apply the mask to our image, and then sharpen it only in those tiny "holes" defined by the mask, using the ATWT tool with default values except for an increase in the bias at the smallest scale:



I used a bias of 3 here, which is A LOT! Because the mask is aggressive, the effect is greatly reduced, but you will need to dial in a bias value you're happy with via experimentation. And here's the result:



4 - Before/After animations

The following animation is big, but it shows the image cycling from the original version, to the version after the MT, to the version after the ATWT:



Last, here's an animation that shows a large closeup of a single star as it was originally, and as it finally ended up after the MT and ATWT processes:



While the appearance of our image has changed considerably - ok, picture this effect on an image that was packed with stars, rather than this M31 shot - I think it's safe to say that the data manipulation involved didn't invent any new data around the stars. It simply "dimmed" the stars and, contrary to what some people often think of these methods, did so without fabricating artificial "nebula" data around them.

5 - Conclusions

If you have read all the way until here... Well.. You've read a lot! ;-)

As often happens when I aim at describing each step in detail (rather than just saying "do this, then that, then this, you're done"), this tutorial may seem to describe a rather lengthy process, but in reality it only involves a few simple steps:

  1. Create a star mask suitable for MT, which in this case involved using the HDRWT and StarMask tools
  2. Apply MT to our image to "reduce" our stars
  3. Create a second star mask, this time with the ATWT tool
  4. Apply ATWT to "sharpen" the image a bit.

That's all.

6 - A more suitable example

Here is a quick before/after animation of an image probably better suited for star size reduction:



The above is an image of some very faint dust clouds in Lacerta, and it accumulated 7 hours of exposure under very dark skies (SQMs at 21.7 or above, at the zenith) with a 4" scope at f/3.65.

There is nothing wrong with the before image. It shows a field packed with stars to the point that it's nearly impossible to discern the faint clouds. That's just a reality. It just happens that there are a lot of stars in that field!

However, by applying the very same "star size reduction" technique described in this tutorial, we get an image that allows us to better see the dusty structures that were so elusive before, and so we have a better visualization of the nebula. The before image tells a story, but so does the after image - and both stories are consistent with the underlying reality.

Not only that, after the MT procedure, we can, if we like, continue post-processing the image to even better visualize the faint clouds. The after image not only has aesthetic value, but also documentary value.


HDR Composition with PixInsight

Posted: January 19th, 2011

As I anticipated in my previous article, I'm going to explain one easy way to generate an HDR composition with the HDRComposition tool in PixInsight.

The data

The data is the same as in my previous article, plus some color data to make it "pretty":
  • Luminance: 6x5 minutes + 6x 30 seconds (33 minutes)
  • RGB: 3x3 minutes each channel, binned 2x2 (27 minutes)
The two luminance sets are where we'll be doing the HDR composition. As I also mentioned in my last article, this is clearly NOT high quality data. I only spent about one hour capturing all of it (both luminance sets and all the RGB data) in the middle of one imaging session, just for the purpose of writing these articles.

Preparing the luminance images

Of course, before we can integrate the two luminance images, all the subframes for each image need to be registered/calibrated/stacked, and once we have the two master luminance images, we should remove gradients, and register them so they align nicely. The calibration/stacking process can be done with the ImageCalibration and ImageIntegration modules in PixInsight, the registration can easily be done with the StarAlignment tool, and the gradient removal (don't forget to crop "bad" edges first, due to dithering or misalignment) with the DBE tool.

Doing the HDR composition

Now that we have our two master luminance images nicely aligned, let's get to the bottom of it. HDRComposition works really well with linear images. In fact, if you feed it linear images, it will also return a linear image - a very useful feature, as you can create the HDR composition and then start processing the image as if the already composed HDR image is what came out of your calibration steps. The first step then, once we have our set of images with different exposures nicely registered (just two in this example), is to add them to the list of Input Images:



With that done, we simply apply (click on the round blue sphere), and we're done creating the HDR composition. Well, not exactly, but almost. Considering all we've done is to open the HDRComposition tool, feed it the files and click "Apply", that's pretty amazing!

You could tweak the parameters, but really, the only things to adjust would be the parameters that define the mask creation: threshold, smoothness and growth, and as we shall see, the default values already work pretty well.
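If you are curious about what happens under the hood, here is a hedged sketch of the general idea behind composing two registered linear exposures: estimate the flux ratio from pixels that are well exposed in both frames, then replace the near-saturated pixels of the long exposure with the rescaled short exposure, using a grown and smoothed mask. The threshold/growth/smoothness names loosely mirror the tool's mask parameters, but the details below are illustrative, not HDRComposition's actual algorithm:

import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def hdr_compose(long_exp, short_exp, threshold=0.9, growth=2, smoothness=2.0):
    # Flux ratio estimated from pixels that are unsaturated in the long exposure
    # and clearly above the noise floor in both frames.
    ok = (long_exp > 0.1) & (long_exp < threshold) & (short_exp > 0.0)
    ratio = np.median(long_exp[ok] / short_exp[ok])

    # Mask of near-saturated pixels, grown a little and smoothed for soft edges.
    near_sat = binary_dilation(long_exp >= threshold, iterations=growth)
    blend = np.clip(gaussian_filter(near_sat.astype(float), smoothness), 0.0, 1.0)

    # The result is still linear; bright cores may exceed 1.0 until rescaled.
    return (1.0 - blend) * long_exp + blend * (short_exp * ratio)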

Of course, since we've integrated two linear images, our resulting image is also linear, which can be very handy if we would like to apply processes that work much better on linear images, such as deconvolution. Still, being a linear image, it appears really dark on our screen.



Now, with such marginal data, the SNR of this image doesn't warrant a deconvolution. We can examine the noise with PixInsight in different ways, but in this case a simple histogram stretch (or the ScreenTransferFunction) is already good enough to "see" it... Therefore, we will skip the deconvolution and just do a basic histogram stretch, so we can see what's in there:



Horror! The core of M42 is, again, saturated!! Didn't we just do the HDRComposition to avoid this very problem??

Not so fast... The HDRComposition tool has actually created a 64 bit float image (we can change that, but it's the default, and as you should remember, we used the default values). That's a dynamic range so huge - we're talking about roughly 18,000,000,000,000,000,000 possible discrete values - that it's almost impossible to comprehend (some might argue it's so huge it's unnecessary), and it is certainly hard to represent on a monitor that likely cannot display data at a depth larger than 8 bits (if this last sentence confuses you, please do a search on the net to find out how monitors handle color depth - discussing that here is clearly out of the scope of this article). So the data is actually there; it's just that the dynamic range of the image is defined over such a large range of values that our screen cannot tell the difference between values that are "relatively close" - that is, unless we work on it a bit more.

The HDRWT tool (HDR Wavelets Transform) comes to the rescue! What the HDRWT tool does is apply dynamic range compression in a multi-scale fashion, compressing only specific structures at selectable scales and leaving the data at other scales untouched.

So, again using default parameters - this time the HDRWT tool's (it just so happens that they work really well for this image) - we apply it to the image. Now the little details in the core of M42 finally become visible (don't worry if it's hard to see in the screenshot below, we'll zoom in later):



This actually concludes the HDR composition proper. Yup, we're done with that. There's really nothing more to it.

Notice how the actual HDR composition consisted of three incredibly simple steps:
  • Selecting the two images (or as many as we have) we want to combine and applying the default parameters of the HDRComposition.
  • Doing a basic histogram stretch.
  • Applying the default parameters of the HDRWT tool.
If we compare this with the myriad of possible scenarios one would need to preview in HDR-specific programs such as Photomatix, or with the rather unnerving tweaking we might do in Photoshop's "Merge to HDR" tool, or with the "old trick" of doing things manually - layering up the two (or three or four) images, making selections, feathering them (or painting a blurred mask), fixing and touching up edge transitions between the frames, setting the blending options and readjusting the histogram (levels) to make things match nicely - you can see how much we have simplified the process, yet obtained a pretty good result!

Why would you want to do it any other way? ;-)

Taking a closer look

Once we've reached this stage, we can try to improve the results even further, depending on our particular goals. We're going to do this by applying a new histogram stretch, then applying the HDRWT tool once again. Here's our image after the histogram stretch:



Now, let's zoom in on the core of M42 to better see what's going on (of course, if you were doing this processing session, you could zoom in and out at any time). Here we can see that the core is fairly well resolved. Some people like it this way: bright, "as it should be", they'd say. And you can't argue with that! (NOTE: there's some posterization in the screen-shot below - the detail in the core actually showed up nicely on my screen).



So yeah... We could be very happy with that, but let's go for an even more dramatic look by compressing the dynamic range once again, multi-scale style, with the HDRWT tool. We could select a smaller number of layers to exaggerate the effect, but the default parameters continue to produce a very nice result, and since we want to keep the processing simple, we'll use them again. This is what we get:



You may or may not like this result more than the previous one. Again, personal style, preferences and goals are what dictate our steps at this point, now that we have resolved the dynamic range problem.

The noise!

This is a good time to apply some noise reduction. We could have also done it earlier, before applying the last HDRWT. A quick noise analysis tells us that noise is strong at small and very small scales, so we apply the ACDNR tool in two passes: a first one attacking the very small scale noise (StdDev value of 1.0), then a second one to go after the rest, say with a StdDev value of 3.0, and then, why not, we readjust the histogram once again if we like.




Adding color

Although the exercise of solving the HDR composition was already covered in the very first section, since we've gone this far, let's just go and complete the image, adding some pretty colors to it.

For this purpose, I also acquired some very marginal color data, but it should suffice for this example. The data is just 3 subexposures of 3 minutes each for each color channel (R, G and B), all binned 2x2.

I'm going to skip the step-by-step details of preparing the RGB image. Once that was done, we end up with the gray-scale image we have been working on so far, and a nice color image. We're now ready to do the LRGB integration:



Although PixInsight offers a nice tool (LinearFit) to adjust the lightness prior to integrating it with the color data, in this case the difference between the two images is rather large, so I opted to skip the LinearFit step and manually adjust the Lightness parameter in the LRGBCombination tool. With that done and the LRGB combined, I added a bit of gradual saturation with the ColorSaturation tool:



We're almost done. The image looks very nice - especially considering how marginal the data is - but my personal taste tells me I would rather it have a bit more contrast. To do that, I use the DarkStructureEnhance script, along with another histogram adjustment, and with that done, I call it a day. Here's the image, reduced in size so that it fits on this page:



And here's a closeup of the core of M42 at the original scale:



As stated earlier, this is rather marginal data, so don't expect Hubble quality here! And of course, the resolution isn't astonishing either, because I used an FSQ106EDX telescope with a focal reducer (385mm focal length) and an STL11000, which combined give an approximate resolution of 4.8 arcsec/pixel. Then again, the purpose of this article is not to produce a great image, but to highlight how easy it is to deal with this particular high dynamic range problem with the HDRComposition tool in PixInsight.

If you've been dealing with this problem by using the "old trick" of layering the images and "uncovering" one over the other via lasso selections or painted masks, I don't know if this brief article will persuade you to try doing it differently next time, but if you decide to stick to the "old ways", I hope you at least remember that, well, there is a much, much easier way ;-)

Hope you liked it!


Removing gradients while preserving very faint background details

Posted: May 28th, 2010

Intro

Gradients are often unavoidable, and they get worse the wider your field of view is. Of course, incredibly dark skies are a great help, but we don't always have that luxury.

The strategy you choose to deal with gradients will depend on the image, how severe the gradients are, and your goals. This tutorial shows you how I deal with gradients with images that also contain very faint details in the background that I want to preserve.

Many people attack gradients somewhere in the middle of the processing of an image. While the results obtained by doing this can be ok, my experience is that it's a lot better to deal with the gradients at the very beginning.

There are many reasons why dealing with gradients in the middle of the processing is just not a good idea. Stretching an image with gradients is not desirable because our histogram isn't true to the data we want to process - it includes the gradient data, so we're looking at a histogram that is not a good representation of our data - and why would we want to carry the gradient data with us as we process the image, only to deal with it "later"? Also, color-balancing an image with gradients can be misleading, so ideally we want the gradients corrected before we color-balance our image, something that is best done while the image is still linear - which also means our gradient correction must not break the linearity of our image, and has to be done early in the process. In short, gradients have been added to our image and they're unwanted, so before processing our image we should get rid of them first, and once that's done, proceed as "usual".

The most effective way of removing gradients (other than perhaps super-flats?) is to create a background model defining the gradient and subtract it from our image. We subtract it because gradients are an additive effect. By doing this, we can remove the gradients and be left with an image that is still in linear form, so we can then start processing it just like we would any image - but without gradients!
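As a rough illustration of the idea (not of DBE itself, which builds a far more sophisticated surface), the sketch below fits a low-order 2-D polynomial to the median values of a handful of background samples and subtracts it from the image; the sample list, box size and polynomial degree are all illustrative choices:

import numpy as np

def subtract_gradient(img, sample_xy, box=15, degree=2):
    # sample_xy: (x, y) centres of samples placed on background-only areas.
    h, w = img.shape
    xs, ys, zs = [], [], []
    for x, y in sample_xy:
        patch = img[max(y - box, 0):y + box, max(x - box, 0):x + box]
        xs.append(x / w)
        ys.append(y / h)
        zs.append(np.median(patch))          # robust estimate of the local background
    xs, ys, zs = map(np.asarray, (xs, ys, zs))

    # Least-squares fit of a smooth, low-order polynomial surface to the samples.
    terms = [xs ** i * ys ** j for i in range(degree + 1)
                               for j in range(degree + 1 - i)]
    coef, *_ = np.linalg.lstsq(np.stack(terms, axis=1), zs, rcond=None)

    # Evaluate the model over the whole frame and subtract it (gradients are additive).
    gx, gy = np.meshgrid(np.arange(w) / w, np.arange(h) / h)
    basis = [gx ** i * gy ** j for i in range(degree + 1)
                               for j in range(degree + 1 - i)]
    model = sum(c * b for c, b in zip(coef, basis))
    return img - model + np.median(model)    # keep a pedestal so values stay positive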

The Data

The tutorial is based on a set of luminance I captured for my M10, M12 and galactic cirrus image.

The captured data was of OK quality, but as you will see, it wasn't free of gradients. The master file was generated from 7 subexposures of 15 minutes each at -20C, with an SBIG STL11k camera and a Takahashi FSQ106EDX telescope with the 0.7x reducer. The data was captured at DeepSky Ranch, California, between 1:45am and 3:30am on May 6th, 2010.

The Goal

The goal in this session is to remove the gradients from the luminance while preserving some very, VERY dim galactic cirrus that populates that area.

1 - Preliminary work

We skip the process of calibrating and registering the 7 subframes. This means we already have a "master" luminance file. We open it in PixInsight:

As usual with most astroimages, there isn't much to see.

We want to see what's in there, so we use the "Auto" function from the ScreenTransferFunction tool (STF from now on) to perform a strong "screen stretch", that is, a strong stretch of the image as presented on the screen but without actually modifying any data in the image.
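For reference, PixInsight-style stretches are built on a midtones transfer function (MTF), and an "auto" screen stretch roughly amounts to clipping the shadows a few (normalized) MADs below the median and then choosing the midtones so the background lands near some target value. The sketch below captures that idea; the constants (-2.8 MADs, 0.25 target) are common illustrative choices and STF's exact rules may differ:

import numpy as np

def mtf(x, m):
    # Midtones transfer function: fixes 0 and 1, maps m to 0.5.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_screen_stretch(img, shadow_clip=-2.8, target=0.25):
    med = np.median(img)
    madn = 1.4826 * np.median(np.abs(img - med))       # normalized MAD
    c0 = max(med + shadow_clip * madn, 0.0)            # shadow clipping point
    x = np.clip((img - c0) / (1.0 - c0), 0.0, 1.0)     # rescale above the clip
    m = mtf(max(np.median(x), 1e-6), target)           # midtones that sends the
    return mtf(x, m)                                   # background to ~target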

Now we can see what's really there, and the most noticeable thing (besides a "bad column" that wasn't corrected during calibration) is a gradient that makes our image rather bright on the left and much darker on the right. Our two globular clusters show up nice and clear.

If we pay very close attention, we also notice that there is in fact some galactic cirrus data in the image. We can tell the galactic cirrus from the gradient because the gradient is a uniform and gradual effect across the image, whereas the cirrus shows some structural differences. Yes, they're hard to see, but with some experience (and by looking at the actual image, not at a reduced screenshot) they are quite noticeable. One thing is sure: we won't be able to do anything with that galactic cirrus unless we get rid of the gradient, and fast!

2 - Creating our background model

The first thing I do to create an acceptable background model is to create a duplicate of the image, and apply a non-linear histogram stretch. I do this because it will then be easier to place the samples used to create a background model based on this gradient.

Our gradient is back, indeed. In this histogram stretch we can also better see some of the galactic cirrus data that we do NOT want to remove.

Now we use the DynamicBackgroundExtraction tool (DBE from now on) to construct the background model.

Although I am not going to use the DBE symmetry function, I like to set the symmetry point at a place that seems to make sense to me, all the way to the right of the image in this case.

VERY IMPORTANT: When placing the samples that will later be used to create the background model, I first place just a few of them, perhaps just 5-8 across. The reason is twofold. First, we're trying to model a gradient, and gradients are usually very smooth and gradual. If we were to place many samples, our model would start to lose the smooth and gradual character of the gradient, as I will show you in a minute. The other reason is that we want to preserve the very subtle differences in the galactic cirrus data in the background, and we definitely do not want the samples to pick up these differences.

In the screenshot above you can (barely) see the sample points in cyan blue. Later you'll be able to see them much better, so I'll talk about that then.

The next step is to SAVE this process as a "Process Icon". The reason is, we do not want to apply the DBE to this non-linear image. We have stretched this image to get a better feel of where the samples should go. By saving the process, we can then apply it to our real "good" image later.

Notice in the screenshot above the little icon next to the mouse cursor. We've created this process icon by dragging the "blue triangle" at the bottom-left of the DBE dialog over the workspace.

3 - Applying the background model

We can now close the DBE dialog and the stretched image. After doing that, we double-click on the process icon, and we "magically" apply the parameters we just defined to our "good" image:

Here is where I can better show you where I've placed my samples. Notice I first auto-generated about 6 samples per row (equal spacing vertically) and then placed a few samples manually on the left area, where I noticed a very "short" darkening gradient that our background model wouldn't pick up unless we set a few more samples there. Generally, in order to protect the very faint background structures, we should place the samples across the image even more carefully, but as we shall see, even a more general sample placement can yield good results.

With that done, we're ready to apply the DBE. Notice (if you can read it!) that I have selected "Subtraction" as the correction formula. As I mentioned earlier, gradients are of an additive nature, so in order to correct them we must subtract the background model, rather than divide by it (which is what we'd do if we were dealing with vignetting, for example).

And here's our corrected image after subtracting the background model created by the DBE process:

Not much to see, right? Let's now apply an STF to both our corrected image and the background model. This allows us to see whether the gradient has been successfully corrected, as well as the shape of the background model we've used:

On the left you see the background model. Notice it's a smooth and gradual model, which is exactly what we wanted. If we were looking for complete perfection, we do notice a few areas in the model that don't seem to be "just gradient corrections". In that case we would go back to the DBE tool and readjust the position of our samples, perhaps also readjusting some of the modeling parameters, using as a guide the background model we just created - which "tells" us where it didn't do a good job.

On the right you see the corrected image (with a very strong STF stretch). We do notice the background is not perfectly flat. Did we fail? Nope, that's the signal from the galactic cirrus! Other than that, we can tell the gradient is pretty much gone.

We disable the STF and our image is ready to be processed. Just as when we started but without the gradient:

4 - Verifying faint background signal

We are fairly confident that the uneven background signal we saw in the STF'ed image after applying our background model is not the result of gradients, in part because the differences in background brightness of the "screen-stretched" image do not correspond with the subtle differences we notice in our background model. Still, a reassuring check is to compare a quick non-linear stretch of our image with the same area from the IRAS survey. Here's our image:

And here's the same area from the IRAS survey:

Obviously our image does not offer the same level of detail - in some cases it might, once processed, but certainly not at this point where all we're doing to our image is just a screen stretch - but we can see that the background areas that appear to be darker in our image roughly match the darker areas in the IRAS image. To get an even closer picture, we can apply some multi-scale processing techniques (MS) to our image so we can bring out even more clearly the background signal - for this purpose we have also applied some strong histogram adjustments (HS) over-highlighting bright areas and over-darkening darker background areas:

As I think is clear, our bright background structures match pretty well with the structures from the IRAS image. Of course they do NOT match exactly, which is why, when making that assessment, we must consider a couple of things:
  • We've just collected 7x15 minutes subexposures from a less than perfect sky, so the signal we've captured from the cirrus is not going to be as detailed as if we had imaged the Orion Nebula.

  • The quick multi-scale process we've applied usually blurs the large scale structures - our purpose at this point is to identify whether the bright areas in our background now more-or-less match the bright areas from the survey image, and blurred structures are fine for that assessment. You can tell by looking at the final processed image that, when processed carefully, the background structures don't look exactly like the "quick blur" we just did.

  • The resolution and quality of the image we've obtained from the IRAS survey is also not ideal - areas that look very dark in the image may not be really absent from galactic cirrus for example. I have seen images taken with large telescopes of parts of the sky that display a large and dense amount of galactic cirrus where the images from the IRAS survey barely show any.

And of course, what really matters for the purpose of this tutorial is that the gradients we originally had in the image have no influence on this background signal, that we have been able to remove the gradients using an appropriate workflow, and that in the process we have not destroyed faint background details.

___

NOTE ADDED ON 5/16/2011: Teri Smoot processed WISE IR data of this area and created a video that overlaps her results with the final image used in this tutorial. Although both sets do not match 100%, I think the results are very interesting. You can see the animation here.
___

5 - A note about background samples

When creating a background model, some people tend to define a large number of samples with the idea that the more samples we place, the more accurate the background model will be.

The problem with this approach when removing gradients is that, if our image contains subtle variations in the background NOT caused by gradients, a background modeled after many samples will catch these variations, more so if the variations are not so subtle.

For example, look what happens in this case if we generate and apply a background model that was created from a large number of samples:

At the top-left is our original image with the sample marks (yup, that's a lot of samples!).

At the bottom-left is the background model generated. You can just tell by looking at this background model that it's not the right model to correct a gradient - unless you believe that a gradient could have that shape!

As a result, on the right is a screen stretch of the image after we've applied the background model. Not only are the background details we know this image has pretty much gone, but we can also see some darkened areas - especially around bright stars and even the two clusters - that we can tell are processing artifacts. We've over-corrected our background.

So although there are cases where a large number of samples is justified, be careful when dealing with gradients, and always stretch your background model to make sure that it indeed shows the shape of what you'd expect from a gradient.

6 - Wrap up

As always, this tutorial takes a bit of time to read, but it really only involves four very very simple processing steps:

  • We screen-stretch our image to view the gradient
  • We histogram-stretch a duplicate of the image to place samples at the right places
  • We create and apply the background model
  • We verify that the model generated is appropriate and that our final image is properly corrected

For someone with some experience working with PixInsight, this session could be carried out in just 5 to 10 minutes.

As always, feel free to leave comments, questions, suggestions...

Multi-scale Processing - Revealing very faint stuff

Posted: May 7th, 2010

Intro

The purpose of this tutorial is to show how, using multi-scale techniques, we can bring into our image very, very faint details that would usually go unseen, even in cases where we have already successfully extracted very faint structures - and of course without degrading anything else in our image.

The tutorial uses an already processed version of my IFN Wide Field 2x5 mosaic.

If you would like to see the screen-shots at a bigger resolution, simply click on the corresponding "tiny" screen-shot. A few screen-shots are actually shown in the page at the maximum resolution in which they were captured, though.

The Data

The captured data was of good quality, but there wasn't a lot of it. The master files were generated from 6 subexposures of 15 minutes each, plus 3x3 minutes for each RGB channel, with an SBIG STL11k camera and a Takahashi FSQ106EDX telescope with the 0.7x reducer. The data was captured at Lake San Antonio, California, over four nights - LSA is a dark site that sits right at a "gray" border in the Bortle scale, surrounded by blue and some green/yellow areas. For a light pollution map of Lake San Antonio, click here and look for its location in the bottom-right corner of the image.

The Goal

The goal in this session is to try to reveal very, very faint data in regions of our image that we suspect might contain it, as long as we decide that revealing such data makes sense from both an artistic and a documentary perspective.

1 - Preliminary work

The work detailed in this tutorial was performed over an image almost completely processed:

By observing the above image, we notice that the area above the two arches (top-right) doesn't present any significant IFN structures. We wonder if there's any... For that we could simply stretch the image very aggressively, but because the image has gone through significant processing, we would get a more solid idea of whether there are IFN structures in that area by executing the aggressive stretch over an image in its early stages of processing, preferably in linear form. So we do just that, and this is what we get:

Please ignore the still-uncorrected seams and other "defects", as this is done from an image almost in its very original form.

We can immediately see that, in particular, there's a visible structure coming off near where the two arches join - displaying a "3"-shaped cloud. We therefore do have very faint structures in our image that are not appearing in our processed image.

Now, there is a risk in trying to process an image for very faint stuff after the image has been through a complete or almost complete processing cycle. The data in our processed image may not be nearly as reliable. As far as work-flow goes, this is NOT the best moment to do this. We should have noticed much earlier in the process and attacked this area back then, not now. So this is an afterthought, and it involves risks we need to keep in mind. As long as we are aware of that and we perform the needed verifications at the end, my opinion is that we can proceed.

2 - Breaking the image into large and small scale structures

First, let's remember what we're looking at:

That's a pretty nice image. We have noticed, however, that the area above the two arches in the top-right is rather dark, and we have verified that there are IFN structures there. Our goal then is to see whether we can extract that information without perturbing the already processed data, and whether we can do so in a reliable manner. The data is still in 32-bit float depth, which, even though our monitors can't display such a large dynamic range, should help us in our little quest.

We are going to follow a multi-scale approach, so our first step is to break the image into two (maybe three if we feel it's necessary) different scale structures: large scale structures, and small scale structures.

For that we use the ATrousWaveletTransform tool (ATWT) in PixInsight. We start by generating the image with the large-scale structures. This is accomplished by selecting only the residual layer ("R") in the ATWT tool, over a 4-6 layer dyadic sequence.

The above image may just look like a blurred version of the original. Well, in fact it is a "blurred" version of the original! The difference is that, unlike a simple Gaussian blur, this image has been generated by decomposing the original into a series of scale layers, each of which contains only structures within a given range of characteristic dimensional scales, and we have chosen to retain only the very large structures. A Gaussian blur attenuates high-frequency signal, so it is a low-pass filter. A wavelets operation using a Gaussian function is a series of low-pass filters, and the way we're using it, it will generate similar results, but conceptually it lets us decompose the image into different scales in a much better-defined way.

Let's now build the small-scale structures image. This is easily accomplished with PixInsight's PixelMath. All we do is subtract the image with the large-scale structures from the original image, so what we are left with is an image defining only the structures that were removed when building the large-scale image - in other words, the small-scale structures. See below a screen-shot after this PixelMath operation has been performed, and notice how the image at the bottom only contains the small details from the original image. If this image looks similar to what you usually get when you apply a high-pass filter, you're of course right.

Now... The image defining large-scale structures looks ok, but it still contains some structures that we probably don't want there. For instance, there's clearly some glow from the brighter stars, and we want to minimize that. We want to see if we can reveal the faint dust, not star glow.

So we'll repeat the ATWT process once again, this time taking as the source our image with large scale structures, and create a new image with even larger scale structures.

See below the screen-shot displaying our three images:

From now on I'll refer to each image as follows: SS for the image defining the Small Scale structures (bottom-left in the screen-shot above), LS for the image defining Large Scale structures, and MS for the image defining even larger scale structures (Mega-large structures? :-)
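A minimal sketch of this three-way split, using Gaussian blurs as stand-ins for the ATWT residual layers (the real decomposition uses the a trous wavelet transform, and the sigmas below are illustrative), would look like this - note that SS + LS + MS adds back exactly to the original:

import numpy as np
from scipy.ndimage import gaussian_filter

def split_scales(img, sigma_large=16, sigma_mega=64):
    blur_large = gaussian_filter(img, sigma_large)       # ~ residual of a 4-6 layer ATWT
    blur_mega = gaussian_filter(blur_large, sigma_mega)  # only the very largest scales
    ss = img - blur_large                                # small-scale structures
    ls = blur_large - blur_mega                          # intermediate large scales
    ms = blur_mega                                       # "mega-large" scale structures
    assert np.allclose(ss + ls + ms, img)                # exact decomposition by construction
    return ss, ls, ms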

3 - Processing large scale structures

Now that we've broken our image into three images, each defining structures at a different scale, we can start processing them. We begin by stretching the histogram of the MS image. Not too much, but enough to see if there is "stuff" in our targeted area.

And yes, the area that once was mainly dark, now reveals some structures - still faint, but now visible.

This is actually the epicenter of this processing session - whatever we do here is what will contribute to the enhancements we want to add to our image. We could stop right here after this stretch, or we could try many other things... We could increase the color saturation so the structures we're "revealing" also come with an increase in color visibility... We could apply some wavelets to enhance the structures we're bringing out of the "darkness", we could use curves and/or PixelMath to intensify the contrast in these structures, we could even experiment with HDRWT...

And these improvements aren't limited to the MS image only. We could also add some sharpening to the SS image for example, or also experiment with any of the processes mentioned in the last paragraph. Likewise for the LS image.

This is not to say we should go crazy. We must keep our eyes on our goals and on what the data is "telling" us. My point is only that we have a plethora of tools that may (or may not) help us achieve that goal, and that's part of the fun I find in image processing - we can experiment with these tools to see which ones help us reach the goals we aim to achieve.

In this example I decided to just stay with this somewhat subtle histogram stretch, since that's all I want at this point.

Of course, after our stretch of the MS image, everything else is also stretched, and we don't want to bring all of this back to our image (we could, but it's not what we decided we wanted to do during this session), so the next step is to create a luminance-based mask that will protect everything but the darkest areas in our image.

4 - Creating the luminance-based mask

To create our mask we start by extracting the luminance from the original image.

With that done, we do a slightly aggressive histogram stretch, then use the ATWT tool to reduce the image to large structures. We don't want the mask to be based on very, very large structures, so in this case excluding only the first five layers of a dyadic sequence works well. The reason we don't want to go further, to say 6, 7 or 8 layers, is because, as we will see in a minute, it's not a bad idea to include in the mask some bright "spotting" caused by large stars. If we extracted only the very large scale structures, such "spots" would be gone, since they would fall under a "smaller scale" category. Also, our mask might not protect our data well.

Here's our mask after the histogram stretch and isolating large (but not very large) scale structures:

Now we need to adjust our mask to protect everything but the darkest areas. Luckily, this is easy to define with the Binarization tool. Well, actually it's not really about luck: since we're targeting only the areas with the darkest background, a threshold point can be found that isolates these areas from the rest.
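As a hedged sketch of this mask-building sequence (stretch, keep only large scales, binarize), using a simple power-law stretch and a Gaussian blur in place of the histogram tool and ATWT, with purely illustrative parameter values, and with my own convention that 1 means "unprotected / will be stretched":

import numpy as np
from scipy.ndimage import gaussian_filter

def darkest_area_mask(lum, threshold=0.08, sigma=12.0, smooth=4.0):
    stretched = lum ** 0.4                      # quick non-linear stretch
    large = gaussian_filter(stretched, sigma)   # drop the small-scale structures
    dark = (large < threshold).astype(float)    # Binarization: keep only the darkest background
    # Soften the transitions so the masked stretch blends smoothly.
    return np.clip(gaussian_filter(dark, smooth), 0.0, 1.0)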

In this case, such a "darkest area" is clearly the one above the two IFN arches, but if we had other similarly dark areas in the image, that's ok. If that happens, we can decide whether to manually mask those other areas or leave them. If we leave them, all that would happen is that we would be "bringing up to par" the fainter signal in those areas as well, which may be a very good thing depending on the case. If we choose to mask them out manually, that's also ok, as long as we have a good reason, both from an artistic and a documentary point of view (more on that in a second).

After visually adjusting the threshold with the Binarization tool on our to-be mask, this is what we get:

Indeed, we have been left with the darkest regions in the image. Two things call our attention:
  • We haven't found a threshold point where only the area we're targeting is excluded.

    So we need to decide whether we'll leave the mask as it is, meaning our stretch will affect those areas as well, or whether we choose to manually protect those areas (making them white, basically). By looking at the original image and at a heavily stretched view of our almost-unprocessed image, we can see that those other "black islands" happen over areas that don't have any significant additional IFN data, so if we apply the additional stretch, it will not contribute to enhancing very faint details; instead, rather than "extracting" additional very faint data, we would remove from the image the fact that the IFN is less dense in those areas in contrast with their surroundings. This brings up a difficult question that I'll address in a second.

  • There are some bright circles in our targeted area. That's ok and in fact not a bad thing as we shall see in a moment.
So what should we do with these "dark islands"? We have three options:
  1. Do nothing, and abandon this session. If we do this, our image will not document the fact that above the arches there are differences in IFN density.

  2. Leave the mask as it is. This will reveal the IFN above the arches, but our final image will remove, or at least attenuate, the also very important fact that in the "black island" areas there is a significant change in IFN density.

  3. Manually make sure the "black islands" also get mask protection. This will reveal the IFN above the arches, and preserve the fact that the areas now covered by the "black islands" have a lower IFN density.

Based on the above three options, I determined that the third option is the one that best reflects "where there is IFN and where there isn't". I conclude then that manually masking those areas is the best decision in order to meet our goals, and that the documentary value of the image is greater by choosing that option.

Although we have decided to manually mask all dark areas other than the area above the arches, before doing that, in order to smooth transitions when we use our mask, we'll apply an ACDNR (noise reduction) pass without any kind of protection adjustments or luminance mask. The reason we do this first is to see whether an ACDNR could make these "stray" dark areas more appropriate for the case at hand without our having to manually adjust the mask. After applying the ACDNR, this is what we get.

Before going any further, we notice three things:

  • First, we see that the ACDNR didn't do much as far as fading the "secondary black islands". We will therefore use the clone stamp (there's no Paintbrush in PixInsight) to manually make these areas white.

    Using the clone stamp (or a paintbrush) to "mess with masks" is quite acceptable to some people but almost sacrilegious to others. For this second group, the message is: we are about to apply a histogram stretch with a mask. This means we have already made the decision that we want to brighten some areas in our image while not doing so in others (not only that, we're stretching only very large scale structures, so we're being even more selective). And although some may think that the moment we use the clone stamp (or the paintbrush) to alter a mask we are introducing a very arbitrary process into our image, I have tried to explain the thinking that led to this decision. It is a 100% reproducible and non-arbitrary process that, rather than ignoring what the data tells us, contributes to a better representation of the real differences in IFN density.

    We will then proceed to "whiten" these black "spots" with the clone stamp tool.

  • Second, the tiniest stars that were still "seen" in the large dark area of our mask are pretty much gone after applying the noise reduction. That is good.

  • And last, we notice that there's still a blur over the areas where the brighter stars were (mainly 2-3 in this case). That is not only ok, it is actually good. This is because one could expect that, when we stretched our "MS" image, there might still be some residual flux from these stars in that image. By having a mask that gradually "protects" this area, we avoid "revealing faint stuff" that is in fact nothing but "glow" from a bright star.
 

With all this done, it is time to apply this mask to our original image. Areas in red are the areas being protected. Notice the "black islands" are gone, but not the bright spots over the few bright stars that are not protected.

All that is left for us to do is to add back all of our previously broken-down images:
  • Our image with the small scale structures.
  • Our image with the large but not so large scale structures.
  • Our image with the very large scale structures.
We of course use PixelMath for that. The operation is a simple addition: SS+LS+MS. We must make sure the "Rescale result" option is active, so the result of this operation produces an image within the allowed dynamic range. And because we're using a mask, any processing we've done on the separate images will only affect the areas left unprotected by the mask.
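In code, the recombination with "Rescale result", applied through the mask, might be sketched like this (the masked blend mirrors how a mask modulates a process, with 1 meaning "apply the new version"; the names and rescaling details are illustrative):

import numpy as np

def recombine(ss, ls, ms_processed, original, mask):
    combined = ss + ls + ms_processed
    # "Rescale result": bring the sum back into the [0, 1] range.
    combined = (combined - combined.min()) / (combined.max() - combined.min() + 1e-12)
    # The mask decides where the recombined version replaces the original.
    return (1.0 - mask) * original + mask * combined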

As an alternative, we could use different formulas that I'll mention in a minute.

And this is a closeup of our final image over the area we've targeted:

The difference from our original image is not dramatic. Yes, this is faint stuff and we want it to stay that way. We could have pushed the contrast in these areas of very faint dust to get a better view of the structures. The reason we didn't is because we didn't want to make these structures as visible as the rest - that would suggest that the density of IFN in these areas is similar to that in other areas. This may in fact be a perfectly valid goal in some cases, enriching the documentary value of the image, but since for the entire image we've tried to keep a good "IFN density balance", we've decided not to break that balance this time.

Now, instead of the SS+LS+MS formula, we could have used other formulas that would tend to maximize or minimize the effect, using our own criteria. For example, instead we could have used:

  • Original+SS+LS+MS. This will minimize the processing done on this faint stuff by including the original image - remember, we're rescaling the result of this addition, so including the original image in the equation will produce a result more similar to the original image.

  • SS+LS+(MS*x) where x is a number that, if less than one, will also minimize the stretch done on that image, and if larger than one, it will emphasize it even more.

  • Same as above but using different ratios for each - or some - of the sub-images.

And other variations. Again, although a simple addition will probably give us a result in accordance with what we want, experimentation can be fun...

You may be wondering... If we only modified the MS image, why did we need to extract the intermediate LS image? Couldn't we just extract the SS and MS images so that they both add up nicely when they're put back together?

The answer is yes, we could have done that. The purpose of creating an intermediary large scale image is actually twofold:

  • First, it gives us a chance to experiment with modifications to either or both of the LS and MS images. In this tutorial we ended up not modifying the LS image, but that's because we've skipped some of the "experimental" procedures that I went through when processing the image. It was after experimenting with different methods that I decided to make modifications only to the MS image and not the LS image.

  • Second, even if we ended up not modifying the LS image, we have considerably altered the luminosity of the MS image, so if we were to add back only the MS and SS images they would not recombine well (our SS image is not the result of subtracting MS from the original), and the result would likely NOT be well balanced. By including an intermediate image, we help rebalance the luminosity of the image. This has a similar effect to adding our scaled images along with the original.

5 - Data or artifacts?

There are a few things we can do to see whether this "dust" we now see is real or not. As a first step we're going to compare the image we started with against our final image. To do that, we invert each of them, then apply an aggressive histogram adjustment to each image separately until both of them have a similar intensity strength (meaning we have to go a bit further when stretching the original image), and then compare them:

We can see they look extremely similar, which means we haven't added anything to the final image that wasn't there before. We've simply made it a bit more visible, without inducing perceptible noise nor degrading the already bright structures in that area (in this case just stars) or anywhere else on the image.

However, this compares our original already-processed image with our result. We have not verified that the structures we've enhanced compare well with our data as it was when the image was still linear. To verify that, we will do the same but comparing the original linear image with the other two: the image we used before starting this session and our final image at the end of it:

As we can see, aside from mostly qualitative differences and some uncorrected seams and defects in the original linear image, the areas above the arches where the IFN intensity is higher match nicely across all three images, in particular the 3-shaped cloud in the middle. The conclusion I reach, therefore, is that this processing session yielded acceptable results and did not fabricate any non-existing signal.
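If we wanted to put a number on this visual check, one possibility (my own choice here, not the procedure used above, which is purely visual) is to bring both images to a comparable scale and look at the residual:

    import numpy as np

    def normalize(img):
        med = np.median(img)
        mad = np.median(np.abs(img - med)) + 1e-12
        return (img - med) / mad              # median/MAD normalization

    def residual_map(before, after):
        # Large residuals confined to star cores are expected (we changed the stars);
        # large residuals in the faint background would warn that we fabricated signal.
        return np.abs(normalize(after) - normalize(before))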

6 - Wrap up

This tutorial shows one very simple way to use multi-scale processing. Someone could argue that "all this" could have been reduced to creating a duplicate layer, blurring it, creating a third layer with the difference, stretching the blurred layer and "blending" it (adding it) with the original and third layers while using a mask. If we want to simplify the description of the process, you could in fact say that, but the difference is not only that by eyeballing a Gaussian blur we could much more easily induce unwanted and ambiguous "signal", but especially that by using wavelets we've operated in a much more controlled environment.

Having said that, the truth is that "all this" isn't complicated at all (I may have made it look complicated by writing not only about what we do, but also why we do it, what other options we have, and the thinking process behind it), and it can be wrapped up into four very simple steps, sketched in code right after the list:

  • Break the image into small, and large scale structures
  • Lightly stretch the image with the large scale structures
  • Create a mask so the process only happens on the darkest areas of the image
  • Add everything back together
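Just to make those four steps concrete, here is a minimal Python/numpy sketch of the same workflow. It is only an illustration under my own assumptions: a Gaussian blur stands in for the wavelet split, and the midtones function, the mask definition and every name below are placeholders of mine, not PixInsight's tools.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stretch_large_scales(img, sigma=32.0, midtones=0.35):
        """img: 2D float array in [0, 1], already non-linear."""
        large = gaussian_filter(img, sigma)        # stand-in for the large-scale structures
        small = img - large                        # small-scale structures (can be negative)

        # Lightly stretch only the large-scale image (standard midtones transfer function).
        m = midtones
        stretched = ((m - 1.0) * large) / ((2.0 * m - 1.0) * large - m)

        # Mask so the stretch only acts on the darkest areas of the image.
        mask = np.clip(1.0 - img / (np.percentile(img, 90.0) + 1e-6), 0.0, 1.0)
        blended_large = mask * stretched + (1.0 - mask) * large

        # Add everything back together and rescale to [0, 1].
        out = blended_large + small
        out -= out.min()
        return out / out.max()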

For someone with some experience working with PixInsight, this session could be carried out in just 10 to 15 minutes.

The possibilities however go well beyond this, and depending on our goals, we could use this technique for many very different tasks. For example, we could use wavelets over either the large or small scale images to enhance the details at those scales. Or we could apply noise reduction over some - or all - of these images to attack noise at different scales (the ACDNR tool, however, is flexible enough to let us do that without having to break down the image), adjust saturation at different scales, and a very long etcetera.

Luminance Processing - Making the IFN pop

Posted: May 1st, 2010

Intro

The purpose of this tutorial is to show how, mainly with histogram adjustments, very faint detail - in this case integrated flux nebulosity, or IFN - can be extracted from an image to the point where it reads similarly to the "faintness" we would find in brighter objects. In other words, this is not a "from raw to jpeg" tutorial.

The image

The complete image is my IFN Wide Field 2x5 mosaic. I originally captured the first 8 frames on April 6th, 7th and 8th, 2010, and I decided to process them after a not very promising weather forecast that would delay the capture of the last two frames. However, one week later, on April 16th, I was able to get out one more time and captured these two additional frames. Since I didn't want to start processing everything all over again, I processed these two frames individually before bringing them into the larger mosaic. Processing a mosaic in several steps, however, is not recommended!!

This tutorial describes the first part of the luminance processing for these two frames. The two master luminance files are available upon request.

NOTE: The screen-shots are wide because they were taken on a 2-monitor system, which is how I usually work, generally placing the image on my best monitor and the processing tools on the "bad" one :-) This, unfortunately, forced me to reduce the resolution of the screen-shots. However, if you would like to see the screen-shots at their full resolution, simply click on the corresponding "tiny" screen-shot (JPEG compression artifacts are however present and do impact the quality of the image).

The Data

The captured data was of good quality, but not a lot of it. The master files were generated from 6 subexposures of 15 minutes each, with a SBIG STL11k camera and a Takahashi FSQ106EDX telescope with the 0.7x reducer. The data was captured at Lake San Antonio, California - a dark site that sits right at a "gray" border in the Bortle scale, surrounded by blue and some green/yellow areas. For a light pollution map of Lake San Antonio, click here and look for its location on the bottom-right corner of the image.

1 - Aligning and balancing the mosaic frames

First, both frames are registered and calibrated individually, with their darks, flats (a unique set for each frame) and bias. I used DeepSkyStacker for that, but of course, any other software (PixInsight, MaximDL, CCDStack, etc.) could be used as well. After that, we load both master luminance files in PixInsight and, using an STF, we crop the images to eliminate "bad" edges. After this step, I would usually adjust for background gradients using PixInsight's DBE.

After the two master luminance files are ready to be put together, we run the StarAlignment tool, setting it to "Register/Union Mosaic" and selecting the "Generate masks" option (we're going to need those masks). The "Frame adaptation" option is nice in general cases to correct for differences in background illumination and signal strength, however, I've noticed it sometimes can wipe out very faint signal in the background. Since our goal here is to reveal as much IFN as possible, we do not check that option.

After StarAlignment has done its job, we can see both frames already nicely put together. As usually happens with astro-images, the result is very dark and we can barely see anything but stars and a faint trace of M81.

We now run the ScreenTransferFunction (STF from now on) and click the "A" icon to let PixInsight do a very aggressive automatic screen stretch, so we can see what's there. Remember that STF does not change our data, it only "stretches" it for our screen.

After this aggressive screen stretch we can already see there's "stuff" in the background. We also see that the background illumination and the signal strength are not uniform between the two frames.

We now apply the mask generated by StarAlignment, so everything we do will only affect one frame (the one we want to correct for background and signal uniformity), and using PixelMath we enter the formula ($T - Med(A9))*k1 + Med(A9)*k2, where $T is the image to which we'll apply the PixelMath equation, and A9 is the second frame. The first part of the equation, ($T - Med(A9))*k1, corrects for differences in signal strength, while the second part, Med(A9)*k2, deals with differences in background illumination. The two variables k1 and k2 are found by trial and error, until we get the most seamless transition between the two frames. In this case, the best values seemed to be 0.99 for k1 (signal strength) and 1.121 for k2 (background). Notice the formula does not break the linearity of the data.
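For readers who prefer code, here is a literal numpy translation of that PixelMath expression. T and A9 stand for the two frames as arrays, the function name is my own, and k1 and k2 would still be found by trial and error exactly as described above:

    import numpy as np

    def adapt_frame(T, A9, k1=0.99, k2=1.121):
        m = np.median(A9)                     # Med(A9)
        return (T - m) * k1 + m * k2          # linear: k1 matches signal strength, k2 the background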

After applying PixelMath, we can see that the images now match much better. If we still saw some differences, they should be corrected at this point, backing up and applying one or more DBE (background models) before the frame adaptation process.

Remember that what we're seeing is a very aggressive screen stretch, and that other than the linear correction we just made with PixelMath, the data is still untouched (if we ignore the registration and alignment process), and still linear. This means that if we wanted to perform tasks that are better done while the image is still linear, such as deconvolution, we could do so. A deconvolution in this example would probably cause more trouble than it's worth, though.

Now we'll crop the image to get rid of the funky borders:

2 - Histogram adjustments, then some more

In some situations I would do a DBE right at this point. Although a carefully executed DBE could improve the uniformity of our background, I chose not to do it in this case, to avoid constructing a background model that could alter the natural but very faint illumination from the IFN. This is not a problem with DBE but rather an attempt to avoid a possible problem with me!

We're ready, then, for a first non-linear histogram stretch. This means we'll deactivate the STF now, as from now on we want to see what we're really doing.

This first histogram stretch doesn't do much to bring out the faint detail we seek. Also notice that when I do histogram adjustments that I consider critical, I resize the histogram dialog a lot, covering my entire left monitor! This, by the way, is not a requirement, just something very nice to have :-)

If you look closely, in this stretch PixInsight is telling me I'm "dark clipping" 20 pixels (0.0001% of the pixels in the image). If you click on the screenshot, the top number under the histogram graph in the middle gives us that information. We definitely do not want to clip the histogram, especially at this early stage, but something like 0.0001% is acceptable. 20 pixels - that's okay.

Some people say that histogram adjustments should be done just once. Personally, I not only don't see why; I think that more than one histogram adjustment, if done carefully, does more good than harm. So here's a second histogram adjustment.

This time we're clipping the black point by 0.1177% according to PixInsight. That's still very much okay - a very insignificant amount, and it helps us get some contrast. Also notice that to perform this stretch, I zoomed in on the X axis of the histogram by a factor of 9. This helps a lot in finding the point where the histogram curve starts to rise. Likewise, we could zoom in on the Y axis if we liked. The histogram tool in PixInsight is, no doubt, the way a histogram tool should be for processing astro-images.

QUICK NOTE: Although I believe no processing tutorial should be understood as a cooking recipe, if you really want to replicate the exact histogram adjustments I did in this example, click on the screenshot to view the large image, and take note of the Shadows/Highlights/Midtones values.

Hmm... My notes say I only did two histogram adjustments in a row, but my screenshots show a third one. Ok, that's just fine :-)

The truth is, I will perform as many histogram adjustment iterations as I feel I need, as long as the resulting image looks better to me, I'm not clipping it, and I'm not going overboard either. When I see that I cannot improve the image to my satisfaction anymore (again, without clipping it), then I stop.
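As a rough illustration of one such iteration, here is a hedged numpy sketch that clips the shadows, applies the standard midtones transfer function, and reports the dark-clipped fraction so we can keep it in the tiny percentages discussed above. The parameter values and function name are placeholders of mine, not the values used in the screenshots:

    import numpy as np

    def histogram_stretch(img, shadows=0.002, midtones=0.30):
        clipped_fraction = np.mean(img <= shadows)            # pixels pushed to black
        x = np.clip((img - shadows) / (1.0 - shadows), 0.0, 1.0)
        m = midtones
        stretched = ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
        print(f"dark-clipped: {100.0 * clipped_fraction:.4f}% of pixels")
        return stretched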

 

This may be a good time to see what's going on with the galaxy M81  (the big white smudge on the bottom-right, around 5 o'clock)...

As you can see, after these histogram stretches M81 is starting to look a bit saturated. It's not completely blown out, but it's getting there... Here's the deal: I know that in the 8 mosaic frames I've already processed, M81 is already there (barely, but it's there), nicely processed and all, so when it's time to overlap these two frames with the already processed mosaic, I'll just place this layer under the already processed image. In other words, I'm not too concerned about M81 in this image, which gives me more freedom to work my way toward making the IFN pop nicely.

Having said that, the short answer for dealing with this is of course using a "gradual lightness-based mask with variable midtones and white point". Okay, that was a mouthful, but in practice it is very simple (a small code sketch follows this list). The trick is that we don't want the mask to fully protect the galaxy, and this is done on two different but related fronts:

  • By building the mask based on the lightness, the mask will be gradual; that is, it will be brighter in the brighter areas of the galaxy (protecting them better from saturating), and fainter in the outer arms.

  • In addition to that, we do NOT want the mask to fully protect our image. That is, we want our stretches to actually affect the galaxy, just not nearly as much as everything else. To do that, we "dim" the mask by lowering its white point. This process of adjusting the white point of the mask right where the galaxy is should be repeated for every new histogram stretch we perform - that is, after we've done our stretch with the mask in place, we adjust the histogram of the mask until we see the galaxy (or whatever other object we're protecting) blend well with its surroundings. Of course, we do this with a real-time preview.
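Here is a small sketch of that idea. To keep it simple I model the mask directly as a "protection fraction" (1 = fully protected), which sidesteps PixInsight's particular mask conventions; protection_cap plays the role of the lowered white point, and every name and value is a placeholder of mine:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def galaxy_protection(lightness, protection_cap=0.85, blur_sigma=4.0):
        m = gaussian_filter(lightness, blur_sigma)    # smooth => gradual protection
        m = m / (m.max() + 1e-12)                     # brighter areas -> more protection
        return np.clip(m, 0.0, protection_cap)        # but never protect 100%

    def protected_stretch(img, protection, stretch):
        # Apply the stretch everywhere, then blend it back according to the protection.
        return protection * img + (1.0 - protection) * stretch(img)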

3 - This is getting noisy

If our goal is to reveal all this faint stuff in the background, we can see we're starting to get what we want. However, this comes at a price: our stars are getting fatter (especially the very small stars, which are becoming anything but subtle details), and as we stretch this faint signal we're also bringing up a lot of noise - most of the signal we want so much shows itself as having a very poor signal-to-noise ratio. As for the stars, since we have not adjusted the white point even once, the problem is fortunately not very severe.

If you click on the screen-shot you'll see the image really is getting noisy. Unfortunately, the JPEG compression even for the "real sized" screen-shot, and the fact that even the large screen-shot shows the image at a reduced size, don't give a good picture of what we're dealing with, but let's just say that yes, the image is getting noisy, because not only did we stack just 6 subexposures during stacking/calibration, but the IFN sits barely above the noise.

So in any case it's clear that we need to solve these two problems, and quickly. It makes sense to "correct" for the noise first, as the operations to correct the stars will work better on an image with a better SNR.

To deal with the noise, my favorite tool is the ACDNR tool from PixInsight. ACDNR is a very CPU intensive operation, so while we tweak the parameters, it's better to experiment with a preview, as shown in the screen-shot below. We try to select an area in our preview that has some variety: very low SNR areas, low SNR areas and if possible, also some high SNR areas. We don't really have any area high in SNR here, so an area with very low and low SNR areas works ok.

A good understanding of all the parameters in ACDNR is important, so if you're not familiar with this tool, refer to the ACDNR section in my Unofficial PixInsight Guide.

In this case, we definitely check the "lightness mask" option. We also use the default edge protection values. I'm not sure why I didn't take advantage of the prefiltering process, as it does an excellent job with very noisy images (maybe I did, but my notes don't mention it and I'm just reading from the screen-shot). I would definitely suggest previewing the effects of the prefiltering process in ACDNR. Regardless, I chose the morphological median robustness, as it works better at preserving sharp edges.

As for the amount and iterations parameters, I always favor small amounts but several iterations (it usually works better and more gradually). Last, I used a high value for the standard deviation because the main purpose of reducing the noise at this point - although we know there's going to be little "free" background in our image - is to attack what would otherwise be considered "background noise", and high StdDev values usually work best for that. This is somewhat contrary to what using a high StdDev value really means - it means we're going to reduce large scale noise, and what we have here is noise at different scales - but it works well, as long as you don't overdo it.
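ACDNR's internals are PixInsight's own, so the following is only a generic stand-in meant to illustrate the "small amount, several iterations, protected by a lightness mask" idea, not the actual algorithm; every name and value below is a placeholder:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gentle_denoise(img, amount=0.2, iterations=4, sigma=2.5):
        protection = img / img.max()           # bright (higher SNR) areas get less smoothing
        out = img.copy()
        for _ in range(iterations):
            smoothed = gaussian_filter(out, sigma)
            softened = (1.0 - amount) * out + amount * smoothed   # small amount per pass
            out = protection * out + (1.0 - protection) * softened
        return out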

After applying the ACDNR process, I go, once again, to the histogram tool and see whether something more can be squeezed out now that some of the noise is gone. Usually there is, and our faint stuff is looking better. Here's the image after our fourth histogram adjustment. It may look dirty (I don't claim all the noise is gone!), but in this small screen-shot, what feels like noise is in fact caused by the thousands of stars that are overwhelming the image. That, and the JPEG compression artifacts in the screen-shot.

4 - Star "management"

Now that we've stretched our image a great deal and somewhat dealt with the noise, it's time to do something about the stars. For this, we'll use the MorphologicalTransformation (MT) tool in PixInsight. For those not familiar with MT, you can think of it as an extremely fancy and customizable maximum/minimum filter - but seriously, extremely versatile!!

Applying an MT to an image to correct the stars unquestionably requires a star mask that protects everything but the stars. Without a star mask, the MT would be applied to everything in the image, and not only would the results not be satisfactory: an MT takes a well-defined structuring element as a model for the transformation. In this case the structuring element is a circle (stars are, well, round), so it makes sense to apply the transformation only to circular structures. A star mask that protects everything but the stars does exactly that.

So we start by creating a good star mask...

A star mask can be easily created with PixInsight's StarMask module, although on this occasion I chose to create it "manually" using the ATrousWaveletTransform (ATWT) tool. This is easy - just eliminate the "R" (residual) layer from a duplicate of the lightness, on a 4-layer dyadic sequence, and we're done. Of course, you can make further adjustments to your mask, depending on what you want the mask to accomplish: increase the bias level at different layers, or the noise reduction, etc.
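For those without PixInsight at hand, the idea can be approximated in a few lines: throw away the large scales (a heavy blur below is a crude stand-in for the residual of a 4-layer wavelet decomposition), keep what remains, and strengthen it. The names and values are mine, not StarMask's or ATWT's:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def quick_star_mask(lightness, residual_sigma=8.0, threshold=0.02, soften_sigma=1.5):
        detail = lightness - gaussian_filter(lightness, residual_sigma)  # keep small scales only
        mask = np.clip(detail / threshold, 0.0, 1.0)                     # strengthen (white/mid point)
        return gaussian_filter(mask, soften_sigma)                       # soften the edges a bit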

I also often like to binarize the star mask and then blur it a bit with a Gaussian blur - MT works well with such strong masks. In this case, however, I chose to simply adjust the white/mid points of the histogram to make the mask stronger, which also works well. Here's what the star mask looks like:

And here's our image protected by the star mask (the areas in red are the protected areas):

With everything but the stars protected, we adjust the parameters of the MT and apply. In this case I opted for a small structuring element (3x3), as it seemed to work better than larger ones, which were introducing a few halos around some stars. These halos can be corrected by adjusting MT's threshold parameters, but in this case I just went with the 3x3. I used Morphological Selection as my operator of choice. Most people tend to use the "erosion" or "closing" methods, but "morphological selection" is usually my favorite because it acts as a blend between the erosion and dilation methods, and I can then adjust the Selection slider to emphasize either the dilation or the erosion effect; other operators can also do a decent job.
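As a rough sketch of what "morphological selection" does - under my assumption that the Selection value simply blends the eroded and dilated results - here is a scipy version with a 3x3 structuring element, applied only where the star mask is white; the 0.25 selection value is a placeholder:

    import numpy as np
    from scipy.ndimage import grey_erosion, grey_dilation

    def morphological_selection(img, star_mask, selection=0.25, size=3):
        eroded = grey_erosion(img, size=(size, size))
        dilated = grey_dilation(img, size=(size, size))
        blended = (1.0 - selection) * eroded + selection * dilated   # low selection -> mostly erosion
        return star_mask * blended + (1.0 - star_mask) * img         # act only on the stars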

Here's the image after the morphological transformation. Notice how the very tiny stars are no longer disturbing the scene, and the larger ones are not "in your face" anymore. This also helped clean the image of the overwhelming, distracting field of stars.

Final words

Although the image is certainly not finished, as a result of our last operation, the IFN has become the main "attraction" in our image, and we can see  wisps of it all over the place. Here's a slightly bigger view of the image as it is at this point (click on the screen-shot for a larger version):

As I just mentioned, the processing of our luminance data is clearly not finished. We may want to reduce the flux and halos of the brighter stars (I often do). And then comes the color, which, after being processed independently, will make us do some adjustments to the image once it's combined with the lightness (delinearized luminance). On top of that, in this particular case we'll then need to make sure everything matches nicely with our already processed 2x4 mosaic (one big reason why mosaics should be processed all at once, and not in pieces, if at all possible).

The good part is, we've achieved our goal of making the IFN pop nicely, without an overwhelming amount of noise (although the JPEG compression artifacts may give the wrong impression) or the stars "eating up" the view, and further processing of this image shouldn't be as demanding - it should be more like processing a "regular" astro-image.

We haven't used the curves tool, DDP, wavelets (other than to build a star mask), HDR tools, etc. In a nutshell, all we've really done is:

  • Three histogram stretch iterations
  • One pass at reducing the noise
  • Another histogram stretch
  • One morphological transformation to reduce star size and impact
  • One last histogram stretch

In the next tutorial, also using this image of the IFN as an example - just not these two frames - I will describe a way to make some of the very faintest stuff in this image pop: structures sitting in an area that, after all the techniques described in this tutorial and more, was still presenting a very dark sky background. We'll do it using some simple multi-scale techniques.

Formulas for Photoshop blending modes

Posted: April 21st, 2010

Do you want to apply one of the Photoshop blending modes to two images, but using a PixelMath-like tool? Here's a list of the current Photoshop blending modes and the equivalent PixelMath formulas that I could find. While some of the formulas are precisely what Photoshop does, others are just approximate guesses. Also, blending modes that cannot be achieved by a straight PixelMath operation - such as Luminosity, Hue or Color - are excluded.

The formulas below assume the pixels in the image have a numeric range between 0 and 1, which is the default in PixInsight.

In most cases, in order to mimic Photoshop's behavior, the "Rescaled" option in PixInsight's PixelMath should be checked, particularly for those modes that can generate out-of-range values. Other times it doesn't matter, such as with the Darken and Lighten modes. A small code sketch of a couple of these modes follows the table.


  • Darken - commutative
    Formula: min(Target, Blend)

  • Multiply - commutative
    Formula: Target * Blend

  • Color Burn - non-commutative
    Formula: 1 - (1-Target) / Blend

  • Linear Burn - commutative
    Formula: Target + Blend - 1

  • Lighten - commutative
    Formula: max(Target, Blend)

  • Screen - commutative
    Formula: 1 - (1-Target) * (1-Blend)

  • Color Dodge - non-commutative
    Formula: Target / (1-Blend)

  • Linear Dodge - commutative
    Formula: Target + Blend

  • Overlay - non-commutative
    Formula: (Target > 0.5) * (1 - (1-2*(Target-0.5)) * (1-Blend)) + (Target <= 0.5) * ((2*Target) * Blend)
    Note: a combination of Multiply and Screen; also the same as Hard Light commuted.

  • Soft Light - non-commutative
    Formula: (Blend > 0.5) * (1 - (1-Target) * (1-(Blend-0.5))) + (Blend <= 0.5) * (Target * (Blend+0.5))
    Note: a combination of Multiply and Screen (the formula is only approximate).

  • Hard Light - non-commutative
    Formula: (Blend > 0.5) * (1 - (1-Target) * (1-2*(Blend-0.5))) + (Blend <= 0.5) * (Target * (2*Blend))
    Note: a combination of Multiply and Screen; also the same as Overlay commuted.

  • Vivid Light - non-commutative
    Formula: (Blend > 0.5) * (1 - (1-Target) / (2*(Blend-0.5))) + (Blend <= 0.5) * (Target / (1-2*Blend))
    Note: a combination of Color Burn and Color Dodge.

  • Linear Light - non-commutative
    Formula: (Blend > 0.5) * (Target + 2*(Blend-0.5)) + (Blend <= 0.5) * (Target + 2*Blend - 1)
    Note: a combination of Linear Burn and Linear Dodge.

  • Pin Light - non-commutative
    Formula: (Blend > 0.5) * max(Target, 2*(Blend-0.5)) + (Blend <= 0.5) * min(Target, 2*Blend)
    Note: a combination of Darken and Lighten.

  • Difference - commutative
    Formula: | Target - Blend |

  • Exclusion - commutative
    Formula: 0.5 - 2*(Target-0.5)*(Blend-0.5)
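If you want to prototype a blend outside PixelMath, here is a small numpy rendering of two entries from the table (Screen and Linear Light), with an optional rescale standing in for the Rescaled option. Target and Blend are assumed to be float arrays in [0, 1], and the function names are mine:

    import numpy as np

    def rescale(img):
        img = img - img.min()
        rng = img.max()
        return img / rng if rng > 0 else img

    def screen(target, blend):
        return 1.0 - (1.0 - target) * (1.0 - blend)        # always stays in [0, 1]

    def linear_light(target, blend, rescaled=True):
        out = np.where(blend > 0.5,
                       target + 2.0 * (blend - 0.5),
                       target + 2.0 * blend - 1.0)         # can go out of range
        return rescale(out) if rescaled else np.clip(out, 0.0, 1.0)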

FSQ106 ED/EDX + Extender Q + STL "HowTo"

Posted: March 1st, 2009

If you have a Takahashi FSQ106ED (or EDX) telescope with the camera rotator (CAA #17), and a SBIG STL camera, and would like to use the Extender Q, the combination of adapters you need can get a bit tricky. Below I explain in detail a combination that works very well for me.

These are the pieces and adapters you'll need - in addition to the scope, camera rotator and STL camera, of course.

If you use a FW-8 filter wheel instead of the internal wheel, don't get the TCD0012 part. Get the TCD0012S instead. In the US I believe the only place you can purchase these new is at TNR.

First, you'll need to buy or make shorter thumbscrews for your Visual 2":

I was able to find them at a local Home Depot, so hopefully it won't be hard for you to find them. Depending on your ability, it might be easier to just cut down the ones that come with it. I wanted to keep the originals just in case, which is why I decided to get a new set. If you get the hex Allen type, even better.

Now, why use shorter thumbscrews? Because this piece will go all the way inside the TRDC0106, as shown in the two images below, and the original thumbscrews that come with the Visual 2" are just too long, so you won't be able to insert the Visual 2" all the way into the TRDC0106 unless you use shorter thumbscrews.


Now that you know that, let's build the optical train.

Step 1

The first thing you need to do is to put the extender inside the Visual 2", and tighten the (shorter) thumbscrews. For safety, don't just rely on your fingers.

You do this first because if you did step 2 first, you wouldn't be able to do this step afterwards - the Visual 2" thumbscrews would be inside the TRDC0106, and there would be no way you'd be able to tighten them then!

Step 2

Now you insert the Visual 2" into the TRDC0106. By the way, you could - in fact, I recommend it - put the TRDC0106 on your scope already, and build the optical train as you go.

Step 3

Now it's time to put the CA35 inside the extender. Make sure the CA35 thumbscrew is tight. Not only are you going to hang a very expensive 4lb CCD off it later, you also want to make sure the optical train stays as straight as possible to avoid flexure and other evil things.

Step 4

Now it's time to put the threaded TCD0012STL into the CA35.

Once that's done, all you have left is to connect the STL to the TCD0012STL and you're done.

Happy imaging!!

