Posted: April 26th, 2010
About HLVG
HLVG (Hasta La Vista, Green from now on) is my first attempt at writing a Photoshop plug-in. Although I don't use Photoshop much for processing my images, I've always been curious about how a Photoshop plug-in is written. The HLVG filter seemed a good way to start, since it's a very simple plug-in.
HLVG is a chromatic noise reduction tool that attempts to remove green noise and the green casts such noise may cause in some images. It is based on PixInsight's SCNR Average Neutral algorithm.
The idea is not new. We all know that, with very few exceptions (some planetary nebulae, comets, etc.), there are no green objects in the sky. Therefore, if we've already correctly calibrated and color-balanced an image and it's free of gradients (to the best of our ability, at least), we have to assume that anything that still looks green in our image has got to be noise... Don't mistake gradients for noise: gradients are best dealt with by subtracting a good background model. Chromatic noise, on the other hand, is tricky, since it "overwrites" the real data we want.
There are several techniques widely used to deal with this problem; however, most of them rely on selections and adjustments that are not always easy to execute. SCNR (the base algorithm used by this plug-in) is, in my opinion, one of the most reliable methods to deal with "green noise", and it works the same every time, without having to worry about anything: just click OK and you're done.
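For the curious, the Average Neutral flavor of SCNR is remarkably simple: the green value of every pixel is clamped to the average of red and blue, so green can never exceed what the other two channels justify. A minimal sketch in Python/NumPy (the function name is mine; HLVG's actual implementation may differ):

```python
import numpy as np

def scnr_average_neutral(rgb: np.ndarray) -> np.ndarray:
    """Remove green noise: clamp G to the average of R and B.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    out = rgb.copy()
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Average Neutral: G' = min(G, (R + B) / 2)
    out[..., 1] = np.minimum(g, (r + b) / 2.0)
    return out
```

For example, a greenish pixel (0.2, 0.8, 0.3) becomes (0.2, 0.25, 0.3): the excess green is gone, while pixels whose green is already below the red/blue average are left untouched.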
The only process applied between the BEFORE and AFTER images below is the HLVG plug-in with the Strong option selected, over a "color blended" layer.
The following three examples are somewhat "extreme", and in fact a couple of them include uncorrected gradients, which HLVG obviously does NOT correct (see section above), but hopefully they serve the purpose of showing the effect of applying HLVG to a "green polluted" image.
To use HLVG you need a computer running Windows (tested on XP, Vista and Windows 7, both 32 and 64 bits) and Photoshop (tested on Photoshop 7, CS2, CS3, CS4 and CS5). HLVG will likely work under previous versions of Windows and Photoshop, and likely newer ones, but I haven't tested them. If you successfully run HLVG on any untested version of Windows and/or Photoshop, let me know.
Why not Mac? ... Short answer: because I don't have one, therefore I cannot compile and test the plug-in for the Mac.
Downloading HLVG is easy, simply click one of the links below:
For any non-64-bit version of Photoshop: DOWNLOAD HLVG
For 64-bit Photoshop ONLY: DOWNLOAD HLVG 64bits
By the way, HLVG is free (as in "free beer") and I want it to stay that way, so permission is NOT given to include this plug-in in any commercial package. If you downloaded HLVG, whether standalone or as part of a package, and paid for it, please let me know. Having said that, if you find HLVG useful and would like to make a small donation, please use the "Donate" button below. The Donate button will take you to PayPal - don't worry when you see the donation goes to AR Networks. Yes, that's me.
Once downloaded, unzip the HLVG.zip file. This will extract the HLVG.8bf file.
Once extracted, copy the HLVG.8bf file to the Plug-Ins directory of your Photoshop installation. This is usually something like C:\Program Files\Adobe\Adobe Photoshop CS2\Plug-Ins, but it may differ depending on the operating system and the Photoshop version you're running.
After you've copied the file to the Plug-Ins directory, start (or restart) Photoshop.
When you're ready to use HLVG (you'll need at least one image loaded in Photoshop, preferably an image already color balanced and with any gradients already corrected), go to the Filters menu, find the DeepSkyColors menu option, select it, then click on the HLVG sub-menu option. If you don't see it there, chances are you did something wrong when you copied the HLVG.8bf file, so double-check you indeed copied it to the right directory. Again, don't forget to restart Photoshop any time you copy a plug-in filter to the Plug-Ins directory, so Photoshop knows it's there.
Choose Strong, Medium or Weak, depending on whether you want HLVG to go really hard on getting rid of green noise, only moderately so, or just slightly. The recommended setting is Strong.
A note about lightness
HLVG may affect the lightness of the image a bit (the L component of the image if converted to CIELab). This is because the current version of HLVG does not save the lightness prior to applying its "degreening" algorithm (a future version might).
In order to preserve the exact lightness as the original image, it is recommended to follow this process:
- Duplicate the layer where you'd like to apply HLVG.
- Set the blending mode of that new layer to "Color".
- Apply HLVG over that new layer.
- Merge that new layer back with the original.
The above process will get rid of unwanted green noise and hues without affecting the lightness at all. HLVG doesn't degrade the lightness considerably, so using it directly still does a decent job, although I do recommend following the method just described.
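The Color-blend workaround above can also be expressed numerically: record the lightness before degreening and restore it afterwards. A hedged sketch, using Rec.709 luma as a simple stand-in for the CIELab L component (the function name and the per-pixel rescaling are my assumptions, not HLVG internals):

```python
import numpy as np

def scnr_preserve_luma(rgb: np.ndarray) -> np.ndarray:
    """Apply Average Neutral SCNR, then restore the original Rec.709 luma.

    Approximates the 'Color'-blend workaround: the chrominance changes,
    the lightness (approximated here by luma) stays.
    rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    w = np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights
    luma_before = rgb @ w
    out = rgb.copy()
    # Average Neutral SCNR: G' = min(G, (R + B) / 2)
    out[..., 1] = np.minimum(out[..., 1], (out[..., 0] + out[..., 2]) / 2.0)
    luma_after = out @ w
    # Scale each pixel so its luma matches the original (avoid divide-by-zero)
    scale = luma_before / np.maximum(luma_after, 1e-12)
    return np.clip(out * scale[..., None], 0.0, 1.0)
```

Note that the proportional rescaling can clip very bright pixels, in which case the luma match is only approximate; the layer-based method inside Photoshop remains the recommended way.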
Bit depth
HLVG should work on images of 8, 16 or 32 bits per channel. If you find that HLVG doesn't work with your image, again, let me know.
Color Mode
HLVG only works well when the image is in RGB mode. You can still run HLVG in Lab or CMYK mode, for example - HLVG won't complain - but the results will not be what you expect. Although this is something HLVG should arguably take care of internally (say, converting the image to RGB mode, then back to whichever mode it was in before), at this point HLVG does not check the current color mode; it simply assumes the image is in RGB mode, so make sure your image is in RGB mode before using HLVG.
HLVG and masks
If you preselect an area of your image - with the lasso tool, select range, etc. - and then run HLVG, you may be in for a disappointment: HLVG completely ignores your selection and applies the effect to the entire image. Although I believe HLVG works best when we leave it up to the plug-in to decide which areas require "HLVG'ing" and which don't, I understand this behavior of ignoring a selection may feel odd to some people. While there's a chance I may "fix" this in a future version, if you must apply HLVG to just a specific area of your image, you can still do it: create a duplicate layer, apply HLVG, invert the selection and hit the Delete key to get rid of the area you didn't want to "degreen". I don't recommend it, but if you must, that workaround should do the job.
Posted: April 23rd, 2010
About two years ago, at an HOA meeting, we decided to shut down a private street light, mainly because the light wasn't necessary (the area already suffers from an excess of illumination).
To me it was very good news, for a number of reasons. First because, although I barely do any imaging from home, with that light on even narrowband imaging is greatly affected, as the light sits precisely in front of my "setup area" and in the only usable part of the sky I get from home (SE to SW, the light being at SE). Second, because the light is so bright, and so completely unshielded, that at night I can see the ceiling of my house excessively illuminated, not to mention how bright everything is even when all the lights inside my home are turned off - something that, astronomy aside, is rather annoying to say the least. Of course I could use window blinds, but then, what is the point of making my house look like one of those monuments that are lit up so people can admire them? :-) Anyway, we all agreed to turn that light off, and it stayed off ever since. Until...
A few days ago a city inspector took a walk around the complex, saw the light off and said "that street light should be on or you'd be violating city code", without even checking whether the light being off was in fact against city code. Mind you, the street light is private and paid for by us; it is not a city light, so all the city needs to verify is whether the area is already sufficiently illuminated according to certain directives - which I can assure you it is. In any case, last night the light was on again.
Here's an image of the street light, taken before it was shut down about two years ago. The image doesn't do the light justice, because the photo was taken with flash and so the exposure is just a split second.
I will now try to see if we can at least "officially" add some sort of shielding, but I'm not very optimistic. If I was doing little imaging from home before, with this light back on my imaging time from home has just been reduced to zero.
Posted: April 21st, 2010
Do you want to apply one of the Photoshop blending modes to two images, but using a PixelMath-like tool? Here's a list of the current Photoshop blending modes and the closest equivalent PixelMath formula I could find for each. While some of the formulas are precisely what Photoshop does, others are just an approximate guess. Also, blending modes that cannot be achieved by a straight PixelMath operation - such as Luminosity, Hue or Color - are excluded.
The formulas below assume the pixels in the image have a numeric range between 0 and 1, which is the default in PixInsight.
In most cases, in order to mimic Photoshop's behavior, the "Rescaled" option in PixInsight's PixelMath should be checked, particularly for modes that can generate out-of-range values. Other times it doesn't matter, as with the Darken and Lighten modes.
| Mode | PixelMath formula | Notes |
|------|-------------------|-------|
| Multiply | Target * Blend | |
| Color Burn | 1 - (1-Target) / Blend | |
| Linear Burn | Target + Blend - 1 | |
| Screen | 1 - (1-Target) * (1-Blend) | |
| Color Dodge | Target / (1-Blend) | |
| Linear Dodge | Target + Blend | |
| Overlay | (Target > 0.5) * (1 - (1-2*(Target-0.5)) * (1-Blend)) + (Target <= 0.5) * ((2*Target) * Blend) | A combination of multiply and screen. Also the same as Hard Light commuted |
| Soft Light | (Blend > 0.5) * (1 - (1-Target) * (1-(Blend-0.5))) + (Blend <= 0.5) * (Target * (Blend+0.5)) | A combination of multiply and screen (the formula is only approximate) |
| Hard Light | (Blend > 0.5) * (1 - (1-Target) * (1-2*(Blend-0.5))) + (Blend <= 0.5) * (Target * (2*Blend)) | A combination of multiply and screen. Also the same as Overlay commuted |
| Vivid Light | (Blend > 0.5) * (1 - (1-Target) / (2*(Blend-0.5))) + (Blend <= 0.5) * (Target / (1-2*Blend)) | A combination of color burn and color dodge |
| Linear Light | (Blend > 0.5) * (Target + 2*(Blend-0.5)) + (Blend <= 0.5) * (Target + 2*Blend - 1) | A combination of linear burn and linear dodge |
| Pin Light | (Blend > 0.5) * max(Target, 2*(Blend-0.5)) + (Blend <= 0.5) * min(Target, 2*Blend) | A combination of darken and lighten |
| Difference | abs(Target - Blend) | |
| Exclusion | 0.5 - 2*(Target-0.5)*(Blend-0.5) | |
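For quick experimentation outside PixelMath, these formulas translate directly into NumPy. A small sketch covering a handful of the modes (the function and mode names are mine; clipping is used here as a simple stand-in for PixelMath's Rescale option, which instead maps the result's full range back into [0,1]):

```python
import numpy as np

def blend(target, blend_img, mode):
    """Apply a Photoshop-style blend; inputs and outputs in [0, 1]."""
    t, b = np.asarray(target, float), np.asarray(blend_img, float)
    eps = 1e-12  # guard against division by zero
    if mode == "multiply":
        out = t * b
    elif mode == "screen":
        out = 1 - (1 - t) * (1 - b)
    elif mode == "color_burn":
        out = 1 - (1 - t) / np.maximum(b, eps)
    elif mode == "linear_light":
        out = np.where(b > 0.5, t + 2 * (b - 0.5), t + 2 * b - 1)
    elif mode == "difference":
        out = np.abs(t - b)
    elif mode == "exclusion":
        out = 0.5 - 2 * (t - 0.5) * (b - 0.5)
    else:
        raise ValueError(f"unknown mode: {mode}")
    # Clip out-of-range values (a simpler substitute for Rescale)
    return np.clip(out, 0.0, 1.0)
```

For example, blending two mid-gray 0.5 values gives 0.25 with multiply and 0.75 with screen, the expected symmetric darkening/lightening pair.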
Posted: April 21st, 2010
In the imaging world I often see people making comments about someone else's image having the colors too saturated, or too weak, or too strong, vivid, etc.
Truth is, in general terms, there's not much of a secret to getting the amount of saturation we want. Here, right means what is right for us.
Depending on the tool you use, a saturation adjustment is accomplished either by moving a slider left and right or by drawing a curve... Seriously, how could anyone get that wrong? And wrong to the point that they will benefit from our feedback telling them they went too far or stopped too short?
Often times, any feedback about saturation is therefore, in my opinion, just a waste of time. It's not contributing anything to the author's goal; it's just a statement about what our own goal would have been. Now... does this person want his or her image to look the way we wanted, or the way they wanted?
Now... I said before that making a saturation adjustment is easy, but even this requires some know-how. Reaching the saturation level we're after isn't always just a matter of sliding the saturation slider, and sometimes it calls for techniques that keep the noise from being magnified as well. My point, though, is that I assume the person behind the image adjusted the saturation right where he or she wanted it, so my feedback about the saturation level is just as good as saying "I like your image" or "I don't like it"... In other words, rather useless feedback.
There are a few cases however where feedback about saturation can be useful....
One such case is when the operator simply isn't sure whether he or she gave the image the right amount of saturation. In that case, my feedback would be "well, does it look right to you?", but there may be a few factors behind the operator asking that question: maybe this person has little experience and is trying to assess how their interpretation matches other people's opinions. Or maybe the person was using a monitor that isn't well calibrated...
Another case would be when we know the goals the author was trying to achieve with the particular saturation intensity he or she used, and they're asking us whether we feel those goals have been achieved.
Other than that, unless I have a strong feeling that such interpretation missed some valuable opportunity to depict something that would have contributed to the image being more consistent with that person's view, I will simply judge quietly whether I personally like his or her style/interpretation more or less, but take such interpretation at face value: this is how the author wanted to represent this image.
Posted: April 8th, 2010
Click here for a larger version
I've ended up calling this the 1,200 miles image, because that's the total number of miles I ended up driving in order to capture the data for this image. By the way, that is a record for me! We do crazy things sometimes just to capture a bunch of photons!
The above wide field image features a large area of the IFN (integrated flux nebula) of the polar spur - probably the largest IFN image captured by an amateur so far. The field of view is approximately 15x9 degrees. The IFN is extremely faint, so much so that almost every image taken of this area shows either no IFN at all or, at most, a barely perceptible hint of it.
In simple terms, the IFN is dust clouds. However, unlike most known nebulae, these clouds do not reflect, scatter or fluoresce due to the radiation of any individual star or cluster of stars, but due to the integrated flux of all the stars in the Milky Way. In other words, the IFN is illuminated by the glow of our own galaxy.
Steve Mandel once said that the IFN was like photographing something through a dirty window, the IFN being the dirt on that window, except that the "dirt" itself is beautiful to behold.
Because the IFN is so faint, capturing it is a challenge: even under good skies it will sit barely above the noise. This means that once you've captured the data and try to bring out the signal from the IFN, you will bring the noise up along with it, making it almost impossible to discern between noise and nebula. This is the main reason most images show little or no IFN - as astrophotographers deal with the noise, trying to make it disappear, the IFN disappears with it.
For that reason it is important to image this area from as dark skies as you can get. Otherwise, the sky glow will completely bury the signal from the IFN.
How to understand this image
From a scientific point of view, the mind establishes an order in what's being seen, because the IFN is the main structure and it appears well defined; from other perspectives, however, the image may simply look weird.
Also, one may feel this image has a dirty look, perhaps even an artificial one - not only because we're looking at the sky through a "dirty window", but also because it would be impossible to preserve a "natural" balance when attempting to reveal the extremely faint IFN structures with so little data and without blowing out the brighter structures. The purpose of this image is to reveal those structures, not to create a natural and silky-smooth "pretty" composition. Beauty in this case is in the eye of the beholder.
Capturing the data
I captured this 2x5 mosaic (10 frames) first over three days in a row, from Tuesday April 6th through Thursday the 8th, and then during one more session on April 16th, 2010, taking advantage of an unusual break in the cloudy weather we've been having this year.
The first night I traveled to Dinosaur Point to take the data for three frames, but I wasn't very happy (too much skyglow), so I decided that for the next two days I would travel to Lake San Antonio (about 170 miles from home), which would give me better skies (approx. 6.5 ~ 6.6 NELM). After all that was done, I went back to Lake San Antonio for a third night on April 16th to take the last two frames.
In the end, I drove over 1,120 miles during the four days to acquire the data. I believe on Friday the 9th I was near a coma after the effort (ok, not exactly - just BRUTALLY tired). Picture yourself leaving Lake San Antonio on Friday morning at 4am, after three restless nights and 620 miles already driven, facing another 160-mile drive back home, and then, around 6:30am as you get near your house, running into the morning rush-hour traffic... We are... a very nutty crowd!
Each frame of the mosaic is only 90 minutes of luminance (6x15') and 27 minutes of RGB color (3x3x3'). I had to limit the exposure to such short times because I knew I would only have just enough clear nights this time around and even with that, I knew I could barely make it. If one thing went wrong, I wouldn't be able to finish it. This also meant that for the most part I would stay put checking the images during the capture at least every hour or so.
Every 2+ hours I'd shift to the next frame. Besides slewing to the next area, this involved taking flat images, rotating the camera, doing my "manual" plate solving (I don't do plate solves, so the realignment - including camera rotation - is done by hand), and taking test shots to make sure the camera was well positioned to cover the field of the next frame.
The small image on this page is downsampled so that it fits well in the page. The large image (the one you see when you click on the image) is downsampled from a larger version. Since the IFN is, as I mentioned earlier, just above the noise, I had to choose between leaving the image at its original size, with the obvious degradation in quality, and taking advantage of the benefits of downsampling when "fighting the noise". Even so, you can still see the image is noisy, but at this point my goal wasn't so much to produce a clean image as to capture all that is going on up there, even if that meant producing a below-average image from an aesthetic point of view. As with most of my images, however, aesthetics still played a role, and that's why care was put into preserving details, avoiding blown-out stars and galaxies, etc.
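The noise benefit of downsampling is easy to demonstrate: average-binning each 2x2 block of pixels cuts uncorrelated Gaussian noise roughly in half, since each output pixel averages four independent samples. A quick synthetic illustration (simulated flat frame, not the actual mosaic data):

```python
import numpy as np

rng = np.random.default_rng(42)
# A flat 0.5 frame with additive Gaussian noise, sigma = 0.1
img = 0.5 + rng.normal(0.0, 0.1, size=(512, 512))
# 2x2 average-binning: each output pixel is the mean of a 2x2 block
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(img.std())     # ~0.1
print(binned.std())  # ~0.05: noise halves, matching sigma / sqrt(4)
```

The same idea is why the downsampled versions of the mosaic look noticeably cleaner than the full-resolution data.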
A big challenge was joining the different frames of the mosaic, where no two frames had similar background and signal values, each had its own gradient issues, and at times even the SNR differed. This forced me to make some very careful but strong adjustments on each frame (using formulas suggested by PixInsight's Juan Conejero), apply synthetic background models, etc.
How do you bring out all this faint detail that sits just above the noise without making the image look like, well, crap? While several conventional techniques were used at different stages, it was probably the multi-scale techniques that helped me most in bringing out the fainter details while preserving the integrity of the already bright areas. No "selective stretching" was ever done to bring out the significant IFN signal - that is, I did not manually select areas that I later stretched, nor did I create masks to increase brightness selectively. While lightness-based masks were used at different stages in the processing of the image, none was used to "push" the signal of the IFN over the background. The reason is actually quite simple: if I did any "selective stretching", I would be dictating where there is IFN and where there isn't. By avoiding selective stretching, I let the data define where the IFN is and how much of it there is.
As for the color, I obviously didn't have enough data to get an accurate rendition of the color in the IFN, and what I got wasn't deep at all. I knew beforehand I wouldn't have enough time to get deep and detailed color data, so I compromised on at least getting enough color for the stars and the field, hoping the IFN would inherit some color from the background signal - which is most definitely the case here, and that's why the IFN has a brownish hue rather than the more expected bluish cast. Regardless, the IFN not only scatters blue light but also fluoresces across a broad red spectrum known as the Extended Red Emission (ERE), so the brownish hue acquired from my poor color data isn't completely off track.
Get a poster, t-shirt, mug, mousepad... with this image!
DATE: April 6-8th and 16th, 2010
PHOTO: 2x5 Mosaic (10 frames total)
Exposure each frame: L: 6 x 15', RGB: 3x3' each
Total: 19.5 hours
Focal: 385mm, f/3.6
EQUIPMENT: Imaging Scope: FSQ 106 EDX w/Reducer
Guide Camera: StarShoot Autoguider
SITE & CONDITIONS: Lake San Antonio/Dinosaur Point, California
Processing: PixInsight (80%) & Photoshop (20%)