PixInsight 1.5

The Officially Unofficial Reference Guide

Rev.0.1 – 3/29/2010

Section 6: Deconvolutions

Deconvolution

In the imaging world, deconvolution is the process of reversing the optical distortion that takes place during data acquisition, thus creating clearer, sharper images. Deconvolution works by undoing the smearing effect caused to an image by a previous convolution with a given PSF (Point Spread Function).

The Deconvolution tool is PixInsight's implementation of Richardson-Lucy and Van Cittert deconvolution algorithms, plus wavelet-based regularization and deringing algorithms. Regularized deconvolution works by separating significant image structures from the noise at each deconvolution iteration. Significant structures are kept and the noise is discarded or attenuated. This allows for simultaneous deconvolution and noise reduction, which leads to robust deconvolution procedures that yield greatly improved results when compared to traditional or less sophisticated implementations. Unless used for a purpose other than true deconvolution, the deconvolution tool should only be used on linear images.

For a detailed theoretical and practical explanation of deconvolution, please visit http://pixinsight.com/examples/deconvolution/Gemini-NGC5189/en.html
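To illustrate the basic idea outside of PixInsight, the following Python sketch (using NumPy, SciPy and scikit-image as stand-ins; it is not PixInsight's implementation) convolves an image with a known Gaussian PSF to simulate the acquisition blur, and then recovers it with Richardson-Lucy deconvolution:

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import data, restoration

    image = data.camera().astype(float) / 255.0      # any linear grayscale image

    # Build a small Gaussian PSF (the "smearing" kernel).
    sigma = 2.0
    y, x = np.mgrid[-10:11, -10:11]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    psf /= psf.sum()

    blurred = fftconvolve(image, psf, mode='same')              # simulated acquisition
    restored = restoration.richardson_lucy(blurred, psf, 30)    # 30 RL iterations

The closer the assumed PSF is to the distortion that actually took place during acquisition, the better the recovered result.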

PSF

PixInsight provides three ways to define the type of PSF for the deconvolution algorithms: Gaussian, motion blur, and external.

Gaussian PSF

This is the most commonly used PSF, as it models the most common convolution distortions found in astronomical images, such as those caused by atmospheric turbulence.

StdDev.: The value for the standard deviation of the PSF distribution.

Shape: It controls the kurtosis of the PSF distribution, or, in other words, the peakedness or flatness of the PSF's profile. When the shape value is smaller than 2, we have a leptokurtic PSF with a prominent central peak. When this value is greater than 2, the PSF is platykurtic, with a flatter profile. When this value is equal to 2, we have a pure normal (or Gaussian) distribution. Strictly speaking, when this value is not 2 we are not defining a Gaussian distribution at all; however, we informally speak of the Gaussian family of PSFs because their formulations are almost identical.

Aspect ratio: Aspect ratio of the PSF (vertical/horizontal aspect ratio).

Rotation: Rotation angle of the distorted PSF in degrees. It is only active when the value for the aspect ratio is smaller than 1.
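The four parameters above define a family of elliptical, optionally rotated profiles. The Python sketch below shows one common formulation of this family; it is only an illustration, and PixInsight's exact expression and normalization may differ:

    import numpy as np

    def gaussian_family_psf(size=21, stddev=2.0, shape=2.0,
                            aspect_ratio=1.0, rotation_deg=0.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        theta = np.radians(rotation_deg)
        # Rotate the coordinate system, then squeeze the vertical axis.
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = (-x * np.sin(theta) + y * np.cos(theta)) / aspect_ratio
        r = np.hypot(xr, yr)
        psf = np.exp(-0.5 * (r / stddev) ** shape)   # shape = 2 -> pure Gaussian
        return psf / psf.sum()

With shape values below 2 the profile becomes more peaked (leptokurtic), and with values above 2 it flattens (platykurtic), matching the behavior described above.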

Motion Blur PSF

Use the Motion Blur PSF in cases where you have tracking errors parallel to the x or y axis of the chip or similar situations that generated unidirectional motion blur distortions.

Length: Length of the motion blur PSF, in pixels.

Angle: Rotation angle of the motion blur PSF, in degrees.
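Conceptually, this PSF is just a one-pixel-wide line segment of the given length, rotated to the given angle. A minimal Python sketch of such a kernel (an illustration only, not PixInsight's construction):

    import numpy as np
    from scipy.ndimage import rotate

    def motion_blur_psf(length=9, angle_deg=0.0):
        size = length if length % 2 else length + 1          # odd kernel size
        psf = np.zeros((size, size))
        start = (size - length) // 2
        psf[size // 2, start:start + length] = 1.0           # horizontal line segment
        psf = rotate(psf, angle_deg, reshape=False, order=1) # rotate to the given angle
        psf = np.clip(psf, 0.0, None)                        # keep the kernel non-negative
        return psf / psf.sum()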

External PSF

Use this option when you want to define the PSF from an existing image. In theory, the image of a star is the best option, but in practice the results may not be good; experiment by modifying it with morphological filters, curves and histogram adjustments. It is also important that the star be very well centered in the image used as the PSF; otherwise the deconvolved image will be shifted.

View Identifier: The view (image) selected to define the external PSF.
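The following Python sketch (a hypothetical helper, shown only to illustrate the background subtraction, centering and normalization requirements) prepares such an external PSF from a small crop around an isolated star:

    import numpy as np

    def star_to_psf(image, x, y, radius=15):
        crop = image[y - radius:y + radius + 1,
                     x - radius:x + radius + 1].astype(float)
        crop -= np.median(crop)              # rough background subtraction
        crop = np.clip(crop, 0.0, None)
        # The centroid should fall on the central pixel of the crop;
        # otherwise the deconvolved image will be shifted.
        yy, xx = np.mgrid[0:crop.shape[0], 0:crop.shape[1]]
        cy = (yy * crop).sum() / crop.sum()
        cx = (xx * crop).sum() / crop.sum()
        assert abs(cy - radius) < 0.5 and abs(cx - radius) < 0.5, "star not centered"
        return crop / crop.sum()             # normalize to unit total flux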

Algorithms

In this section we define the deconvolution algorithm we wish to apply. PixInsight provides two options and their regularized versions:

Richardson-Lucy: In general, Richardson-Lucy is the algorithm of choice for deconvolution of deep-sky images.

Van Cittert: The Van Cittert algorithm is extremely efficient for deconvolution of high-resolution lunar and planetary images due to its ability to enhance very small image structures.

Regularized Richardson-Lucy: Regularized version of the Richardson-Lucy algorithm (read the regularization section below to learn more about regularization).

Regularized Van Cittert: Regularized version of the Van Cittert algorithm (read the regularization section to learn more about regularization).

Iterations: Maximum number of deconvolution iterations.

Luminance: Apply the deconvolution only to the luminance of the target image.

Linear: Use the CIE Y component instead of the CIE L* as the luminance of the target image. Enable this option to deconvolve the luminance of a linear RGB color image (no separate luminance); examples are DSLR and OSC CCD images. Disable this option to deconvolve the original luminance of an LRGB linear image. In all cases, a linear RGBWS must be used (gamma=1.0).

FFT: Minimum PSF size to use FFT (Fast Fourier Transform) convolutions.
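For reference, the textbook (non-regularized) forms of the two iterations can be sketched in Python as follows; PixInsight's implementation adds wavelet regularization and deringing on top of these updates:

    import numpy as np
    from scipy.signal import fftconvolve

    def van_cittert(blurred, psf, iterations=20, beta=1.0):
        estimate = blurred.copy()
        for _ in range(iterations):
            residual = blurred - fftconvolve(estimate, psf, mode='same')
            estimate = estimate + beta * residual         # additive correction
        return estimate

    def richardson_lucy(blurred, psf, iterations=20, eps=1e-12):
        estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(iterations):
            ratio = blurred / (fftconvolve(estimate, psf, mode='same') + eps)
            estimate = estimate * fftconvolve(ratio, psf_flipped, mode='same')
        return estimate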

Deringing

PixInsight offers two deringing algorithms in Deconvolution: global and local deringing. Global deringing is similar to the deringing features used in other process tools such as ATrousWaveletTransform, UnsharpMask and RestorationFilter. Local deringing improves protection around small-scale, high-contrast features and requires a deringing support image, which is basically a star mask when working with deep-sky images.

Global dark: Global deringing regularization strength to correct dark ringing artifacts.

Global bright: Global deringing regularization strength to correct bright ringing artifacts.

Local deringing: Enable this option to apply deringing by using a deringing support image, usually a star mask.

Local support: Specify the identifier of an existing view to be used as the deringing support image. It must be a gray scale image with the same dimensions as the target image. The deringing support is optional; if you don't specify it, the global deringing algorithm will be applied uniformly to the whole image. The deringing support allows you to use a star mask to drive a local deringing algorithm that can enhance protection of stars and other high-contrast, small scale image structures.

Local amount: Local deringing regularization strength. This value will multiply the deringing support image (internally only; the support image is not modified at all). This way, you can modulate the local deringing effect.

Wavelet regularization

These parameters define how the algorithms perform separation between significant image structures and the noise at each deconvolution iteration, and how noise is controlled and suppressed during the whole procedure.

Noise model: The regularization algorithms assume a dominant statistical distribution of the noise in the image. By default, Gaussian white noise is assumed, but you can select a Poisson distribution instead. In general, you'll see little difference, if any, between the results obtained under the two noise models.

Wavelet layers: This is the number of regularization wavelet layers used to decompose the data at each deconvolution iteration. This parameter can vary between one and four layers, but you should keep it as low as possible to cut noise propagation well without destroying significant structures. In most cases the default value of two layers is appropriate.

Next to the wavelet layers parameter, you can specify the wavelet scaling function to be used. This identifies a low-pass kernel filter used to perform the wavelet transforms. The default B3 Spline function is the best choice in most cases. A sharper function, such as Linear, can be used to gain more control over small-scale noise, if necessary. The Small-Scale function is experimental as of this writing.

Noise threshold: Regularization thresholds in sigma units. In other words, this specifies a limiting value such that only pixels below it are considered as pertaining to the noise in a given wavelet layer. The higher the threshold value, the more pixels will be treated as noise at the characteristic scale of the wavelet layer in question (either 1, 2, 4 or 8 pixels); that is, larger thresholds apply noise reduction to more structures at each wavelet scale. The rows hold the noise threshold values for wavelet layers one through five, respectively. Only the rows indicated by the wavelet layers parameter are applicable.

Noise reduction: Regularization strength per iteration. Its value represents the strength of the noise reduction applied to noisy structures in a wavelet layer. A value of one means that all noise structures will be completely removed; smaller values will attenuate but not remove them, and a value of zero means no noise reduction at all. The rows hold the noise reduction values for wavelet layers one through five, respectively. Only the rows indicated by the wavelet layers parameter are applicable.

Convergence: Automatic convergence limit in differential sigma units. A property of regularized deconvolution is that the standard deviation of the deconvolved image tends to decrease during the whole process. When the difference in standard deviation between two successive iterations is smaller than the convergence parameter value, or when the maximum number of iterations is reached —whichever happens first—, then the deconvolution procedure terminates. So when this parameter is zero (the default value), there is no convergence limit and the deconvolution process will perform the specified maximum number of iterations.

Disabled: Disables automatic convergence; that is, it instructs PixInsight to perform the specified maximum number of iterations (recommended).
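The following Python sketch illustrates the concept behind these parameters: wavelet layers are extracted with the B3 spline scaling function (à trous scheme), and pixels below the noise threshold are attenuated by the noise reduction amount. This is a conceptual illustration only, not PixInsight's actual regularization code:

    import numpy as np
    from scipy.ndimage import convolve

    B3 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0   # B3 spline kernel

    def regularize(image, layers=2, thresholds=(3.0, 2.0), amounts=(1.0, 0.7)):
        smooth_prev = image.astype(float)
        result = smooth_prev.copy()
        for j in range(layers):
            # 'A trous': dilate the kernel by inserting zeros between its taps.
            k = np.zeros((4 * 2**j + 1, 4 * 2**j + 1))
            k[::2**j, ::2**j] = B3
            smooth = convolve(smooth_prev, k, mode='nearest')
            layer = smooth_prev - smooth                        # wavelet layer j+1
            sigma = np.std(layer)                               # noise estimate
            noise = np.abs(layer) < thresholds[j] * sigma       # sub-threshold pixels
            result -= np.where(noise, amounts[j] * layer, 0.0)  # attenuate them
            smooth_prev = smooth
        return result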

Dynamic Range Extension

Use these sliders to increase the range of values that are kept and rescaled to the [0,1] standard range, and adjust for saturation during the deconvolution process.

These two parameters can be used, among other things, to help palliate the normal saturation effect that results from concentrating the flux of stars and other features when applying the deconvolution, something that can also be aided by a simple star mask. For example, by increasing the high range extension parameter, Deconvolution will have a wider dynamic range to accommodate brightened pixels. The caveat is that the resulting dynamic range will be larger, which yields a darker image.

Low Range: Shadows dynamic range extension.

High Range: Highlights dynamic range extension.
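As a rough sketch of the idea (an assumed linear mapping, not necessarily PixInsight's exact formula), extending the range simply remaps the interval [-low, 1 + high] back into the standard [0,1] range:

    def extend_range(x, low_ext=0.0, high_ext=0.1):
        # Pixels in [-low_ext, 1 + high_ext] are mapped back into [0, 1].
        return (x + low_ext) / (1.0 + low_ext + high_ext)

Because every pixel value is divided by a factor greater than one, the extended image comes out darker, which is the caveat mentioned above.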

RestorationFilter

The RestorationFilter process allows you to select between the Wiener and Constrained Least Squares algorithms to perform one-step, frequency-domain image restoration filtering. Both algorithms are well described in the literature. They are ideal for the restoration of lunar and planetary images, and also serve as general-purpose image restoration tools.

RestorationFilter offers the same PSF options as the Deconvolution process window.
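A minimal Python sketch of this kind of one-step, frequency-domain restoration, using scikit-image's Wiener filter as a stand-in for PixInsight's implementation:

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import data, restoration

    image = data.moon().astype(float) / 255.0

    # Gaussian PSF used both to simulate the blur and to restore it.
    sigma = 1.5
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    psf /= psf.sum()

    blurred = fftconvolve(image, psf, mode='same')
    # 'balance' plays the role of the noise-related regularization parameter.
    restored = restoration.wiener(blurred, psf, balance=0.01)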

PSF

PixInsight provides three ways to define the type of PSF for the deconvolution algorithms: Gaussian, motion blur, and external.

Gaussian PSF

This is the most commonly used PSF, as it models the most common convolution distortions found in astronomical images, such as those caused by atmospheric turbulence.

StdDev.: The value for the standard deviation of the PSF distribution.

Shape: It controls the kurtosis of the PSF distribution, or, in other words, the peakedness or flatness of the PSF's profile. When shape < 2, we have a leptokurtic PSF with a prominent central peak. When shape > 2, the PSF is platykurtic, with a flatter profile. When shape = 2, we have a pure normal (or Gaussian) distribution.

Aspect ratio: Aspect ratio of the PSF (vertical/horizontal aspect ratio).

Rotation: Rotation angle of the distorted PSF in degrees. It is only active when the value for the aspect ratio is smaller than 1.

Motion Blur PSF

Use the Motion Blur PSF in cases where you have tracking errors parallel to the x or y axis of the chip or similar situations that generated unidirectional motion blur distortions.

Length: Length of the motion blur PSF, in pixels.

Angle: Rotation angle of the motion blur PSF, in degrees.

External PSF

Use this option when you want to define the PSF from an existing image. In theory, the image of a star is the best option, but in practice the results may not be good; experiment by modifying it with morphological filters, curves and histogram adjustments. It is also important that the star be very well centered in the image used as the PSF; otherwise the deconvolved image will be shifted.

View Identifier: The view (image) selected to define the external PSF.

Noise Estimation

γ: Decrease this value to increase the filtering strength. Following this value there are two other parameters you can use to specify fine and coarse adjustments to the noise estimation, respectively.

Filter Parameters

Algorithm: Select which algorithm you want to use for the restoration filtering: Wiener filtering or Constrained Least Squares filtering.

Amount: Filtering amount. A value of 1 will apply the filter at its maximum strength. Smaller values will decrease the strength of the filtering.

Luminance: Apply the restoration filter only to the luminance of the target image.

Linear: Use the CIE Y component instead of the CIE L* as the luminance of the target image. Enable this option to deconvolve the luminance of a linear RGB color image (no separate luminance); examples are DSLR and OSC CCD images. Disable this option to deconvolve the original luminance of an LRGB linear image. In all cases, a linear RGBWS must be used (gamma=1.0).

Deringing

Dark: Deringing regularization strength to correct dark ringing artifacts.

Bright: Deringing regularization strength to correct bright ringing artifacts.

Output deringing maps: Generate an image window for each deringing map image. New image windows will be created for the dark and bright deringing maps, if the corresponding amount parameters are nonzero.

Dynamic Range Extension

The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. Use the following two parameters to define different dynamic range limits. You can control both the low and high range extension values independently.

Low Range: Shadows dynamic range extension

High Range: Highlights dynamic range extension


