The Officially Unofficial Reference Guide
The ImageIntegration tool combines an arbitrary number of FITS files into a single integrated (stacked) image.
Add Files: Add image files to the list of input images.
Set Reference: Make the currently selected file on the list the reference image. The reference image is the first image in the list of images to integrate. Its statistical properties will be taken as the basis to calculate normalization parameters and relative combination weights for the rest of the integrated images.
Select All: Select all input images.
Invert Selection: Invert the current selection of input images. That is, images that were selected will be unselected, and the rest of the images (those that were unselected) will be selected.
Toggle Selected: Toggle the enabled/disabled state of currently selected input images. Disabled input images will be ignored during the integration process.
Remove Selected: Remove all currently selected input images.
Clear: Clear the list of input images.
Full paths: Show the full path of each image in the list, rather than just the file name.
Combination: Select a pixel combination operation. Average combination provides the best signal-to-noise ratio in the integrated result. Median combination provides more robust rejection of outliers, but at the cost of a higher noise level.
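The trade-off can be seen in a toy stack (values hypothetical): the average is pulled toward an outlier such as a cosmic ray hit, while the median ignores it.

```python
import numpy as np

# Toy stack of five "images" (here 1-D rows of three pixels), one of
# which contains a cosmic-ray-like outlier. Values are hypothetical.
stack = np.array([
    [0.10, 0.10, 0.10],
    [0.11, 0.10, 0.09],
    [0.09, 0.11, 0.10],
    [0.10, 0.09, 0.11],
    [0.10, 0.95, 0.10],  # outlier in the middle pixel
])

mean_img = stack.mean(axis=0)          # best SNR, but pulled up by the outlier
median_img = np.median(stack, axis=0)  # robust: the outlier is ignored
```

Here the middle pixel averages to 0.27 but has a median of 0.10, which is why median combination is paired with weaker rejection needs.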
Normalization: Image normalization for combination. If one of these options is selected, ImageIntegration will normalize/scale all input images before combining them. Note that these normalization and scaling operations are independent from the similar operations performed before pixel rejection.
Normalization matches mean background values. This can be done either as an additive or as a multiplicative process. In general, both approaches lead to very similar results, but multiplicative normalization should be used to integrate images that will later be applied by multiplication or division, such as flat fields.
Scaling matches mean dispersion. This can be used as a sort of an automatic weighting to integrate images with different overall illumination. This option is disabled by default.
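A minimal sketch of the two normalization modes described above, using the median as the background estimate (the actual estimator used by ImageIntegration may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(0.20, 0.01, 1000)  # reference frame: background ~0.20
target = rng.normal(0.35, 0.01, 1000)     # target frame: brighter background

ref_bkg = np.median(reference)
tgt_bkg = np.median(target)

additive = target + (ref_bkg - tgt_bkg)        # shift backgrounds to match
multiplicative = target * (ref_bkg / tgt_bkg)  # scale backgrounds to match
```

Both results end up with the same median background as the reference; the multiplicative form preserves the ratios between pixel values, which is what matters for flat fields.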
Weights: Select an image weighting criterion. Exposure times will be retrieved from the standard EXPTIME and EXPOSURE FITS keywords (in that order).
The Noise evaluation option uses wavelet-based noise evaluation techniques to compute relative SNR values. Assuming that SNR is proportional to the square root of integration time for all input images, this is theoretically the most accurate approach for automatic image weighting.
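As a sketch of noise-based weighting, assuming (as is conventional) that each frame's weight is proportional to the inverse variance of its noise estimate; the sigma values below are hypothetical:

```python
import numpy as np

# Hypothetical per-frame Gaussian noise estimates (e.g. from a
# wavelet-based noise evaluator). Lower sigma = cleaner frame.
sigmas = np.array([0.010, 0.014, 0.020])

# Inverse-variance weights, normalized so the first (reference)
# frame has unit weight.
weights = (sigmas[0] / sigmas) ** 2
```

A frame with twice the noise of the reference thus receives one quarter of its weight.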
The Average signal strength option tries to derive relative exposures directly from statistical properties of the image. This option will not work if some images have additional illumination variations, such as sky gradients.
If you select the FITS keyword option, please specify the name of a FITS keyword to retrieve image weights. The specified keyword must be present in all input images and its value must be of numeric type.
Weight keyword: Custom FITS keyword to retrieve image weights. This is the name of a FITS keyword that will be used to retrieve image weights, if the FITS keyword option is selected as the weighting criterion.
Evaluate noise: Evaluate the standard deviation of Gaussian noise for the final integrated image. Noise evaluation uses wavelet-based techniques and provides estimates to within 1% accuracy. This option is useful to compare the results of different integration procedures. For example, by comparing noise estimates you can know which image normalization and weighting criteria lead to the best result in terms of signal-to-noise ratio improvement.
Generate a 64-bit result image: If this option is selected, ImageIntegration will generate a 64-bit floating point result image. Otherwise the integration result will be generated as a 32-bit floating point image.
Close previous images: Select this option to close existing integration and rejection map images before running a new integration process. This is useful to avoid accumulation of multiple results on the workspace when the same integration is tested repeatedly.
Buffer size (MB): This parameter defines the length of the working buffers used to read pixel rows. There is an independent buffer per input image. A reasonably large buffer size will improve performance by minimizing disk reading operations. The default value is usually appropriate.
Use file cache: By default, ImageIntegration generates and uses a dynamic cache of working image parameters, including pixel statistics and normalization data. This cache greatly improves performance when the same images are being integrated several times, for example, to find optimal pixel rejection parameters. Disable this option if for some reason you don't want to use the cache. This will force recalculation of all statistical data required for normalization, which involves loading all integrated image files from disk.
Reset file cache: If you enable this option, ImageIntegration will destroy its internal cache when this instance is executed. This implies recalculation of all statistical data required for normalization, which involves loading all integrated image files from disk.
Min/Max: The min/max method can be used to ensure rejection of extreme values. Min/max performs an unconditional rejection of a fixed number of pixels from each stack, without any statistical basis. Rejection methods based on robust statistics, such as percentile clipping, sigma clipping (with or without Winsorization), and averaged sigma clipping, are generally preferable.
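A sketch of the idea: sort each pixel stack and unconditionally drop a fixed number of values from both ends before averaging.

```python
import numpy as np

def minmax_reject_mean(stack, nlow, nhigh):
    """Average each pixel stack after unconditionally dropping the nlow
    smallest and nhigh largest values (a sketch of min/max rejection)."""
    s = np.sort(stack, axis=0)
    kept = s[nlow:stack.shape[0] - nhigh]
    return kept.mean(axis=0)

# Five-image stack for a single pixel; the last value is an outlier.
stack = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])
result = minmax_reject_mean(stack, 1, 1)  # drops 1.0 and 100.0, keeps 2, 3, 4
```

Note that the two extremes are always discarded, even when they are perfectly valid data, which is why the statistically based methods below are usually preferred.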
Percentile clipping: Percentile clipping rejection is excellent for integrating small sets of images, such as 3 to 6 images. This is a single-pass algorithm that rejects pixels outside a fixed range of values relative to the median of each pixel stack.
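One plausible formulation of this single-pass scheme (the exact definition used by ImageIntegration may differ): reject pixels whose deviation from the stack median exceeds a fixed fraction of that median.

```python
import numpy as np

def percentile_clip_mean(stack, p_low, p_high):
    """Single-pass clipping sketch: keep pixels within a fixed fractional
    range around the stack median, then average the survivors.
    Assumes at least one pixel survives in every stack."""
    med = np.median(stack, axis=0)
    keep = (stack >= med * (1 - p_low)) & (stack <= med * (1 + p_high))
    return np.where(keep, stack, 0).sum(axis=0) / keep.sum(axis=0)

# Four-image stack for one pixel; 0.50 is an outlier against a ~0.10 background.
stack = np.array([[0.10], [0.11], [0.09], [0.50]])
result = percentile_clip_mean(stack, 0.5, 0.5)
```

With clipping factors of 0.5, anything outside [0.5, 1.5] times the median is rejected, so the outlier is dropped and the survivors average to 0.10.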
Sigma clipping: The iterative sigma clipping algorithm is usually the best option to integrate more than 10 or 15 images. Keep in mind that for sigma clipping to work, the standard deviation must be a good estimate of dispersion, which requires a sufficient number of pixels per stack (the more images, the better).
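A minimal sketch of iterative sigma clipping for a single pixel stack, using the median as the center estimate (details of ImageIntegration's implementation may differ):

```python
import numpy as np

def sigma_clip_mean(values, s_low, s_high):
    """Iteratively reject values more than s_low / s_high standard
    deviations below / above the stack median, repeating until no
    further values are rejected, then average the survivors."""
    v = np.asarray(values, dtype=float)
    while v.size > 2:
        med, sigma = np.median(v), v.std()
        keep = (v >= med - s_low * sigma) & (v <= med + s_high * sigma)
        if keep.all():
            break
        v = v[keep]
    return v.mean()

# Ten-image stack with one hot pixel (60) against a ~10 background.
clipped_mean = sigma_clip_mean([10, 11, 9, 10, 12, 10, 11, 9, 10, 60], 4, 2.5)
```

The first pass inflates sigma because of the outlier; once the outlier is removed, sigma shrinks and the iteration converges, which is why the method needs enough pixels per stack for sigma to be meaningful.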
Winsorized sigma clipping: Winsorized sigma clipping is similar to the normal sigma clipping algorithm, but uses a special iterative procedure based on Huber's method of robust estimation of parameters through Winsorization. This algorithm can yield superior rejection of outliers with better preservation of significant data for large sets of images.
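A sketch of the Winsorization step, assuming the conventional 1.5-sigma cut and the 1.134 consistency correction from Huber's method; the details of ImageIntegration's implementation may differ. The robust location and scale obtained this way would then drive a normal sigma clipping pass.

```python
import numpy as np

def winsorized_sigma(values, max_iter=20):
    """Huber-style Winsorized estimate of location and scale (a sketch):
    values beyond median +/- 1.5*sigma are replaced by those bounds,
    then sigma is re-estimated with the 1.134 consistency correction,
    iterating until the scale estimate stabilizes."""
    v = np.asarray(values, dtype=float)
    med, sigma = np.median(v), v.std()
    for _ in range(max_iter):
        w = np.clip(v, med - 1.5 * sigma, med + 1.5 * sigma)
        med, new_sigma = np.median(w), 1.134 * w.std()
        if abs(new_sigma - sigma) < 1e-6:
            break
        sigma = new_sigma
    return med, sigma

loc, scale = winsorized_sigma([10, 11, 9, 10, 12, 10, 11, 9, 10, 60])
```

Because extreme values are pulled in rather than discarded, the scale estimate is far less inflated by the outlier than a plain standard deviation would be.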
Averaged sigma clipping: The averaged iterative sigma clipping algorithm is also a good option for small sets between 3 and 10 images. This algorithm tries to derive the gain of an ideal CCD detector from existing pixel data, assuming zero readout noise, then uses a Poisson noise model to perform rejection. For large sets of images, however, sigma clipping tends to be superior.
CCD noise model: The CCD noise model algorithm requires unmodified (uncalibrated) data and accurate sensor parameters. It is only intended to integrate calibration images (bias frames, dark frames and flat fields).
Normalization: Normalization is essential to perform a correct pixel rejection, since it ensures that the data from all integrated images are compatible in terms of their statistical distribution (mean background values and dispersion).
Scale+zero offset: Scale+zero offset matches mean background values and dispersion. This involves multiplicative and additive transformations. This is the default rejection normalization method that should be used to integrate calibrated images.
Equalize fluxes: Equalize fluxes simply matches the main histogram peaks of all images prior to pixel rejection. This is done by multiplying each image by the ratio of the reference median to its own median. This is the method of choice to integrate sky flat fields, since in this case trying to match dispersion does not make sense, due to the irregular illumination distribution. For the same reason, this type of rejection normalization can also be used to integrate uncalibrated images, or images suffering from strong gradients.
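The scaling described above can be sketched as follows, using the median as a stand-in for the main histogram peak (the data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(0.4, 0.6, 1000)  # reference sky flat
target = 1.8 * reference                 # same flat under brighter illumination

# Multiply by the ratio of the reference median to the image's own median,
# so the main histogram peaks line up before pixel rejection.
equalized = target * (np.median(reference) / np.median(target))
```

Only a single multiplicative factor is applied per image; no attempt is made to match dispersion, which is exactly what is wanted for irregularly illuminated frames.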
Generate rejection maps: Rejection maps have pixel values proportional to the number of rejected pixels for each output pixel. Low and high rejected pixels are represented as two separate rejection maps, which are generated as 32-bit floating point images. Use rejection maps along with console rejection statistics to evaluate performance of pixel rejection parameters.
Clip low pixels: Enable rejection of low pixels. Low pixels have values below the median of a pixel stack.
Clip high pixels: Enable rejection of high pixels. High pixels have values above the median of a pixel stack.
Min/Max low: Number of low (dark) pixels to be rejected by the min/max algorithm.
Min/Max high: Number of high (bright) pixels to be rejected by the min/max algorithm. This option and the one above are only available when the Min/Max algorithm has been selected as the rejection algorithm.
Percentile low: Low clipping factor for the percentile clipping rejection algorithm.
Percentile high: High clipping factor for the percentile clipping rejection algorithm. This option and the one above are only available when the percentile clipping algorithm has been selected as the rejection algorithm.
Sigma low: Low sigma clipping factor for the sigma clipping, Winsorized sigma clipping and averaged sigma clipping rejection algorithms.
Sigma high: High sigma clipping factor for the sigma clipping, Winsorized sigma clipping and averaged sigma clipping rejection algorithms. This option and the one above are only available when one of these algorithms has been selected as the rejection algorithm.
CCD gain: CCD sensor gain in electrons per data number (e-/ADU).
CCD readout noise: CCD readout noise in electrons.
CCD scale noise: Indicates the CCD scale noise (also known as sensitivity noise). This is a dimensionless factor. Scale noise typically comes from noise introduced during flat fielding. This and the previous two parameters are only used by the CCD noise model rejection algorithm.
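A common form of this noise model combines the three parameters above into an expected per-pixel sigma (this is the form used, for example, by IRAF's imcombine "ccdclip" rejection; ImageIntegration's exact formulation may differ, and the parameter values below are hypothetical):

```python
import math

def ccd_sigma_dn(signal_dn, gain=1.5, read_noise_e=8.0, scale_noise=0.01):
    """Expected per-pixel noise in data numbers under a simple CCD model:
    read noise + Poisson shot noise + flat-fielding (scale) noise.
    gain is in e-/ADU, read_noise_e in electrons, scale_noise dimensionless.
    All parameter defaults are hypothetical examples."""
    return math.sqrt((read_noise_e / gain) ** 2   # read noise, in DN
                     + signal_dn / gain           # shot noise variance, in DN^2
                     + (scale_noise * signal_dn) ** 2)  # sensitivity noise

sigma_dark = ccd_sigma_dn(0.0)      # read-noise-dominated regime
sigma_bright = ccd_sigma_dn(1000.0)  # shot/scale-noise-dominated regime
```

Pixels are then rejected when they deviate from the stack estimate by more than the configured number of these model sigmas, which is why accurate sensor parameters and uncalibrated data are required.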
In image processing, a region of interest (ROI) is a portion of an image that you want to filter or perform some other operation on. In ImageIntegration, the ROI is used to restrict ImageIntegration's rejection and integration tasks to a specific rectangular region. This is useful, for example, to accelerate testing of ImageIntegration parameters, as the computational complexity of those tasks is linearly proportional to the number of processed pixels.
Note that when a ROI is defined ImageIntegration still computes image statistics for the whole images (or retrieves them from its private cache of file data). In this way a ROI works for integrated files much like a preview works for an image in PixInsight.