Multi-Spectral Enhancement Techniques
Image Arithmetic Operations
The operations of addition, subtraction, multiplication and division are performed on two or more co-registered images of the same geographical area. These techniques are applied to images from separate spectral bands of a single multispectral data set, or to individual bands from image data sets collected on different dates. More complicated algebra is sometimes encountered in the derivation of sea-surface temperature from multispectral thermal infrared data (the so-called split-window and multichannel techniques).
Addition of images is generally carried out in such a way that the output image has a dynamic range equal to that of the input images.
Band subtraction is sometimes carried out on co-registered scenes of the same area acquired at different times, for change detection.
Multiplication of images normally involves the use of a single "real" image and a binary image made up of ones and zeros.
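To make the arithmetic concrete, here is a minimal NumPy sketch of subtraction for change detection and multiplication by a binary mask; the two random arrays stand in for co-registered scenes, and the threshold of 50 used to build the mask is purely illustrative.

import numpy as np

# Hypothetical co-registered single-band scenes of the same area
date1 = np.random.randint(0, 256, (100, 100)).astype(float)
date2 = np.random.randint(0, 256, (100, 100)).astype(float)

change = date2 - date1                 # subtraction highlights change between dates
mask = (date1 > 50).astype(float)      # binary image of ones and zeros (threshold is illustrative)
masked = date1 * mask                  # multiplication blanks out unwanted pixels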
Band ratioing, or division of images, is probably the most common arithmetic operation and the most widely applied to images in geological, ecological and agricultural applications of remote sensing. Ratio images are enhancements resulting from the division of the DN values of one spectral band by the corresponding DN values of another band. One motivation for this is to iron out differences in scene illumination due to cloud or topographic shadow. Ratio images also bring out spectral variation between different target materials. Multiple ratio images can be used to drive the red, green and blue monitor guns for color images. Interpretation of ratio images must consider that they are "intensity blind", i.e., dissimilar materials with different absolute reflectances but similar relative reflectances in the two or more utilised bands will look the same in the output image.
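A band ratio can be computed as a guarded element-wise division, as in the following sketch; the function name and the choice of filling zero-denominator pixels with zero are illustrative.

import numpy as np

def band_ratio(band_a, band_b):
    # Divide the DN values of one band by the corresponding DN of another,
    # guarding against division by zero in the denominator band.
    a = band_a.astype(float)
    b = band_b.astype(float)
    return np.divide(a, b, out=np.zeros_like(a), where=b != 0)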
Principal Component Analysis
Spectrally adjacent bands in a multispectral remotely sensed image are often highly correlated. Multiband visible/near-infrared images of vegetated areas will show negative correlations between the near-infrared and visible red bands and positive correlations among the visible bands, because the spectral characteristics of vegetation are such that as the vigour or greenness of the vegetation increases, the red reflectance diminishes and the near-infrared reflectance increases. The presence of correlations among the bands of a multispectral image thus implies that there is redundancy in the data, and Principal Component Analysis aims at removing this redundancy.
Principal Components Analysis (PCA) is related to another statistical technique called factor analysis and can be used to transform a set of image bands such that the new bands (called principal components) are uncorrelated with one another and are ordered in terms of the amount of image variation they explain. The components are thus a statistical abstraction of the variability inherent in the original band set.
To transform the original data onto the new principal-component axes, transformation coefficients (eigenvalues and eigenvectors) are obtained and applied in a linear fashion to the original pixel values. This linear transformation is derived from the covariance matrix of the original data set. The transformation coefficients describe the lengths and directions of the principal axes. Such transformations are generally applied either as an enhancement operation or prior to classification of data. In the context of PCA, information means variance or scatter about the mean. Multispectral data generally have a dimensionality that is less than the number of spectral bands. The purpose of PCA is to define the dimensionality and to fix the coefficients that specify the set of axes pointing in the directions of greatest variability. The bands of PCA are often more interpretable than the source data.
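The following sketch illustrates this linear transformation with NumPy: the eigenvectors of the band-to-band covariance matrix are applied to the mean-centred pixel values, and the components are ordered by the amount of variance they explain. The function name and the array layout are assumptions.

import numpy as np

def principal_components(bands):
    # bands: array of shape (n_bands, rows, cols)
    nb, rows, cols = bands.shape
    X = bands.reshape(nb, -1).astype(float)
    Xc = X - X.mean(axis=1, keepdims=True)   # centre each band on its mean
    cov = np.cov(Xc)                         # n_bands x n_bands covariance matrix
    evals, evecs = np.linalg.eigh(cov)       # transformation coefficients
    order = np.argsort(evals)[::-1]          # order by variance explained, descending
    evals, evecs = evals[order], evecs[:, order]
    pcs = evecs.T @ Xc                       # linear transform onto the PC axes
    return pcs.reshape(nb, rows, cols), evals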
Decorrelation Stretch
Principal Components can be stretched and transformed back into RGB colours - a process known as decorrelation stretching.
If the data are transformed into principal-components space and stretched within this space, then the three bands making up the RGB color composite image are stretched at right angles to each other. In RGB space the three color components are likely to be correlated, so the effects of stretching are not independent for each color. The result of a decorrelation stretch is generally an improvement in the range of intensities and saturations for each color, with the hue remaining unaltered. The decorrelation stretch, like principal component analysis, can be based on the covariance matrix or the correlation matrix. The result of the decorrelation stretch is also a function of the nature of the image to which it is applied. The method seems to work best on images of semi-arid areas, and least well where the area covered by the image includes both land and sea.
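A covariance-based decorrelation stretch can be sketched as a PCA rotation, an equalising stretch of the components, and a rotation back into RGB space, as below; the output rescaling varies between implementations and the version here is only one plausible choice.

import numpy as np

def decorrelation_stretch(img):
    # img: (rows, cols, 3) three-band composite as floats
    rows, cols, nb = img.shape
    X = img.reshape(-1, nb).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    pcs = Xc @ evecs                    # rotate into principal-components space
    pcs /= pcs.std(axis=0)              # stretch each component to equal variance
    out = pcs @ evecs.T                 # rotate back into RGB space
    out = out * Xc.std(axis=0) + mean   # restore roughly the original scale
    return out.reshape(rows, cols, nb)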
Canonical Components
PCA is appropriate when little prior information about the scene is available. Canonical component analysis, also referred to as multiple discriminant analysis, may be appropriate when information about particular features of interest is available. Canonical component axes are located to maximize the separability of different user-defined feature types.
Hue, Saturation and Intensity (HIS) Transform
Hues generated by mixing red, green and blue light are characterised by coordinates on the red, green and blue axes of the color cube. In the hue-saturation-intensity hexcone model, hue is the dominant wavelength of the perceived color, represented by angular position around the top of a hexcone; saturation, or purity, is given by distance from the central vertical axis of the hexcone; and intensity, or value, is represented by distance above the apex of the hexcone. Hue is what we perceive as color. Saturation is the degree of purity of the color and may be considered to be the amount of white mixed in with the color. It is sometimes useful to convert from RGB color-cube coordinates to HIS hexcone coordinates and vice versa.
The hue, saturation and intensity transform is useful in two ways: first as a method of image enhancement, and secondly as a means of combining co-registered images from different sources. The advantage of the HIS system is that it is a more precise representation of human color vision than the RGB system. This transformation has been quite useful for geological applications.
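A direct RGB-to-HIS conversion can be sketched as follows, using the standard arccos formulation of hue; the small constants guard against division by zero, and the [0, 1] input scaling is an assumption.

import numpy as np

def rgb_to_hsi(r, g, b):
    # r, g, b: arrays scaled to [0, 1]; returns hue (radians), saturation, intensity
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-10)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.arccos(np.clip(num / np.maximum(den, 1e-10), -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)   # angle around the hexcone top
    return h, s, i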
Fourier Transformation
The Fourier Transform operates on a single-band image. Its purpose is to break down the image into its scale components, which are defined to be sinusoidal waves with varying amplitudes, frequencies and directions. The coordinates of the two-dimensional frequency space are expressed in terms of frequency (cycles per basic interval). The function of the Fourier Transform is to convert a single-band image from its spatial-domain representation to the equivalent frequency-domain representation, and vice versa.
The idea underlying the Fourier Transform is that the grey-scale values forming a single-band image can be viewed as a three-dimensional intensity surface, with the rows and columns defining two axes and the grey-level value at each pixel giving the third (z) dimension. The Fourier Transform thus provides details of (see the sketch after this list):
The frequency of each of the scale components of the image
The proportion of information associated with each frequency component
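As a sketch, the amplitude spectrum of a single-band image can be computed with NumPy's FFT routines; shifting the zero-frequency term to the centre matches the conventional display.

import numpy as np

def amplitude_spectrum(band):
    # band: 2-D array of grey-scale values (a single-band image)
    f = np.fft.fftshift(np.fft.fft2(band))   # zero frequency moved to the centre
    return np.abs(f)                          # amplitude of each frequency component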
Spatial Processing
Spatial Filtering
Spatial Filtering can be described as selectively emphasizing or suppressing information at different spatial scales over an image. Filtering techniques can be implemented through the Fourier transform in the frequency domain or in the spatial domain by convolution.
Convolution Filters
One family of filtering methods is based upon the transformation of the image into its scale or spatial-frequency components using the Fourier transform. The spatial-domain filters, or convolution filters, are generally classed as either high-pass (sharpening) or low-pass (smoothing) filters.
Low-Pass (Smoothing) Filters
Low-pass filters reveal the underlying two-dimensional waveform with a long wavelength, or low-frequency image contrast, at the expense of the higher spatial frequencies. Low-frequency information allows the identification of the background pattern and produces an output image in which the detail has been smoothed or removed from the original.
A two-dimensional moving-average filter is defined in terms of its dimensions, which must be odd, positive and integral but not necessarily equal, and its coefficients. The output DN is found as the sum of the products of the corresponding convolution-kernel and image elements, usually divided by the number of kernel elements.
In such filters the convolution kernel is a description of the PSF weights. A related technique, the median filter, instead chooses the median value from the moving window; it does a better job of suppressing noise and preserving edges than the mean filter, as illustrated below.
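Both filters are sketched below with SciPy; the random array stands in for a real single-band image, and the 3x3 window size is just an example.

import numpy as np
from scipy import ndimage

band = np.random.randint(0, 256, (512, 512)).astype(float)   # stand-in image

# 3x3 moving average: a kernel of equal weights summing to one
kernel = np.ones((3, 3)) / 9.0
smoothed = ndimage.convolve(band, kernel, mode="nearest")

# 3x3 median filter: takes the median of each moving window
median = ndimage.median_filter(band, size=3)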
Adaptive filters have kernel coefficients calculated for each window position based on the mean and variance of the original DN in the underlying image.
High-Pass (Sharpening) Filters
Simply subtracting the low-frequency image resulting from a low-pass filter from the original image can enhance high spatial frequencies. High-frequency information allows us either to isolate or to amplify the local detail. If the high-frequency detail is amplified by adding back to the image some multiple of the high-frequency component extracted by the filter, then the result is a sharper, de-blurred image.
High-pass convolution filters can be designed by representing a PSF with a positive centre weight and negative surrounding weights. A typical 3x3 Laplacian filter has a kernel with a high central value, 0 at each corner, and -1 at the centre of each edge. Such filters can be biased in certain directions for enhancement of edges.
High-pass filtering can also be performed simply using the mathematical concept of derivatives, i.e., gradients in DN throughout the image. Since images are not continuous functions, calculus is dispensed with and derivatives are instead estimated from the differences in the DN of adjacent pixels in the x, y or diagonal directions. Directional first differencing aims at emphasising edges in the image.
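The three high-pass approaches described above (low-pass subtraction with add-back, a Laplacian-style kernel, and directional first differencing) can be sketched as follows; the sharpening multiple of 2.0 and the window sizes are illustrative choices.

import numpy as np
from scipy import ndimage

band = np.random.randint(0, 256, (512, 512)).astype(float)   # stand-in image

# (a) subtract a low-pass result to isolate detail, then add a multiple back
low = ndimage.uniform_filter(band, size=3)
sharpened = band + 2.0 * (band - low)

# (b) 3x3 Laplacian-style kernel: high centre, 0 at corners, -1 at edge centres
laplacian = np.array([[ 0, -1,  0],
                      [-1,  5, -1],
                      [ 0, -1,  0]], dtype=float)
edges = ndimage.convolve(band, laplacian, mode="nearest")

# (c) directional first difference in the x direction
dx = np.zeros_like(band)
dx[:, 1:] = band[:, 1:] - band[:, :-1]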
Frequency Domain Filters
The Fourier transform of an image, as expressed by the amplitude spectrum, is a breakdown of the image into its frequency or scale components. Filtering of these components uses frequency-domain filters, which operate on the amplitude spectrum of an image and remove, attenuate or amplify the amplitudes in specified wavebands. The frequency domain can be represented as a two-dimensional scatter plot known as a Fourier spectrum, in which lower frequencies fall at the centre and progressively higher frequencies are plotted outward.
Filtering in the frequency domain consists of three steps, sketched in code after this list:
Fourier transform the original image and compute the Fourier spectrum.
Select an appropriate filter transfer function (equivalent to the OTF of an optical system) and multiply it by the elements of the Fourier spectrum.
Perform an inverse Fourier transform to return to the spatial domain for display purposes.
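The three steps can be sketched with an ideal low-pass transfer function; the function name and the hard circular cutoff are illustrative choices (practical filters often taper the cutoff to reduce ringing).

import numpy as np

def lowpass_frequency_filter(band, cutoff):
    # Step 1: forward transform; shift zero frequency to the centre
    spectrum = np.fft.fftshift(np.fft.fft2(band))
    # Step 2: ideal low-pass transfer function (1 inside the cutoff radius)
    rows, cols = band.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    spectrum *= (dist <= cutoff)
    # Step 3: inverse transform back to the spatial domain
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))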
Image Classification
Image Classification has formed an important part of the fields of Remote Sensing, Image Analysis and Pattern Recognition. In some instances the classification itself may form the object of the analysis. Digital Image Classification is the process of sorting all the pixels in an image into a finite number of individual classes. The classification process is based on the following:
Patterns of their DN, usually in multichannel data (Spectral Classification).
Spatial relationship with neighbouring pixels
Relationships between the data acquired on different dates.
Pattern Recognition, Spectral Classification, Textural Analysis and Change Detection are different forms of classification that are focused on 3 main objectives:
Detection of different kinds of features in an image.
Discrimination of distinctive shapes and spatial patterns
Identification of temporal changes in an image
Fundamentally, spectral classification forms the basis for mapping objectively the areas of the image that have similar spectral reflectance/emissivity characteristics. Depending on the type of information required, spectral classes may be associated with identified features in the image (supervised classification) or may be chosen statistically (unsupervised classification). Classification has also been seen as a means of compressing image data by reducing the large range of DN in several spectral bands to a few classes in a single image. Classification reduces this large spectral space into relatively few regions, and obviously results in loss of numerical information from the original image. There is no theoretical limit to the dimensionality used for the classification, though obviously the more bands involved, the more computationally intensive the process becomes. It is often wise to remove redundant bands before classification.
Classification generally comprises four steps:
Pre-processing, e.g., atmospheric correction, noise suppression, band ratioing, Principal Component Analysis, etc.
Training - selection of the particular features which best describe the pattern
Decision - choice of suitable method for comparing the image patterns with the target patterns.
Assessing the accuracy of the classification
The informational data are classified into two systems:
Supervised
Unsupervised
Supervised Classification
In this system the categorisation of each pixel is supervised by specifying to the computer algorithm numerical descriptors of the various class types. There are three basic steps involved in a typical supervised classification.
Training Stage
The analyst identifies the training area and develops a numerical description of the spectral attributes of the class or land cover type. During the training stage the location, size, shape and orientation of the training areas for each class are determined.
Classification Stage
Each pixel is categorised into the land-cover class it most closely resembles. If the pixel is not sufficiently similar to any of the training data, it is labeled as unknown. Mathematical approaches to spectral pattern recognition have been classified into various categories.
Measurements on Scatter Diagram
Each pixel value is plotted on a scatter diagram that indicates its class category. In the two-band case, the pair of digital values attributed to each pixel is plotted on the graph.
Minimum Distance to Mean Classifier/Centroid Classifier
This is one of the simplest classification strategies. First the mean vector for each category is determined from the average DN in each band for each class. An unknown pixel can then be classified by computing the distance from its spectral position to each category mean and assigning it to the class with the closest mean, as sketched below.
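A minimal sketch of the minimum-distance rule: compute the Euclidean distance from each pixel vector to each class mean and assign the nearest class. The two-band means below (suggestive of vegetation and bare soil) are hypothetical.

import numpy as np

def minimum_distance_classify(pixels, class_means):
    # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)          # index of the nearest class mean

# Hypothetical two-band example with two classes
means = np.array([[40.0, 180.0],         # e.g. vegetation: low red, high NIR
                  [120.0, 90.0]])        # e.g. bare soil
pix = np.array([[50.0, 170.0], [110.0, 95.0]])
labels = minimum_distance_classify(pix, means)   # -> [0, 1]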