From Pixels to Objects
To understand pixel-oriented classification of remotely sensed spectral images, we have to recall what these images really are: raster data, each cell containing the intensity of sunlight reflected from an area on the ground and recorded by a remote sensing sensor. The sensor records several bands, e.g. one for the red and one for the blue range of the spectrum.
Scatter plot: The pixel values in band A and band B are plotted. The result is a characteristic "flag".
The figure above displays such a scatter plot. Assuming that pixels belonging to the same class also have similar spectral characteristics, they have to lie close to each other in the scatter plot. With the help of different classification procedures, we try to determine the borders within this point cloud. Imagining the coordinate system with the seven axes it would need in order to depict all the spectral information given, we can see how difficult this proves to be!
A class for every pixel
Pixel-oriented classification procedures analyse every single pixel according to its spectral characteristics. The grey value of a pixel in every band is examined and compared to the spectral signatures of the other pixels. Class assignment is based either on statistical measures (unsupervised, automatic procedures) or on manually defined reference signatures for every class (supervised procedures). Shapes, edges, neighbourhoods and coherent areas are not taken into account, which results in a characteristic salt-and-pepper pattern: a single pixel may, for example, be classified as pit mining although it is completely surrounded by forest. Such inconsistencies have to be watched for whenever a pixel-oriented classification is carried out.
The following sections introduce the pixel-oriented classification procedures in detail. ISODATA represents the unsupervised classification scheme; the supervised procedures are presented using the minimum distance and maximum likelihood classification schemes as examples.
Unsupervised pixel-oriented classification
Unsupervised classification is carried out by a computer, based on statistical techniques only. It serves as a basis for unbiased analyses of trends and structures. Unsupervised classification is extremely helpful if the study area is not well-known or the number of distinguishable classes has to be determined.
The best-known unsupervised classification algorithm is the so-called ISODATA. The acronym ISODATA stands for Iterative Self-Organizing Data Analysis Technique. The researcher has to determine the number of classes only; thereafter, the computer calculates on its own which pixels belong to which class. How does it work?
This procedure assumes a normal distribution of the values in every spectral band and determines the centre of each class. Afterwards, the distance of every pixel to those centres is calculated. The pixel is assigned to the class showing the smallest distance between the class centre and the pixel.
Move the black spot - the class assignments are re-calculated.
But this doesn't end the process! Iterative means that the procedure is repeated again and again. The computer now calculates the centre of the point cloud assigned to each class, and then the procedure starts over: distance calculation, assignment of pixels to classes, re-calculation of class centres, and so on. This stops when one of the stopping conditions is met: either only a small number of pixels changes class membership, or the maximum number of iterations is exceeded. As a result of the ISODATA analysis, every pixel is assigned to the class it most closely resembles (in spectral terms). But is this the best allocation?
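The iteration described above can be sketched in a few lines of Python. This is a simplified, k-means-like version (real ISODATA additionally splits and merges classes); the function name, the initialisation and the stopping threshold are illustrative:

```python
import numpy as np

def isodata_like(pixels, n_classes, max_iter=20, change_tol=0.01):
    """Simplified ISODATA-style iteration.
    pixels: array of shape (n_pixels, n_bands)."""
    # Initial class centres: pixels spaced evenly through the data
    # (a simple deterministic choice for this sketch)
    idx = np.linspace(0, len(pixels) - 1, n_classes).astype(int)
    centres = pixels[idx].astype(float)
    labels = np.full(len(pixels), -1)
    for _ in range(max_iter):
        # Distance of every pixel to every class centre
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)           # assign to nearest centre
        changed = np.mean(new_labels != labels)
        labels = new_labels
        for k in range(n_classes):              # re-calculate class centres
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
        if changed < change_tol:                # stop: few pixels changed class
            break
    return labels, centres
```

Applied to a pixel array of shape (n, n_bands), the function returns one class label per pixel plus the final class centres.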
Comparing unsupervised classification to the original image
Because this procedure does not depend on humans, we cannot include prior knowledge that is not contained in the pixel data itself. On the one hand, the resulting impartiality is an advantage; on the other hand, classes may be separated that belong together for the research question at hand. The classification therefore has to be assessed and interpreted by the researcher afterwards in order to obtain information relevant to the research question.
Supervised pixel-oriented classification
The starting point for supervised classification is the collection of so-called training areas. These are parts of the original image chosen as examples for a class. For this, areas with known class affiliation are selected; e.g. a forested area is an appropriate training area for the class "tree", just as a lake is one for the class "water". Several training areas are needed for every class in order to separate the classes precisely and to capture their spectral variance.
The image shows exemplary training areas for four different classes. To classify an image correctly, we need more than one training area per class. We average over all training areas of one class and use this value for the assignment during the classification process.
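As a minimal sketch of this averaging step, assuming made-up two-band reflectance values and a purely illustrative class name:

```python
import numpy as np

# Hypothetical reflectance values (two bands) sampled from two
# training areas of the same class "water"
water_area_1 = np.array([[12, 40], [13, 42], [11, 41]])
water_area_2 = np.array([[14, 39], [12, 43]])

# Average over all training pixels of the class to obtain its
# reference signature, used later during classification
water_signature = np.vstack([water_area_1, water_area_2]).mean(axis=0)
print(water_signature)  # prints [12.4 41. ]
```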
Training areas - and what comes next?
After the classes have been defined spectrally, the pixels outside the training areas have to be assigned to these classes. To understand how this assignment works, we have to take a look at the reflection values of the training pixels in a scatter plot (see above).
After calculating the centres of these point clouds, two different techniques can be used to assign the remaining pixels to the right classes. Both are presented in the following section: the minimum distance and the maximum likelihood procedure.
How can pixels be assigned to classes?
One procedure for assigning pixels to classes is the minimum distance classification scheme: each pixel is assigned to the class whose centre is closest to it in spectral space.
Minimum distance calculation: Moving the black spot will cause class re-calculation.
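A minimal sketch of the minimum distance rule, with hypothetical two-band class centres and pixel values:

```python
import numpy as np

def minimum_distance_classify(pixels, class_centres):
    """Assign each pixel to the class whose spectral centre is
    closest (Euclidean distance in band space)."""
    d = np.linalg.norm(pixels[:, None, :] - class_centres[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical class centres (two bands) and pixels to classify
centres = np.array([[10.0, 40.0], [80.0, 25.0]])
pixels = np.array([[12.0, 38.0], [75.0, 30.0]])
print(minimum_distance_classify(pixels, centres))  # prints [0 1]
```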
Another technique is the maximum likelihood classification scheme. Here, the probability of belonging to each class is calculated for every pixel, and the pixel is assigned to the class with the highest probability.
Move over the points to see the explanation of class membership.
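A sketch of the maximum likelihood rule under the common assumption of normally distributed classes with equal priors; the class means and covariances below are made up for illustration. Each pixel gets the class with the highest Gaussian log-density:

```python
import numpy as np

def maximum_likelihood_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the highest multivariate
    normal density (equal class priors assumed)."""
    log_p = np.empty((len(pixels), len(class_means)))
    for j, (mu, cov) in enumerate(zip(class_means, class_covs)):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        # log of the Gaussian density, dropping the constant term
        log_p[:, j] = -0.5 * (np.log(np.linalg.det(cov))
                              + np.einsum('ni,ij,nj->n', diff, inv, diff))
    return log_p.argmax(axis=1)

# Hypothetical classes: class 0 is spectrally wide, class 1 is narrow
means = np.array([[0.0, 0.0], [3.0, 0.0]])
covs = [4.0 * np.eye(2), 0.25 * np.eye(2)]
pixels = np.array([[1.0, 0.5], [2.9, 0.1], [2.0, 1.5]])
print(maximum_likelihood_classify(pixels, means, covs))  # prints [0 1 0]
```

Note that the last pixel is closer to the centre of class 1, so the minimum distance rule would put it into class 1, while maximum likelihood chooses class 0 because class 0's distribution is much wider. This is exactly the kind of disagreement between the procedures that has to be checked afterwards.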
Our examples show that one and the same pixel can end up in different classes depending on the classification procedure. So we have to check the results and, sometimes, re-classify!
What does an image look like after supervised classification?
The example above shows an image matrix before and after classification. As you can see, the image is clearer and easier to interpret after classification.
Pixel-oriented classification aggregates pixels with the same spectral characteristics. The characteristics of a class can be determined manually or automatically. When classifying manually, different techniques can be used to aggregate the pixels: we distinguish techniques based on the spectral distance between pixels from techniques based on the probability of belonging to a certain class. Training areas serve as spectral references.