Generally, the classification of image data is needed to group small areas with similar reflection values into larger classes. This allows us to highlight useful information and ignore less relevant detail. By doing this, we can group flooded gravel pits, streams and rivers into one class named water, or living areas and parking lots into the class constructed area. Interpretation then becomes quite easy!
Swipe view of Dubai: in the interactive version, the classified image can be slid over the satellite image to compare the two.
Basically, two types of classification can be distinguished to address this problem: unsupervised classification, which is carried out almost automatically (see right-hand side), and supervised classification (left-hand side).
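To make the unsupervised idea concrete, here is a minimal sketch of k-means clustering applied to pixel reflection values. Everything here (the function name, the toy two-band pixels, the farthest-point initialisation) is an illustrative assumption, not a description of any particular remote-sensing software.

```python
import numpy as np

def kmeans_classify(pixels, k, iters=10):
    """Minimal unsupervised classification: group pixels into k classes
    by k-means clustering of their reflection values (toy sketch)."""
    pixels = np.asarray(pixels, dtype=float)
    # farthest-point initialisation keeps the sketch deterministic
    centres = [pixels[0]]
    for _ in range(k - 1):
        dist = np.min(np.linalg.norm(pixels[:, None] - np.array(centres)[None], axis=2), axis=1)
        centres.append(pixels[np.argmax(dist)])
    centres = np.array(centres)
    for _ in range(iters):
        # assign every pixel to its nearest class centre
        labels = np.argmin(np.linalg.norm(pixels[:, None] - centres[None], axis=2), axis=1)
        # move each centre to the mean of the pixels assigned to it
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels
```

A supervised procedure would differ only in where the class centres come from: instead of being found automatically, they would be computed from training areas that the analyst marks by hand.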
Comparing supervised and unsupervised classification.
Both classification types can be carried out in a pixel-oriented or object-oriented way. Pixel-oriented procedures examine the spectral characteristics of every single pixel and assign pixels with similar characteristics in texture and colour to the same class. Object-oriented classification mimics human perception: the satellite image is divided into classes by examining neighbourhood relations. The rule is: in all likelihood, a pixel belongs to the same class as its adjacent pixels. At the same time, its spectral characteristics clearly set it apart from other classes.
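The neighbourhood rule above can be sketched in a few lines. The following toy majority filter (an assumed, simplified stand-in for real object-oriented segmentation) reassigns each pixel to the most frequent class in its 3x3 neighbourhood, which removes isolated "salt-and-pepper" pixels:

```python
import numpy as np

def majority_smooth(label_map):
    """One 3x3 majority-filter pass over a pixel-wise classification:
    each pixel takes the most frequent class among its neighbours
    (a toy sketch of the 'neighbourhood relations' idea, not a full
    object-oriented segmentation)."""
    label_map = np.asarray(label_map)
    h, w = label_map.shape
    out = label_map.copy()
    for y in range(h):
        for x in range(w):
            window = label_map[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            classes, counts = np.unique(window, return_counts=True)
            out[y, x] = classes[np.argmax(counts)]
    return out
```

Applied to a classified map, a single stray pixel surrounded by another class is absorbed into its neighbourhood, producing the connected areas typical of object-oriented results.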
Comparing a pixel-oriented and an object-oriented classification. Don't be surprised: the object-based classification has not distinguished between constructed area (red) and sealed surface (black).
Object-oriented classification results in connected areas of pixels assigned to the same class, with distinct class boundaries, whereas pixel-oriented classification shows a salt-and-pepper pattern near class borders.
The table below includes some examples of classification procedures. They will be explained on the next pages.
Is classification perfect? No, it certainly is not. In order to find out how many and which errors occurred in the classified satellite data set, a so-called accuracy assessment is carried out.
The simplest way of determining the classification quality is visual comparison. Is the result logical? Can a small gravel pit be situated in the middle of a jungle? Is it consistent that a pixel in the inner city is classified as "desert"? Based on our own experience, we gain a first impression of whether the classification is plausible or not. Using further information, e.g. aerial images with a particularly high resolution, pixels can be evaluated more reliably.
Example of classification errors: looking closely at the image and comparing it to an aerial image, we can spot the false islands in the sea.
Computer-aided accuracy assessment selects pixels automatically using statistical parameters. The researcher then compares those pixels with other sources of information. If a pixel is assigned to the right class, that is a point in favour of the classification at hand; if the classification is wrong, we tell the program which class the pixel actually belongs to. After all chosen pixels have been compared with other sources of information, we obtain a table listing, for each class, the number of pixels checked, how many were assigned correctly, and how many belong to another class. With this table, we can determine how accurate the classification is and whether one or more classes have to be redefined.
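The table described above is commonly arranged as an error (confusion) matrix. As a hedged sketch, assuming the checked pixels are given as two lists of class numbers (reference vs. classified), it could be computed like this:

```python
import numpy as np

def accuracy_table(reference, predicted, n_classes):
    """Cross-tabulate reference class vs. classified class for the
    checked pixels and report the overall accuracy
    (correctly classified pixels / all checked pixels)."""
    table = np.zeros((n_classes, n_classes), dtype=int)
    for ref, pred in zip(reference, predicted):
        table[ref, pred] += 1  # row = true class, column = assigned class
    overall = np.trace(table) / table.sum()  # diagonal = correct pixels
    return table, overall
```

The diagonal of the table holds the correctly classified pixels; large off-diagonal entries point to classes that may have to be redefined.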
The result of the accuracy assessment depends on how many pixels are checked and where they are located. If the check pixels lie close together, they are likely to fall into the same, possibly well-classified region, which makes the assessment look better than it is. Therefore, the pixels have to be distributed evenly over the image.
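One simple way to avoid clustered check pixels is to draw them at random, without repetition, from the whole image. This is a minimal sketch (simple random sampling; real workflows often use stratified schemes instead):

```python
import numpy as np

def sample_check_pixels(height, width, n, seed=0):
    """Draw n distinct check-pixel positions spread at random over the
    whole image, so the accuracy check is not clustered in one corner."""
    rng = np.random.default_rng(seed)
    flat = rng.choice(height * width, size=n, replace=False)
    return np.stack([flat // width, flat % width], axis=1)  # (row, col) pairs
```

Each returned (row, col) pair would then be compared against a reference source such as a high-resolution aerial image.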
The classification of image data aggregates small areas with similar reflection values into larger classes. This can be done automatically or manually, pixel-oriented or object-oriented. As always, it is important to assess the accuracy of the classification product.