Thursday, April 14, 2016

Object-Based Classification

Introduction:

Object-based classification methods are among the most powerful methods currently in general use by the remote sensing community. They are substantially more robust than per-pixel methods because classification is based not only on pixel values but also on object properties such as texture, compactness, and smoothness. In this lab, I explored two different object-based classification methods: Random Forest and Support Vector Machines (SVM). Random Forest builds a specified number of decision trees (I used 300), each trained on a random subset of the training data, and assigns each pixel the class chosen by the majority of the trees. Support Vector Machines determines decision boundaries that produce optimal separation between classes through multiple iterations.
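
To illustrate the majority-vote idea, here is a minimal Python sketch. The class names, vote proportions, and tree count behavior are purely hypothetical and this is not the software used in the lab; it only shows how 300 individual "votes" collapse into one class label.

import numpy as np

# Hypothetical example: 300 trees each vote for a class label at one pixel.
rng = np.random.default_rng(0)
tree_votes = rng.choice(["forest", "water", "urban"], size=300, p=[0.6, 0.1, 0.3])

# The pixel is assigned the class chosen by the majority of the trees.
labels, counts = np.unique(tree_votes, return_counts=True)
print(labels[counts.argmax()])  # "forest" in this toy case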

Methods:

Before either classifier could be used, image segments (objects) were created from a Landsat 7 image. The objects were created with a shape/color weighting of 30%/70% and a compactness/smoothness weighting of 50%/50%. Once the segments were created, training samples were selected from them. After enough training samples had been collected, the images were classified.
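
For readers who want to experiment outside the lab software, a rough open-source analogue of the segmentation step is sketched below with scikit-image's SLIC algorithm. This is an assumption, not the tool used in the lab: SLIC exposes only a single compactness weight rather than separate shape/color and compactness/smoothness percentages, and the image here is a random stand-in for a Landsat 7 scene.

import numpy as np
from skimage.segmentation import slic

# Toy stand-in for a Landsat 7 scene: 6 reflective bands, 500 x 500 pixels.
image = np.random.rand(500, 500, 6).astype(np.float32)

# One compactness weight instead of the lab's 30%/70% and 50%/50% settings;
# channel_axis=-1 assumes scikit-image >= 0.19.
segments = slic(image, n_segments=2000, compactness=0.1, channel_axis=-1)

print(len(np.unique(segments)))  # approximate number of objects produced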

Before Random Forest classification could be performed, I created a training function to identify the classification parameters. The Random Forest classifier was then created and trained with a maximum of 300 trees (individual classifiers). Finally, the classified image was generated using the parameters identified by the training function.
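
A minimal scikit-learn sketch of this step is shown below, assuming hypothetical per-segment attribute vectors and labels in place of the lab's actual training samples; only the 300-tree setting comes from the lab.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row of segment attributes (e.g. mean band
# values and texture statistics) per training sample, with a class label each.
X_train = np.random.rand(200, 8)
y_train = np.random.randint(0, 5, size=200)

# 300 trees, mirroring the maximum number of classifiers used in the lab.
rf = RandomForestClassifier(n_estimators=300, random_state=42)
rf.fit(X_train, y_train)

# Classify every image object from its attribute vector.
X_segments = np.random.rand(2000, 8)
predicted_classes = rf.predict(X_segments)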

Before SVM classification could be performed, I created a training function to identify the classification parameters. The Support Vector Machines classifier was created and trained using a linear kernel. The classified image was then generated using the parameters identified by the training function.
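
The equivalent scikit-learn sketch for this step follows, again with hypothetical training data; only the linear kernel choice comes from the lab.

import numpy as np
from sklearn.svm import SVC

# Hypothetical per-segment attributes and labels, as in the Random Forest sketch.
X_train = np.random.rand(200, 8)
y_train = np.random.randint(0, 5, size=200)

# A linear kernel, matching the kernel chosen for the lab's SVM classifier.
svm = SVC(kernel="linear", C=1.0)
svm.fit(X_train, y_train)

X_segments = np.random.rand(2000, 8)
predicted_classes = svm.predict(X_segments)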

Results:

The Random Forest classifier produced a substantially more accurate output than the Support Vector Machines classifier. The disparity between the two is most clearly visible in the heavily built-up parts of Eau Claire, which SVM classified as ‘bare earth’ (Figure 1). Because SVM relies on a single classifier, it is more prone to this kind of error than Random Forest, whose output is decided by a majority vote across 300 individual classifiers.

Figure 1: Random Forest vs. Support Vector Machines
Random Forest performed much better in urban areas than SVM.
