Point Cloud Classification

Contact person: Alexander Velizhev (avelizhev@graphics.cs.msu.ru)

We develop algorithms for the analysis of 3D point clouds obtained by laser scanners (LiDARs); specifically, we address the problem of semantic segmentation, in which a class label from a pre-defined set is assigned to each point of the cloud. The point classifier is trained by means of machine learning techniques and uses local point features as well as contextual information. Variants of associative Markov networks (AMNs) have commonly been used to solve this problem. In contrast, we use the general form of Markov networks (so-called non-associative Markov networks). This model is more flexible: instead of always smoothing the labelling, it can encourage heterogeneous assignments where appropriate (e.g. tree points tend to be above ground points). The model parameters are tuned by a combination of Random Forest and structured SVM with RBF kernels. We evaluate the performance of our algorithm on data obtained with airborne and terrestrial laser scanning systems.
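The distinction can be made explicit in a sketch of the underlying pairwise MRF energy (standard notation, not taken verbatim from the papers):

```latex
E(\mathbf{y}) = \sum_{i \in \mathcal{V}} \varphi_i(y_i)
              + \sum_{(i,j) \in \mathcal{E}} \varphi_{ij}(y_i, y_j)
```

Associative models constrain each pairwise term \(\varphi_{ij}\) to reward label agreement (\(y_i = y_j\)); the non-associative form allows arbitrary pairwise tables, so particular pairs of *different* labels, such as a tree point above a ground point, can also be rewarded.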


The workflows for the learning and classification stages are similar, so we describe them in parallel. First, a spatial index is built over the train/test set via the LidarK library, which also defines an over-segmentation. Then a graph over segment medoids is built. After that, features for the unary and pairwise potentials are computed. We train/apply a Random Forest classifier on a subsampled set of local point features from the train/test set. There are two options for proceeding further. A fast but less accurate method is to estimate the pairwise potentials with naïve Bayes classification (see the PCV 2010 paper). Alternatively, a kernelized structured SVM is trained to discover the dependency of the MRF unary and pairwise potentials on the Random Forest outputs and the pairwise features, respectively. In this case the dependency between point labels is taken into account during training, which leads to a more accurate model (3DIMPVT 2011 paper). At the test stage, the TRW-S algorithm is used for MAP inference in both models.
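The final inference step can be sketched in a few lines of Python. This is a minimal stand-in, not the actual implementation: it uses iterated conditional modes (ICM) instead of TRW-S, synthetic unary scores instead of Random Forest outputs, and a hand-written pairwise table; all names are hypothetical.

```python
import numpy as np

def icm_map_inference(unary, edges, pairwise, n_iters=20):
    """Approximate MAP labelling of a pairwise MRF by iterated
    conditional modes (a simple stand-in for TRW-S).

    unary:    (n_nodes, n_labels) scores, higher = better
    edges:    list of (i, j) node-index pairs
    pairwise: (n_labels, n_labels) symmetric compatibility table;
              arbitrary entries are allowed (the non-associative
              case), not just an agreement bonus on the diagonal.
    """
    n_nodes, _ = unary.shape
    labels = unary.argmax(axis=1)          # start from unary-only labelling
    nbrs = [[] for _ in range(n_nodes)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iters):
        changed = False
        for i in range(n_nodes):
            scores = unary[i].copy()
            for j in nbrs[i]:
                scores += pairwise[:, labels[j]]  # interaction with neighbour's label
            best = scores.argmax()
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:                     # converged to a local optimum
            break
    return labels

# Toy example: a 3-node chain with two labels; the middle node's
# weak unary preference is overridden by its neighbours.
unary = np.array([[2.0, 0.0], [0.5, 0.6], [2.0, 0.0]])
edges = [(0, 1), (1, 2)]
pairwise = np.array([[1.0, 0.0], [0.0, 1.0]])  # Potts-like agreement bonus
labels = icm_map_inference(unary, edges, pairwise)
```

ICM is far weaker than TRW-S (it can get stuck in local optima and provides no lower bound), but the data flow is the same: unary scores per node, a pairwise table per edge, and an approximate MAP labelling out.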


The airborne LiDAR scans used in our PCV 2010 paper:
- Dataset A (zipped asc)
- Dataset B (zipped asc)
- File format description (plain text)
Both data sets are divided into train and test parts of approximately equal size (~1M points each). Points are stored in text format: each line contains the three real-valued coordinates of one point. In the train sets, a fourth number in each line indicates the class label.
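A scan file in the format just described can be loaded with NumPy; `load_scan` is a hypothetical helper, and the two-line sample stands in for an actual data file.

```python
import numpy as np
from io import StringIO

def load_scan(path_or_file, labelled=False):
    """Load a scan in the plain-text format described above:
    one point per line, three real-valued coordinates, and
    (for train sets) a fourth integer class label."""
    data = np.loadtxt(path_or_file)
    if labelled:
        return data[:, :3], data[:, 3].astype(int)
    return data

# Hypothetical two-line excerpt of a labelled train file
sample = StringIO("1.0 2.0 3.0 4\n5.0 6.0 7.0 2\n")
pts, labels = load_scan(sample, labelled=True)
```

For a real file, pass the path instead of a `StringIO` object; `np.loadtxt` accepts both.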
The aerial and road scans used in our 3DIMPVT 2011 paper, along with the results of the methods used for comparison:
- Supplementary material (zipped ascs)
The aerial train and test parts consist of approximately 100K points each; the road scans contain about 400K and 1M points, respectively. For each method used in the experiments, the inferred and ground-truth labellings are saved, along with an RGB representation of the inferred class labels. For information about the file formats, see the readme file inside the archive.


In comparison to associative Markov networks, our method produces more accurate results without over-smoothing artifacts:

Results on a part of the "dataset A" airborne scan:

Legend: red — ground, black — building, navy — car, green — tree, cyan — low-vegetation.


Point Cloud Library — an open-source library for 3D point cloud processing