Real-time Markov random field based ground segmentation of 3D Lidar data
ZHU Zhu, LIU Ji-lin |
Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China |
|
|
Abstract A graph-based ground segmentation approach was presented to perform real-time, high-quality ground segmentation of 3D Lidar data in different kinds of scenes. After filtering erroneous 3D points and correcting the position and pose of the point cloud, the algorithm first segmented the projection of each scan line on the x-y plane into maximal blurred line segments, and precisely located the segment endpoints by dominant-point detection. Taking advantage of the Lidar's original data structure, an undirected graph with line segments as nodes was built to form a Markov random field. A potential function was calculated by analyzing line segment features, including length, gradient, and the distance, angle and vertical displacement between adjacent segments. The energy function was then minimized by graph cut, and all line segments were labeled with one of two categories (ground or obstacle). Experiments were conducted on both flat urban roads and rough rural roads. The results demonstrate that the proposed algorithm achieves higher ground segmentation accuracy than existing methods and higher stability in bumpy rural areas.
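The pipeline above labels line segments as ground or obstacle by minimizing an MRF energy over a graph of adjacent segments. As a minimal illustration of that labeling step only, the sketch below solves a binary MRF exactly on a single scan line (a chain of adjacent segments) with dynamic programming rather than graph cut; the feature set and the `unary`/`pairwise` potentials are simplified assumptions for illustration, not the paper's actual formulation.

```python
import math
from dataclasses import dataclass

@dataclass
class Segment:
    # Hypothetical per-segment features (names assumed, not from the paper).
    length: float    # segment length in metres
    gradient: float  # slope of the fitted line segment
    height: float    # mean z of the segment's points

def unary(seg, label):
    # label 0 = ground, 1 = obstacle; flat, low segments are cheap to call ground.
    if label == 0:
        return abs(seg.gradient) + max(0.0, seg.height)
    return 1.0 / (1.0 + abs(seg.gradient) + max(0.0, seg.height))

def pairwise(a, b, la, lb):
    # Smoothness term: adjacent segments with similar height prefer the same label.
    if la == lb:
        return 0.0
    return math.exp(-abs(a.height - b.height))

def label_chain(segments):
    """Exact binary MRF labeling on a chain of segments via dynamic programming."""
    n = len(segments)
    cost = [[0.0, 0.0] for _ in range(n)]   # minimal energy ending at i with label l
    back = [[0, 0] for _ in range(n)]       # argmin predecessor label
    cost[0] = [unary(segments[0], 0), unary(segments[0], 1)]
    for i in range(1, n):
        for l in (0, 1):
            cands = [cost[i - 1][p] + pairwise(segments[i - 1], segments[i], p, l)
                     for p in (0, 1)]
            back[i][l] = cands.index(min(cands))
            cost[i][l] = unary(segments[i], l) + min(cands)
    # Backtrack from the cheaper terminal label.
    labels = [0] * n
    labels[-1] = 0 if cost[-1][0] <= cost[-1][1] else 1
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i][labels[i]]
    return labels
```

On a chain, this dynamic program finds the same global optimum a graph cut would; the paper's undirected graph over the full cloud requires a true min-cut/max-flow solver.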
|
Published: 28 August 2015
|
|
Real-time Markov random field based road segmentation of 3D Lidar data (Chinese abstract)
To extract the ground from 3D Lidar data accurately and in real time across multiple scene types, a road segmentation algorithm based on a Markov random field is proposed. The algorithm filters the 3D point cloud and corrects its pose, segments the projection of each Lidar scan line on the x-y plane with the maximal blurred line segment method, and precisely locates each segment's endpoints with corner detection. Using the structural information of the raw Lidar data, an undirected-graph Markov random field with line segments as nodes is built; by analyzing features such as segment length and the distance, gradient, and vertical height difference between adjacent segments, an energy function is constructed and its optimal solution is found by graph cut, labeling the segments into two classes: ground regions and obstacle regions. Experiments on flat urban roads and undulating rural roads show that, compared with existing algorithms, the proposed algorithm extracts the ground more accurately and is more stable on bumpy rural roads.
|
|
|