Journal of ZheJiang University (Engineering Science)  2021, Vol. 55 Issue (2): 402-409    DOI: 10.3785/j.issn.1008-973X.2021.02.021
    
Fast visual SLAM method based on point and line features
Xin MA, Xin-wu LIANG*, Ji-yuan CAI
School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China

Abstract  

A fast simultaneous localization and mapping (SLAM) algorithm based on point and line features was proposed in order to improve the localization accuracy and robustness of RGB-D SLAM systems in low-textured scenes. During the tracking of non-keyframes, point feature matching was performed based on descriptors, and line feature matching was performed based on geometric constraints. When a new keyframe was inserted, the descriptors of the line features were calculated to complete the line feature matching between keyframes, and a line feature triangulation algorithm was used to generate map lines. The real-time performance of the SLAM system was improved by reducing the amount of computation in the line feature matching process. In addition, virtual right-eye line segments were constructed using the depth measurement information of line features, and a new method for calculating the reprojection errors of line features was proposed. Experimental results on public datasets showed that, compared with mainstream methods such as ORB-SLAM2, the proposed algorithm improved the localization accuracy of the RGB-D SLAM system in low-textured scenes. The time efficiency of the proposed algorithm was improved by about 20% compared with a traditional SLAM method combining point and line features.
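The virtual right-eye construction mentioned in the abstract can be illustrated with a minimal sketch. Assuming a pinhole model with focal length fx and a virtual stereo baseline b (the same convention ORB-SLAM2 uses for RGB-D input), each line endpoint with a depth measurement maps to a virtual right-image x-coordinate, and the line reprojection residual is the point-to-line distance of the projected endpoints against the observed 2D line. All function names here are illustrative, not from the paper:

```python
import numpy as np

def virtual_right_endpoint(u, depth, fx, baseline):
    """Virtual right-eye x-coordinate of a left-image pixel with known depth:
    u_r = u - fx * b / d (the row coordinate v is unchanged)."""
    return u - fx * baseline / depth

def line_coefficients(p1, p2):
    """Implicit 2D line l = (a, b, c) through two pixels, normalized so that
    |(a, b)| = 1, which makes l . (u, v, 1) a signed pixel distance."""
    l = np.cross(np.append(p1, 1.0), np.append(p2, 1.0))
    return l / np.linalg.norm(l[:2])

def point_line_residual(l, p):
    """Signed distance of pixel p = (u, v) to the normalized line l."""
    return l @ np.append(p, 1.0)
```

A reprojection error for a map line would then stack the residuals of both projected endpoints against the observed left-image line, plus the residuals of the two virtual right-eye endpoints against the virtual right-eye line.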



Key words: visual simultaneous localization and mapping (SLAM); point and line features; geometric constraints; time efficiency; RGB-D
Received: 12 March 2020      Published: 09 March 2021
CLC:  TP 242  
Fund: National Natural Science Foundation of China (61673272)
Corresponding Authors: Xin-wu LIANG     E-mail: maxin1900@sjtu.edu.cn;xinwuliang@sjtu.edu.cn
Cite this article:

Xin MA,Xin-wu LIANG,Ji-yuan CAI. Fast visual SLAM method based on point and line features. Journal of ZheJiang University (Engineering Science), 2021, 55(2): 402-409.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2021.02.021     OR     http://www.zjujournals.com/eng/Y2021/V55/I2/402


Fig.1 Algorithm framework of proposed SLAM system
Fig.2 Reprojection error of line feature
Fig.3 Line feature reprojection error combining pixel and depth measurement information
Fig.4 Triangulation algorithm of line feature
Fig.5 Line feature matching based on LBD descriptor
Fig.6 Line feature matching based on geometric constraints and mismatch culling
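The geometric-constraint line matching of Fig.6 can be sketched as follows. This is a hypothetical minimal version that matches segments between consecutive frames by undirected angle difference and midpoint distance with illustrative thresholds; the paper's actual constraints and its mismatch-culling step are not reproduced here:

```python
import numpy as np

def match_lines_geometric(prev_lines, curr_lines,
                          max_angle_deg=10.0, max_midpoint_dist=30.0):
    """Greedy matching of 2D line segments between consecutive frames using
    geometric constraints only (no descriptors): candidate pairs must be
    nearly parallel and have close midpoints. Each segment is a pair of
    endpoints ((x1, y1), (x2, y2)); returns (prev_index, curr_index) pairs."""
    matches, used = [], set()
    for i, (p1, p2) in enumerate(prev_lines):
        a1 = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
        m1 = (np.asarray(p1) + np.asarray(p2)) / 2.0
        best, best_d = None, max_midpoint_dist
        for j, (q1, q2) in enumerate(curr_lines):
            if j in used:
                continue
            a2 = np.arctan2(q2[1] - q1[1], q2[0] - q1[0])
            d = abs(a1 - a2) % np.pi
            dang = min(d, np.pi - d)  # undirected angle difference
            if np.degrees(dang) > max_angle_deg:
                continue
            dist = np.linalg.norm(m1 - (np.asarray(q1) + np.asarray(q2)) / 2.0)
            if dist < best_d:
                best, best_d = j, dist
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```

Skipping descriptor computation for non-keyframes in this way is what yields the tracking-time savings reported in Fig.7 and Tab.1; descriptors are still computed when a keyframe is inserted.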
Fig.7 Comparison of tracking time between proposed method and PL-SLAM on sequence fr3/ntn
(unit: ms)
Thread         Operation                 PL-SLAM    Proposed
Tracking       Feature extraction        57.38      46.27
Tracking       Track local map           22.87      15.48
Local mapping  Keyframe insertion        32.59      24.48
Local mapping  Map feature culling       1.02       0.97
Local mapping  Map feature creation      33.01      17.50
Local mapping  Local map optimization    334.30     270.68
Local mapping  Keyframe culling          16.66      9.01
Tab.1 Tracking and local mapping times of proposed method and PL-SLAM on sequence fr1/xyz
(unit: m)
Sequence   Proposed   PL-SLAM   ORB-SLAM2   LPVO    DVO-SLAM
lr_kt0     0.006      0.010     0.008       0.015   0.108
lr_kt1     0.010      0.025     0.135       0.039   0.059
lr_kt2     0.018      0.023     0.029       0.034   0.375
lr_kt3     0.014      0.013     0.014       0.102   0.433
of_kt0     0.035      0.046     0.056       0.061   0.244
of_kt1     0.022      0.035     0.058       0.052   0.178
of_kt2     0.027      0.034     0.025       0.039   0.099
of_kt3     0.018      0.036     0.050       0.030   0.079
fr1/xyz    0.009      0.010     0.010       –       0.011
fr1/room   0.040      0.056     0.059       –       0.053
fr1/360    0.130      0.161     0.228       –       0.083
fr3/ntn    0.012      0.018     0.024       –       0.018
fr3/ntf    0.025      0.038     0.051       –       –
fr3/stf    0.009      0.014     0.013       0.174   0.048
Tab.2 Comparison of ATE among different methods on ICL-NUIM and TUM RGB-D datasets
Fig.8 Comparison of estimated trajectories among proposed method, PL-SLAM and ORB-SLAM2
[1]   KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces [C]// Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Washington DC: IEEE, 2007: 1-10.
[2]   MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
doi: 10.1109/TRO.2015.2463671
[3]   VAKHITOV A, FUNKE J, MORENO-NOGUER F. Accurate and linear time pose estimation from points and lines [C]// European Conference on Computer Vision. Amsterdam: Springer, Cham, 2016: 583-599.
[4]   ZUO X, XIE X, LIU Y, et al. Robust visual SLAM with point and line features [C]// 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vancouver: IEEE, 2017: 1775-1782.
[5]   PUMAROLA A, VAKHITOV A, AGUDO A, et al. PL-SLAM: real-time monocular visual SLAM with points and lines [C]// 2017 IEEE International Conference on Robotics and Automation. Singapore: IEEE, 2017: 4503-4508.
[6]   GOMEZ-OJEDA R, MORENO F A, ZUÑIGA-NOËL D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.
[7]   HE Y, ZHAO J, GUO Y, et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.
doi: 10.3390/s18041159
[8]   WANG R, DI K, WAN W, et al. Improved point-line feature based visual SLAM method for indoor scenes[J]. Sensors, 2018, 18(10): 3559.
doi: 10.3390/s18103559
[9]   GOMEZ-OJEDA R, GONZALEZ-JIMENEZ J. Geometric-based line segment tracking for HDR stereo sequences [C]// 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid: IEEE, 2018: 69-74.
[10]   KERL C, STURM J, CREMERS D. Dense visual SLAM for RGB-D cameras [C]// 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo: IEEE, 2013: 2100-2106.
[11]   MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
doi: 10.1109/TRO.2017.2705103
[12]   VON GIOI R G, JAKUBOWICZ J, MOREL J M, et al. LSD: a fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 32(4): 722-732.
[13]   ZHANG L, KOCH R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 794-805.
doi: 10.1016/j.jvcir.2013.05.006
[14]   GÁLVEZ-LÓPEZ D, TARDÓS J D. Bags of binary words for fast place recognition in image sequences[J]. IEEE Transactions on Robotics, 2012, 28(5): 1188-1197.
doi: 10.1109/TRO.2012.2197158
[15]   HARTLEY R, ZISSERMAN A. Multiple view geometry in computer vision [M]. Cambridge: Cambridge University Press, 2003.
[16]   STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems [C]// 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura: IEEE, 2012: 573-580.
[17]   HANDA A, WHELAN T, MCDONALD J, et al. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM [C]// 2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong: IEEE, 2014: 1524-1531.