Journal of Zhejiang University (Engineering Science), 2022, Vol. 56, Issue (6): 1062-1070    DOI: 10.3785/j.issn.1008-973X.2022.06.002
Intelligent Robotics
Dynamic RGB-D SLAM algorithm based on reprojection depth difference cumulative map and static probability
Kai LIN(),Xin-wu LIANG*(),Ji-yuan CAI
School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China
Abstract:

To improve the localization accuracy and robustness of simultaneous localization and mapping (SLAM) systems in dynamic scenes, a new RGB-D SLAM algorithm was proposed. Firstly, a cumulative model based on the reprojection depth difference was built to segment the image into dynamic and static regions. Secondly, to avoid over-segmentation of the dynamic regions, the feature points in the dynamic regions whose Euclidean distances from their matched map points were too large were eliminated, and the static probabilities of the remaining feature points were estimated according to the t-distribution. Finally, the feature points in the static regions and the suspected static points in the dynamic regions were added to the pose optimization with different weights to refine the pose. Experiments on public datasets showed that, in dynamic scenes, the localization accuracy of the proposed method was improved by 96.1% compared with RGB-D ORB-SLAM2 and by 31.2% compared with other dynamic SLAM algorithms. The localization accuracy and robustness of the visual SLAM system in dynamic environments were thus effectively improved.

Key words: dynamic environment    visual SLAM    RGB-D camera    reprojection depth difference cumulative map    static probability
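The abstract only outlines the reprojection depth difference cumulative model, so the following NumPy sketch illustrates the general idea rather than the paper's exact formulation: the previous depth map is warped into the current frame with the estimated relative pose, the per-pixel depth inconsistency is accumulated over frames, and the accumulated map is thresholded into a coarse dynamic-region mask. The helper names, the intrinsics K, the decay factor, and the threshold tau are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the general idea only; not the authors' exact formulation.

def warp_depth(depth_prev, T_cur_prev, K):
    """Warp the previous depth map into the current frame and return the
    predicted depth at each current-frame pixel (0 where nothing projects)."""
    H, W = depth_prev.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    z = depth_prev.ravel()
    valid = z > 0

    # Back-project every previous-frame pixel to a 3-D point (camera frame).
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)], axis=0)   # 3 x N
    pts_prev = np.linalg.inv(K) @ (pix * z)                            # 3 x N

    # Move the points into the current camera frame and project them.
    pts_cur = T_cur_prev[:3, :3] @ pts_prev + T_cur_prev[:3, 3:4]
    z_proj = pts_cur[2]
    z_safe = np.where(z_proj > 0, z_proj, 1.0)                         # avoid /0
    u_cur = np.round(pts_cur[0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
    v_cur = np.round(pts_cur[1] * K[1, 1] / z_safe + K[1, 2]).astype(int)

    inside = valid & (z_proj > 0) & (u_cur >= 0) & (u_cur < W) & (v_cur >= 0) & (v_cur < H)
    warped = np.zeros_like(depth_prev)
    warped[v_cur[inside], u_cur[inside]] = z_proj[inside]   # last write wins; a z-buffer would be cleaner
    return warped

def update_cumulative_map(cum_map, depth_cur, warped_depth, decay=0.9):
    """Accumulate per-pixel reprojection depth differences across frames."""
    diff = np.abs(depth_cur - warped_depth)
    observed = (depth_cur > 0) & (warped_depth > 0)
    cum_map = decay * cum_map            # let stale evidence fade out
    cum_map[observed] += diff[observed]
    return cum_map

def dynamic_mask(cum_map, tau=0.3):
    """Pixels whose accumulated depth inconsistency exceeds the threshold
    are treated as a coarse dynamic region; everything else as static."""
    return cum_map > tau
```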
Received: 2021-04-28    Published: 2022-06-30
CLC:  TP 242  
Funding: National Natural Science Foundation of China (62173230)
Corresponding author: Xin-wu LIANG    E-mail: link013@sjtu.edu.cn; xinwuliang@sjtu.edu.cn
About the author: Kai LIN (1995—), male, master's student, engaged in research on computer vision. orcid.org/0000-0003-2401-4354. E-mail: link013@sjtu.edu.cn

Cite this article:

Kai LIN, Xin-wu LIANG, Ji-yuan CAI. Dynamic RGB-D SLAM algorithm based on reprojection depth difference cumulative map and static probability [J]. Journal of Zhejiang University (Engineering Science), 2022, 56(6): 1062-1070.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2022.06.002        https://www.zjujournals.com/eng/CN/Y2022/V56/I6/1062

Fig. 1  Framework of the dynamic RGB-D SLAM algorithm based on the reprojection depth difference cumulative map and static probability
Fig. 2  Euclidean distance between matched point pairs
Fig. 3  Flowchart of the static probability estimation algorithm
Fig. 4  Dynamic region detection results
Fig. 5  Static probability estimation results for dynamic regions
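Fig. 3 and Fig. 5 concern the static probability estimation, which this page describes only at a high level (a t-distribution fitted to the residuals of suspected points, then pose optimization with different weights). The minimal sketch below shows one common t-distribution weighting, in the spirit of Kerl et al. [17]; the degrees of freedom nu, the iteration count, and the hypothetical projection helper project() are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def static_probabilities(residuals, nu=5.0, iters=10):
    """One common t-distribution weighting of residuals (cf. [17]): the scale
    sigma^2 is re-estimated by a fixed-point iteration and each point gets
    weight (nu + 1) / (nu + r^2 / sigma^2), which can be read as a static
    probability for suspected points in the dynamic region."""
    r = np.asarray(residuals, dtype=float)
    sigma2 = np.mean(r ** 2) + 1e-12
    for _ in range(iters):
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)
        sigma2 = np.mean(w * r ** 2) + 1e-12
    w = (nu + 1.0) / (nu + r ** 2 / sigma2)
    return w / w.max()          # normalise to (0, 1] so it reads as a probability

def weighted_reprojection_cost(obs_2d, pts_3d, weights, project, pose):
    """Pose cost where static-region points keep weight 1 and suspected static
    points contribute with their estimated static probability.
    `project` is a hypothetical pinhole projection helper (not from the paper)."""
    err = obs_2d - project(pts_3d, pose)          # N x 2 pixel residuals
    return np.sum(weights * np.sum(err ** 2, axis=1))
```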
Sequence             |      $R_{\rm{t}}$/(m·s$^{-1}$)        |      $R_{\rm{r}}$/((°)·s$^{-1}$)
                     | eDVO [14]  ORB-VO  eORB-VO  Ours      | eDVO [14]  ORB-VO  eORB-VO  Ours
static_board         |   0.292     0.034    0.022   0.024    |   4.839     0.018    0.009   0.010
static_construct     |   0.153     0.033    0.003   0.003    |   3.821     0.017    0.001   0.001
dynamic_board        |   0.111     X        0.027   0.028    |   1.939     X        0.031   0.031
dynamic_man1         |   0.157     0.039    0.025   0.023    |   4.108     0.053    0.053   0.053
fr3/sitting_xyz      |   0.073     0.008    0.010   0.009    |   1.860     0.008    0.008   0.008
fr3/walking_static   |   0.217     0.016    0.013   0.012    |   0.197     0.008    0.006   0.006
fr3/walking_xyz      |   0.259     0.028    0.020   0.015    |   4.069     0.016    0.012   0.010
Table 1  Comparison of relative pose error (RMSE) of different algorithms on the LARR and TUM RGB-D datasets
Fig. 6  Comparison of relative pose error curves of the proposed algorithm and ORB-VO
Sequence               |             ATE/m
                       | ORB-SLAM2   eORB-SLAM2   Ours
static_board           |   1.4426      0.0564     0.0774
static_construct       |   1.1147      0.0046     0.0047
dynamic_board          |   1.1510      0.0213     0.0216
dynamic_man1           |   0.2977      0.0351     0.0342
fr3/sitting_xyz        |   0.0092      0.0172     0.0105
fr3/walking_static     |   0.3931      0.0210     0.0127
fr3/walking_xyz        |   0.6768      0.4266     0.0544
fr3/walking_halfsphere |   0.6483      0.1194     0.0474
fr3/walking_rpy        |   0.7871      0.0509     0.0331
Average                |   0.7245      0.1117     0.0286
Table 2  Comparison of absolute trajectory error (RMSE) between the proposed algorithm and ORB-SLAM2 on the LARR and TUM RGB-D datasets
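Tables 2 and 3 report absolute trajectory error as an RMSE over the aligned trajectory. For reference, the sketch below follows the standard TUM RGB-D ATE computation (rigid alignment of the estimated trajectory to ground truth, then the RMSE of the translational differences); whether the paper uses exactly this evaluation procedure is an assumption.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error after rigidly aligning the
    estimated positions (N x 3) to the ground-truth positions (N x 3)."""
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    # Horn/Umeyama closed-form rotation without scale.
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    err = gt_xyz - (est_xyz @ R.T + t)          # residual translations
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
```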
Fig. 7  Comparison of trajectories estimated by the proposed algorithm and ORB-SLAM2
TUM fr3 sequence     | ATE
                     | Semi-direct method [3]   Detect-SLAM [6]   DS-SLAM [8]   Improved geometric and motion constraints [11]   DSLAM [12]   Static-weight [13]   Ours
sitting_static         0.0064   0.0446   0.0096   0.0069
sitting_xyz            0.0113   0.0201   0.0186   0.0091   0.0397   0.0105
sitting_halfsphere     0.0620   0.0231   0.0271   0.0235   0.0432   0.0170
sitting_rpy            0.0385   0.0225   0.0261
walking_static         0.0080   0.0081   0.0131   0.0108   0.0261   0.0127
walking_xyz            0.0371   0.0241   0.0247   0.0354   0.0874   0.0601   0.0544
walking_halfsphere     0.0409   0.0514   0.0303   0.0285   0.0354   0.0489   0.0474
walking_rpy            X        0.2959   0.4442   0.0966   0.1608   0.1791   0.0331
Average                X        0.0829   0.1027   0.0378   0.0449   0.0662   0.0260
Table 3  Comparison of absolute trajectory error (RMSE) of different algorithms on the TUM RGB-D dataset
Algorithm   | Operation                       | $t_{\rm{ave}}$/ms
            |                                 | sitting_static   walking_static
ORB-SLAM2   | Tracking                        |     26.98            39.76
Ours        | Tracking                        |    379.66           412.07
            | Dynamic/static segmentation     |    350.97           372.18
            | Static probability estimation   |      1.00             1.03
Table 4  Comparison of time efficiency between the proposed algorithm and ORB-SLAM2
1 FISCHLER M A, BOLLES R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395
doi: 10.1145/358669.358692
2 TRIGGS B, MCLAUCHLAN P F, HARTLEY R I, et al. Bundle adjustment: a modern synthesis [M]// TRIGGS B, ZISSERMAN A, SZELISKI R. Vision algorithms: theory and practice. [S. l.]: Springer, 1999: 298-372.
3 GAO Cheng-qiang, ZHANG Yun-zhou, WANG Xiao-zhe, et al. Semi-direct RGB-D SLAM algorithm for dynamic indoor environments [J]. Robot, 2019, 41(3): 372-383 (in Chinese)
4 SUN Y, LIU M, MENG M Q H. Improving RGB-D SLAM in dynamic environments: a motion removal approach [J]. Robotics and Autonomous Systems, 2017, 89: 110-122
doi: 10.1016/j.robot.2016.11.012
5 WEI Tong, LI Xu. Binocular vision SLAM algorithm based on dynamic region elimination in dynamic environment [J]. Robot, 2020, 42(3): 336-345 (in Chinese)
6 ZHONG F, WANG S, ZHANG Z, et al. Detect-SLAM: making object detection and SLAM mutually beneficial [C]// 2018 IEEE Winter Conference on Applications of Computer Vision. Lake Tahoe: IEEE, 2018: 1001-1010.
7 BESCOS B, FACIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes [J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083
doi: 10.1109/LRA.2018.2860039
8 YU C, LIU Z, LIU X, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments [C]// 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid: IEEE, 2018: 1168-1174.
9 YUAN X, CHEN S. SaD-SLAM: a visual SLAM based on semantic and depth information [C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Las Vegas: IEEE, 2020: 4930-4935.
10 ZHANG T, ZHANG H, NAKAMURA Y, et al. FlowFusion: dynamic dense RGB-D SLAM based on optical flow [C]// 2020 IEEE International Conference on Robotics and Automation. Paris: IEEE, 2020: 7322-7328.
11 AI Qing-lin, LIU Gang-jiang, XU Qiao-ning. An RGB-D SLAM algorithm for robot based on the improved geometric and motion constraints in dynamic environment [J]. Robot, 2021, 43(2): 167-176 (in Chinese)
12 DAI W, ZHANG Y, LI P, et al. RGB-D SLAM in dynamic environments using point correlations [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 373-389
13 LI S, LEE D. RGB-D SLAM in dynamic environments using static point weighting [J]. IEEE Robotics and Automation Letters, 2017, 2(4): 2263-2270
doi: 10.1109/LRA.2017.2724759
14 KIM H, KIM P, KIM H J. Moving object detection for visual odometry in a dynamic environment based on occlusion accumulation [C]// 2020 IEEE International Conference on Robotics and Automation. Paris: IEEE, 2020: 8658-8664.
15 MUR-ARTAL R, TARDOS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras [J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262
doi: 10.1109/TRO.2017.2705103
16 HU Ze-zhou. Research on depth error analysis and correction method of Kinect depth sensor [J]. Bulletin of Surveying and Mapping, 2019(Suppl.2): 239-241 (in Chinese)
17 KERL C, STURM J, CREMERS D. Dense visual SLAM for RGB-D cameras [C]// 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo: IEEE, 2013: 2100-2106.
18 KUMMERLE R, GRISETTI G, STRASDAT H, et al. g2o: a general framework for graph optimization [C]// 2011 IEEE International Conference on Robotics and Automation. Shanghai: IEEE, 2011: 3607-3613.