Journal of Zhejiang University (Engineering Science)  2019, Vol. 53 Issue (9): 1749-1758    DOI: 10.3785/j.issn.1008-973X.2019.09.014
Computer Science and Artificial Intelligence
Inter-frame point cloud registration algorithm for pose optimization of a depth camera
Xing-dong LI 1,3, He-wei GAO 1, Long SUN 2,3,*
1. College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
2. College of Forestry, Northeast Forestry University, Harbin 150040, China
3. Northern Forest Fire Management Key Laboratory of the State Forestry and Grassland Bureau, Northeast Forestry University, Harbin 150040, China
Abstract:

A TOF (time-of-flight) camera captures grayscale and depth images simultaneously, and both can be used to refine the estimate of the camera pose. A graph-based adjustment framework was applied to optimize the poses of the TOF camera over multiple acquired frames, and the inter-frame registration determines both the accuracy and the efficiency of this optimization. Scale-invariant feature points were detected in a pair of grayscale images and matched; after the 2D feature points were extended into 3D space, the two point clouds were registered according to the spatial relations between the feature points and the ordinary 3D points. The proposed registration algorithm was then applied pair by pair to any two of the point-cloud frames participating in the pose optimization. Finally, the validly registered frame pairs were fed as input to the graph-based algorithm to adjust the camera poses. Experimental results show that the proposed inter-frame registration algorithm significantly improves the accuracy of the pose estimates while preserving the estimation efficiency.

Key words: 3D vision    time-of-flight (TOF) camera    pose    inter-frame registration    graph-based structure
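
To make the pipeline described in the abstract concrete, the sketch below shows one way the inter-frame front end could be implemented: SIFT matching on the two grayscale images with OpenCV, lifting the matched 2D features into 3D with the depth map, and a closed-form rigid transform. This is a minimal illustration only: the intrinsics FX, FY, CX, CY and all function names are hypothetical, a Kabsch/SVD solution stands in for whatever closed form the paper uses, and the paper's additional use of the spatial relations between features and ordinary 3D points is not reproduced here.

# Minimal sketch of the inter-frame front end (assumed OpenCV + NumPy; names are illustrative).
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # hypothetical TOF intrinsics, not the paper's

def match_sift(gray_i, gray_j, ratio=0.75):
    # detect and match scale-invariant features between the two grayscale frames
    sift = cv2.SIFT_create()
    kp_i, des_i = sift.detectAndCompute(gray_i, None)
    kp_j, des_j = sift.detectAndCompute(gray_j, None)
    pairs = cv2.BFMatcher().knnMatch(des_i, des_j, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return [(kp_i[m.queryIdx].pt, kp_j[m.trainIdx].pt) for m in good]

def lift_to_3d(pt, depth):
    # back-project a pixel into the camera frame using the depth map (metres)
    u, v = int(round(pt[0])), int(round(pt[1]))
    z = float(depth[v, u])
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def rigid_transform(P, Q):
    # closed-form R, t that maps point set P onto Q in least squares (Kabsch/SVD)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ D @ U.T
    return Rm, cq - Rm @ cp

def register_frames(gray_i, depth_i, gray_j, depth_j):
    # estimate the rigid transform between two frames from matched 3D feature points
    matches = match_sift(gray_i, gray_j)
    P = np.array([lift_to_3d(a, depth_i) for a, _ in matches])
    Q = np.array([lift_to_3d(b, depth_j) for _, b in matches])
    keep = (P[:, 2] > 0) & (Q[:, 2] > 0)  # discard features with invalid depth readings
    return rigid_transform(P[keep], Q[keep])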
Received: 2018-08-09    Published: 2019-09-12
CLC:  TP 242
Corresponding author: Long SUN     E-mail: lixd@nefu.edu.cn; 13945016458@126.com
About the author: Xing-dong LI (1983—), male, lecturer. orcid.org/0000-0002-0057-9804. E-mail: lixd@nefu.edu.cn

Cite this article:

Xing-dong LI, He-wei GAO, Long SUN. Inter-frame point cloud registration algorithm for pose optimization of a depth camera. Journal of Zhejiang University (Engineering Science), 2019, 53(9): 1749-1758.

Link to this article:

http://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2019.09.014        http://www.zjujournals.com/eng/CN/Y2019/V53/I9/1749

Fig. 1  2D image features and the corresponding 3D point-cloud feature pairs
Fig. 2  Pairs of 3D point sets searched with the 3D feature points as sphere centers
k    $r_k$ /m    $|P_{a_k}^i|$    $|P_{a_k}^j|$
1    0.56        2 439            2 551
2    2.36        1 215            1 185
3    0.47        1 284            1 263
4    0.89        2 626            2 465
Table 1  Search radii and the 3D point subsets extracted from the two point-cloud frames
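
Table 1 lists, for each matched 3D feature pair $a_k$ and search radius $r_k$, the sizes of the point subsets found inside the corresponding spheres (Fig. 2). The fragment below sketches such a sphere search with scipy's cKDTree; the function and variable names are illustrative and not taken from the paper.

# Sketch of the sphere search behind Table 1 (assumed SciPy; names are illustrative).
import numpy as np
from scipy.spatial import cKDTree

def sphere_subsets(cloud_i, cloud_j, feats_i, feats_j, radii):
    # cloud_*: (N, 3) point clouds; feats_*: (K, 3) matched 3D features; radii: (K,) in metres
    tree_i, tree_j = cKDTree(cloud_i), cKDTree(cloud_j)
    subsets = []
    for a_i, a_j, r in zip(feats_i, feats_j, radii):
        idx_i = tree_i.query_ball_point(a_i, r)   # points of cloud i inside the sphere around a_i
        idx_j = tree_j.query_ball_point(a_j, r)   # points of cloud j inside the sphere around a_j
        subsets.append((cloud_i[idx_i], cloud_j[idx_j]))
        print(f"r = {r:.2f} m: |P_i| = {len(idx_i)}, |P_j| = {len(idx_j)}")
    return subsets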
Fig. 3  Four-quadrant partition of the pixel sets corresponding to the 3D point subsets
Fig. 4  Flow chart of depth-camera pose optimization
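
Fig. 4 summarizes the graph-based pose adjustment. The sketch below shows one possible formulation of such an adjustment with scipy: each camera pose is parameterized as a rotation vector plus a translation, and every validly registered frame pair contributes a relative-pose residual. This is an illustrative least-squares setup under those assumptions, not the paper's exact graph algorithm, and all names are hypothetical.

# Minimal pose-graph adjustment sketch (assumed SciPy; illustrative formulation).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def unpack_poses(x, n):
    # each pose is stored as [rotation vector (3), translation (3)]
    Rs = [R.from_rotvec(x[6 * k:6 * k + 3]).as_matrix() for k in range(n)]
    ts = [x[6 * k + 3:6 * k + 6] for k in range(n)]
    return Rs, ts

def residuals(x, n, edges):
    # edges: tuples (i, j, R_ij, t_ij) of relative transforms from pairwise inter-frame registration
    Rs, ts = unpack_poses(x, n)
    res = []
    for i, j, R_ij, t_ij in edges:
        R_pred = Rs[i].T @ Rs[j]                                  # predicted relative rotation
        t_pred = Rs[i].T @ (ts[j] - ts[i])                        # predicted relative translation
        res.extend(R.from_matrix(R_ij.T @ R_pred).as_rotvec())   # rotation error as a rotation vector
        res.extend(t_pred - t_ij)                                 # translation error
    res.extend(x[:6])                                             # anchor the first pose to fix the gauge
    return np.asarray(res)

def optimize_poses(n, edges, x0=None):
    # refine all n poses jointly from the pairwise registration constraints
    x0 = np.zeros(6 * n) if x0 is None else x0
    sol = least_squares(residuals, x0, args=(n, edges))
    return unpack_poses(sol.x, n)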
Fig. 5  Experimental environment for raw data acquisition with the depth camera
Fig. 6  Comparison of average rotation errors over repeated runs of the optimization algorithms
Fig. 7  Comparison of average translation errors over repeated runs of the optimization algorithms
Fig. 8  Average time of a single run of the graph-based optimization algorithm
Fig. 9  Raw grayscale images of the multiple frames used in the data-stitching experiment
Fig. 10  Comparison of 3D point-cloud stitching of the six frames after pose optimization with different algorithms