Journal of Zhejiang University (Engineering Science)  2020, Vol. 54 Issue (7): 1369-1379    DOI: 10.3785/j.issn.1008-973X.2020.07.016
Transportation Engineering, Hydraulic Engineering, Civil Engineering
Double-layer fusion of lidar and roadside camera for cooperative localization
Wen-jin HUANG 1,2,3, Miao-hua HUANG 1,2,3,*
1. Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan 430070, China
2. Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan University of Technology, Wuhan 430070, China
3. Hubei Research Center for New Energy and Intelligent Connected Vehicle, Wuhan University of Technology, Wuhan 430070, China
Abstract:

A double-layer fusion algorithm for cooperative localization, combining the on-board lidar with roadside binocular cameras, was adopted to achieve high-precision localization, aiming at the problem of large localization errors of unmanned vehicles in unstructured scenes. The lower layer consists of two parallel pose estimations. In the adaptive Monte Carlo localization (AMCL) based on dual maps, switching between the maps is driven by short-term and long-term estimates of the pose error, which corrects the cumulative error of lidar scan matching. A Kalman filter based on probabilistic data association eliminates the interference of non-detected targets on the roadside cameras and achieves target tracking. The upper layer acts as a global fusion estimation that fuses the two lower-layer pose estimates, and feedback of the fused result is used for self-regulation. Real-vehicle experiments showed that the localization accuracy of the double-layer fusion cooperative localization was 0.199 m and the heading-angle accuracy was 2.179°, a substantial improvement over localization by the on-board lidar alone and over tight fusion without feedback. The localization accuracy reaches 7.8 cm as the number of roadside cameras increases.

Key words: vehicle-road cooperation; lidar; roadside camera; pose estimation; double-layer fusion; cooperative localization
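The upper-layer global fusion described in the abstract can be illustrated as a covariance-weighted (information-form) combination of the two lower-layer pose estimates. This is a minimal sketch under stated assumptions, not the paper's exact filter; all function and variable names are illustrative:

```python
import numpy as np

def fuse_poses(x1, P1, x2, P2):
    """Information-form fusion of two pose estimates (x, y, yaw).

    x1, x2 are 3-vectors (pose); P1, P2 are their 3x3 covariances.
    The estimate with the smaller covariance receives the larger weight.
    """
    I1 = np.linalg.inv(P1)       # information matrix of the AMCL estimate
    I2 = np.linalg.inv(P2)       # information matrix of the camera-KF estimate
    P = np.linalg.inv(I1 + I2)   # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)  # fused pose
    return x, P

# Illustrative values: an AMCL pose and a roadside-camera pose with equal
# uncertainty fuse to their average.
x_amcl = np.array([10.0, 5.0, 0.10])
x_cam = np.array([10.4, 5.2, 0.14])
x_fused, P_fused = fuse_poses(x_amcl, np.eye(3) * 0.04, x_cam, np.eye(3) * 0.04)
```

In the paper's scheme the fused result is additionally fed back to the lower-layer estimators for self-regulation; that feedback loop is omitted from this sketch.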
Received: 2020-02-16    Published: 2020-07-05
CLC:  U 495  
Supported by: the Key Program for Intergovernmental International Science and Technology Innovation Cooperation (SQ2018YFGH000405); the Fundamental Research Funds for the Central Universities (205207002)
Corresponding author: HUANG Miao-hua     E-mail: 15172513782@163.com; 1669894112@qq.com
About the author: HUANG Wen-jin (1995—), male, master's student, engaged in research on intelligent vehicle perception and localization. orcid.org/0000-0001-9119-2412. E-mail: 15172513782@163.com

Cite this article:


Wen-jin HUANG, Miao-hua HUANG. Double-layer fusion of lidar and roadside camera for cooperative localization. Journal of Zhejiang University (Engineering Science), 2020, 54(7): 1369-1379.

Link to this article:

http://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2020.07.016        http://www.zjujournals.com/eng/CN/Y2020/V54/I7/1369

Fig.1  Block diagram of the cooperative localization system
Fig.2  Coordinate transformation relations
Fig.3  Dual-map-based AMCL pose estimation
Fig.4  Unmanned driving platform 308S
Fig.5  Localization scene on the Wuhan University of Technology campus
Fig.6  Trajectory comparison of four algorithms
Localization algorithm    $\Delta d$/m
AMCL                      1.402
EKF                       1.146
Double-layer fusion       0.755
Table 1  Cumulative longitudinal error of three algorithms at the end point
Fig.7  Localization errors of three algorithms
Localization algorithm    ${\mu _{\rm{L}}}$/m    ${\sigma _{\rm{L}}}$/m    $\Delta {L_{\max }}$/m    ${\mu _{\rm{\theta}} }$/(°)    ${\sigma _{\rm{\theta}} }$/(°)    $\Delta {\theta _{\max }}$/(°)
AMCL                      0.956    1.567    6.273    4.877    8.149    38.809
EKF                       0.325    0.435    1.483    3.260    5.508    38.478
Double-layer fusion       0.199    0.276    1.272    2.179    3.085    14.759
Table 2  Statistical indicators of localization errors for three algorithms
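The indicators reported in Tables 2 and 4 (mean μ, standard deviation σ, and maximum Δmax of the error series) can be reproduced from per-frame errors with a short routine. This is a generic sketch, not code from the paper; the function name and sample values are assumptions:

```python
import numpy as np

def error_stats(errors):
    """Mean, standard deviation, and maximum of an absolute error series."""
    e = np.abs(np.asarray(errors, dtype=float))
    return e.mean(), e.std(), e.max()

# Illustrative per-frame lateral errors in metres.
mu, sigma, delta_max = error_stats([0.1, -0.3, 0.2, 0.6])
```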
Fig.8  Localization error distributions of three algorithms
Type     Number    Position
1cam     1
2cams    2
3cams    3
Table 3  Number and positions of cameras
Fig.9  Trajectory comparison of three configurations
Fig.10  Localization errors of three configurations
Fig.11  Standard deviation of lateral localization error in each region for three configurations
Type     ${\mu _{\rm{L}}}$/m    ${\sigma _{\rm{L}}}$/m    $\Delta {L_{\max }}$/m    ${\mu _{\rm{\theta}} }$/(°)    ${\sigma _{\rm{\theta}} }$/(°)    $\Delta {\theta _{\max }}$/(°)
1cam     0.199    0.276    1.272    2.179    3.120    14.759
2cams    0.166    0.272    1.245    2.113    3.085    13.291
3cams    0.078    0.143    0.717    1.848    2.821    11.778
Table 4  Statistical indicators of localization errors for three configurations
Fig.12  Proportion of localization errors for three configurations