Journal of ZheJiang University (Engineering Science)  2020, Vol. 54 Issue (7): 1369-1379    DOI: 10.3785/j.issn.1008-973X.2020.07.016
Double-layer fusion of lidar and roadside camera for cooperative localization
Wen-jin HUANG1,2,3, Miao-hua HUANG1,2,3,*
1. Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan 430070, China
2. Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan University of Technology, Wuhan 430070, China
3. Hubei Research Center for New Energy and Intelligent Connected Vehicle, Wuhan University of Technology, Wuhan 430070, China


A double-layer fusion algorithm for cooperative localization, combining a vehicle-mounted lidar with roadside binocular cameras, was adopted to achieve high-precision localization, addressing the large localization error of unmanned vehicles in unstructured scenes. The lower layer comprised two parallel pose estimations. Based on adaptive Monte Carlo localization with a dual map, map switching was achieved through short-term and long-term estimation of the pose error, correcting the cumulative error of lidar scan matching. A Kalman filter based on probabilistic data association eliminated the interference of undetected targets on the roadside cameras and achieved target tracking. The upper layer fused the two lower-layer pose estimations into a global fusion estimate, and the result was fed back to achieve self-regulation. Vehicle experiments showed that the localization accuracy of the double-layer fusion cooperative localization was 0.199 m and the yaw-angle accuracy was 2.179°, a substantial improvement over localization by the vehicle-mounted lidar alone or tight fusion without feedback. The localization accuracy reached 7.8 cm as the number of roadside cameras increased.
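The upper-layer step described above, fusing two lower-layer pose estimates into a global estimate and feeding the result back to the local filters, can be sketched as follows. This is a minimal illustration in information form with a federated-filter-style feedback factor; the function names, the three-state pose vector (x, y, yaw), and the scalar sharing factor `beta` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fuse(x_lidar, P_lidar, x_cam, P_cam):
    """Information-weighted fusion of two independent pose estimates.

    Each estimate is a pose vector (x, y, yaw) with covariance; the fused
    covariance is (P1^-1 + P2^-1)^-1 and the fused state weights each
    input by its information matrix.
    """
    I_l = np.linalg.inv(P_lidar)   # information matrix of lidar estimate
    I_c = np.linalg.inv(P_cam)     # information matrix of camera estimate
    P = np.linalg.inv(I_l + I_c)   # fused covariance
    x = P @ (I_l @ x_lidar + I_c @ x_cam)  # information-weighted mean
    return x, P

def feedback(x, P, beta=0.5):
    """Feed the global estimate back to one local filter.

    The local filter is re-initialized at the fused state, with its
    covariance inflated by the information-sharing factor beta
    (hypothetical value), in the style of a federated Kalman filter.
    """
    return x.copy(), P / beta
```

With equal covariances the fused pose is simply the average of the two inputs, and the fused covariance is halved; the feedback step then restores each local filter's covariance according to its share of the global information.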

Key words: vehicle-road cooperation; lidar; roadside camera; pose estimation; double-layer fusion; cooperative localization
Received: 16 February 2020      Published: 05 July 2020
CLC:  U 495  
Corresponding Author: Miao-hua HUANG
Cite this article:

Wen-jin HUANG,Miao-hua HUANG. Double-layer fusion of lidar and roadside camera for cooperative localization. Journal of ZheJiang University (Engineering Science), 2020, 54(7): 1369-1379.




Fig.1 Cooperative localization system block diagram
Fig.2 Relationship of coordinate transformation
Fig.3 AMCL pose estimation based double-map
Fig.4 Driverless vehicle named 308S
Fig.5 Localization scene in campus of WHUT
Fig.6 Trajectory comparison for four algorithms
Localization algorithm $\Delta d$/m
AMCL 1.402
EKF 1.146
Double-layer fusion 0.755
Tab.1 Cumulative longitudinal error at end point for three algorithms
Fig.7 Localization error for three algorithms
定位算法 ${\mu _{\rm{L}}}$/m ${\sigma _{\rm{L}}}$/m $\Delta {L_{\max }}$/m ${\mu _{\rm{\theta}} }$/(°) ${\sigma _{\rm{\theta}} }$/(°) $\Delta {\theta _{\max }}$/(°)
AMCL 0.956 1.567 6.273 4.877 8.149 38.809
EKF 0.325 0.435 1.483 3.260 5.508 38.478
Double-layer fusion 0.199 0.276 1.272 2.179 3.085 14.759
Tab.2 Statistical indicators of localization error for three algorithms
Fig.8 Distribution of localization error for three algorithms
Type Number Location
1cam 1
2cams 2
3cams 3
Tab.3 Number and location of cameras
Fig.9 Trajectory comparison for three camera configurations
Fig.10 Localization error for three camera configurations
Fig.11 Standard deviation of lateral localization error in different regions for three camera configurations
Type ${\mu _{\rm{L}}}$/m ${\sigma _{\rm{L}}}$/m $\Delta {L_{\max }}$/m ${\mu _{\rm{\theta}} }$/(°) ${\sigma _{\rm{\theta}} }$/(°) $\Delta {\theta _{\max }}$/(°)
1cam 0.199 0.276 1.272 2.179 3.120 14.759
2cams 0.166 0.272 1.245 2.113 3.085 13.291
3cams 0.078 0.143 0.717 1.848 2.821 11.778
Tab.4 Statistical indicators of localization error for three camera configurations
Fig.12 Proportion of localization error for three camera configurations