Fast method for establishing the mapping relationship between 3D scanner point clouds and panoramic images
Xu ZHANG 1,2, Qingzhou MAO 1,2,*, Chunlin SHI 3, Yixuan SHI 1
1. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China; 2. Hubei Luojia Laboratory, Wuhan 430079, China; 3. Troops 61206, Beijing 100042, China
Abstract A method was proposed to directly establish the mapping relationship between point clouds and panoramic images, addressing the complex extrinsic calibration procedure required when colored point clouds are obtained with a terrestrial 3D scanner. First, an improved Zernike-moment sub-pixel edge extraction algorithm based on the one-dimensional maximum entropy was proposed to locate the target spheres in the panoramic image, and the target spheres were extracted from the point cloud according to their 3D geometric characteristics. Then, the extraction results were used as registration primitives, primitive triangles were constructed in the spatial spherical coordinate system, primitive pairing was completed by the minimum angular distance difference method, and the initial mapping relationship between the point cloud and the panoramic image was established. Finally, to correct the mapping deviation caused by local image distortion, a hybrid algorithm combining an improved Levenberg-Marquardt method with free-form deformation was proposed to optimize the mapping relationship pixel by pixel. The feasibility of the proposed method was verified with experimental data from multiple scenes. The results show that the extraction rate of targets from both the point cloud and the image was high, and every target recognized in both the point cloud and the image was successfully paired by the minimum angular distance difference method. Compared with the traditional Zernike moment, the improved Zernike moment reduced the initial mapping error of the extracted targets by 61.1%. After optimization by the hybrid algorithm, the mapping error between the point cloud and the panoramic image was about 1 pixel, and the mapping result was stable and unaffected by the scanning station position and the point cloud density.
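The workflow in the abstract rests on a few concrete computations. The block below is a rough Python sketch of two of them, not the authors' implementation: Kapur's one-dimensional maximum-entropy threshold (the basis of the improved Zernike-moment edge extraction) and the projection of scanner-centered 3D points to pixel coordinates of an equirectangular panorama via spherical coordinates. The panorama model, image size, and axis conventions are assumptions made only for illustration.

```python
import numpy as np


def max_entropy_threshold(gray):
    """Kapur's one-dimensional maximum-entropy threshold for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        # Entropies of the background and foreground probability distributions.
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t


def points_to_panorama(xyz, width, height):
    """Map scanner-centered 3D points to pixel coordinates of an assumed
    equirectangular panorama (azimuth -> column, elevation -> row).
    Assumes no point coincides with the scanner origin."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    azimuth = np.arctan2(y, x)                          # [-pi, pi]
    elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))    # [-pi/2, pi/2]
    u = (0.5 + azimuth / (2 * np.pi)) * width
    v = (0.5 - elevation / np.pi) * height
    return np.column_stack([u, v])
```

In this sketch the spherical mapping plays the role of the initial point-to-pixel correspondence that the paired target spheres calibrate; the per-pixel refinement by the improved Levenberg-Marquardt and free-form deformation hybrid described in the abstract is not shown.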
Received: 11 November 2023
Published: 01 July 2024
Fund: National Key Research and Development Program of China (2023YFC3009400, 2023YFB2603702).
Corresponding Author:
Qingzhou MAO
E-mail: zhangxuwhu97@whu.edu.cn; qzhmao@whu.edu.cn
Keywords: 3D scanner, panoramic image, one-dimensional maximum entropy, mapping relationship, registration error