Journal of Zhejiang University (Engineering Science)  2025, Vol. 59, Issue (6): 1179-1190    DOI: 10.3785/j.issn.1008-973X.2025.06.009
Computer Technology
Pair-wise point cloud registration method based on normal distribution similarity
Zhaolong LI, Shanmin PANG*, Chaoyu WANG, Yifeng WANG, Pengcheng SHI
School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
Abstract:

A pair-wise point cloud registration method based on normal distribution similarity was proposed to address the low efficiency and low accuracy of existing “point-to-point” pair-wise point cloud registration algorithms. The traditional “point-to-point” registration problem was transformed into a “distribution-to-distribution” registration problem: the K-means clustering algorithm was used to generate several normal-distribution clusters that fit the original point cloud data, and these normal distributions were then registered, which reduced the computational overhead and improved the registration efficiency. The Kullback-Leibler divergence was introduced to evaluate the similarity of nearest-neighbor matched normal distributions, which weakened the negative effect of non-overlapping regions on registration and improved the registration accuracy. The final registration result was obtained with a Lie algebra solver. To verify the effectiveness of the proposed method, eight other pair-wise point cloud registration methods, including several “point-to-point” registration methods, were selected for comparison. The experimental results showed that the proposed algorithm effectively improved the stability and accuracy of registration while keeping the computational cost low. Experiments on two real-scene datasets further demonstrated that the proposed algorithm has good application potential for registration tasks in real environments.

Key words: pair-wise registration    partial overlapping registration    normal distribution transformation    K-means clustering algorithm    Kullback-Leibler divergence    Lie algebra solver
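For reference, the closed-form Kullback-Leibler divergence between two $D$-dimensional normal distributions $\mathcal{N}_0(\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0)$ and $\mathcal{N}_1(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1)$, which underlies the similarity evaluation described in the abstract, is the standard expression below; how the divergence is turned into matching weights is specific to the paper and is not reproduced here.

$$ D_{\mathrm{KL}}\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)=\frac{1}{2}\left[\operatorname{tr}\left(\boldsymbol{\Sigma}_1^{-1}\boldsymbol{\Sigma}_0\right)+\left(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0\right)^{\mathrm{T}}\boldsymbol{\Sigma}_1^{-1}\left(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0\right)-D+\ln\frac{\det\boldsymbol{\Sigma}_1}{\det\boldsymbol{\Sigma}_0}\right] $$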
Received: 2024-07-09    Published: 2025-05-30
CLC:  TP 391
Foundation item: National Natural Science Foundation of China (61972312).
Corresponding author: Shanmin PANG     E-mail: li159515@stu.xjtu.edu.cn; pangsm@xjtu.edu.cn
About the first author: Zhaolong LI (1999—), male, master's student, engaged in computer vision research. orcid.org/0009-0009-1890-6875. E-mail: li159515@stu.xjtu.edu.cn

Cite this article:

Zhaolong LI, Shanmin PANG, Chaoyu WANG, Yifeng WANG, Pengcheng SHI. Pair-wise point cloud registration method based on normal distribution similarity. Journal of Zhejiang University (Engineering Science), 2025, 59(6): 1179-1190.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.06.009        https://www.zjujournals.com/eng/CN/Y2025/V59/I6/1179

Fig. 1  Method for solving normal distribution parameters
Fig. 2  Schematic of the pair-wise point cloud registration method based on normal distribution similarity
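To make the clustering step sketched in Fig. 1 and Fig. 2 concrete, the following is a minimal sketch of fitting normal-distribution clusters to a point cloud with K-means. It assumes scikit-learn's KMeans and a small covariance regularization term eps; both are illustrative choices rather than the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


def fit_normal_clusters(points, k, eps=1e-6):
    """Fit k normal distributions (mean, covariance) to a 3D point cloud."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    clusters = []
    for c in range(k):
        pts = points[labels == c]
        mu = pts.mean(axis=0)
        # rowvar=False: each row is one point; eps*I keeps the covariance invertible
        sigma = np.cov(pts, rowvar=False) + eps * np.eye(points.shape[1])
        clusters.append((mu, sigma))
    return clusters


if __name__ == "__main__":
    cloud = np.random.rand(5000, 3)  # stand-in for a real scan
    for mu, sigma in fit_normal_clusters(cloud, k=8)[:2]:
        print(mu, np.linalg.det(sigma))
```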
Module | Operation | Time complexity
K-means clustering | Build the k-d tree | $2O(hK\lg K)$
K-means clustering | Update clusters and compute normal parameters | $2O(hN\lg K)$
KL divergence computation | Search nearest-neighbor matching normal distributions | $O(HK\lg K)$
KL divergence computation | Compute KL divergence and generate weights | $O(HK)$
Lie algebra solver | Optimize the rigid transformation for the next iteration | $O(HK)$
Table 1  Time complexity of the pair-wise point cloud registration method
Module | Operation | Space complexity
K-means clustering | Build the k-d tree | $2O(DK)$
K-means clustering | Solve the centers of the normal distribution clusters | $2O(DK)$
K-means clustering | Solve the covariance matrices of the normal distribution clusters | $2O(D^2K)$
KL divergence computation | Compute KL divergence and generate weights | $O(K)$
Lie algebra solver | Optimize the rigid transformation for the next iteration | $O(1)$
Table 2  Space complexity of the pair-wise point cloud registration method
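Tables 1 and 2 decompose the method into three modules: K-means clustering, KL divergence computation, and a Lie algebra solver. As a rough illustration of the last module only, the sketch below maps an se(3) increment to a rigid transform through the exponential map, with SciPy's Rotation handling the rotational part; the paper's actual objective and update rule are given in the full text and are not reproduced here.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def se3_exp(xi):
    """Map an se(3) increment xi = [rho, phi] (3D translation part rho,
    3D rotation vector phi) to a 4x4 homogeneous rigid transform."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    R = Rotation.from_rotvec(phi).as_matrix()  # matrix exponential of the so(3) part
    if theta < 1e-8:
        V = np.eye(3)  # small-angle limit: V -> I
    else:
        a = phi / theta
        K = np.array([[0.0, -a[2], a[1]],
                      [a[2], 0.0, -a[0]],
                      [-a[1], a[0], 0.0]])
        V = (np.eye(3)
             + (1.0 - np.cos(theta)) / theta * K
             + (theta - np.sin(theta)) / theta * (K @ K))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T


# One hypothetical iteration: compose a small increment with the current estimate.
T_current = np.eye(4)
xi = np.array([0.01, 0.0, -0.02, 0.001, 0.002, 0.0])  # made-up increment
T_current = se3_exp(xi) @ T_current
```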
Fig. 3  Registration results with different average numbers of points per normal distribution cluster
Method | Bunny (RMSE / SR) | Armadillo (RMSE / SR) | Buddha (RMSE / SR) | Dragon (RMSE / SR)
Initial | 13.4487 / — | 19.3405 / — | 16.9126 / — | 17.9725 / —
TrICP | 2.7594 / 0.78 | 0.1728 / 1.00 | 2.5540 / 0.64 | 0.9472 / 0.93
NDT | 12.7811 / 0.00 | 17.4936 / 0.00 | 6.3208 / 0.50 | 17.6523 / 0.00
SpICP | 5.7842 / 0.56 | 1.7818 / 0.91 | 5.1119 / 0.21 | 4.4801 / 0.57
SmICP | 24.8743 / 0.22 | 0.2554 / 1.00 | 0.3101 / 1.00 | 1.1178 / 0.86
AAICP | 16.2111 / 0.11 | 0.2774 / 1.00 | 0.2705 / 1.00 | 0.9205 / 0.93
RTC | 3.8347 / 0.56 | 2.5884 / 0.73 | 1.7657 / 0.86 | 1.6472 / 0.79
MICP | 5.4027 / 0.44 | 0.2120 / 1.00 | 0.2496 / 1.00 | 0.9594 / 0.93
RSICP | 27.9910 / 0.56 | 0.1591 / 1.00 | 0.2510 / 1.00 | 0.3352 / 1.00
Proposed method | 0.4249 / 1.00 | 0.1619 / 1.00 | 0.2472 / 1.00 | 0.3289 / 1.00
Table 3  Accuracy comparison of different methods on four datasets
Fig. 4  Color-gradient visualization of registration results of different methods
Fig. 5  Running time comparison of different pair-wise point cloud registration methods
Method | SNR/dB | RMSE (Bunny) | RMSE (Armadillo) | RMSE (Buddha) | RMSE (Dragon)
TrICP | 50 | 2.7612±0.0017 | 0.1727±0.0003 | 2.5530±0.0009 | 0.9465±0.0004
TrICP | 25 | 2.7563±0.0081 | 0.1733±0.0014 | 2.4407±0.0199 | 0.9474±0.0028
NDT | 50 | 12.7890±0.0062 | 22.2961±0.0181 | 6.6362±0.4974 | 24.0140±0.0787
NDT | 25 | 12.7977±0.0232 | 17.3343±0.3254 | 6.8398±0.5018 | 17.5718±0.0923
SpICP | 50 | 5.7519±0.0089 | 1.6955±0.0031 | 5.2335±0.0019 | 4.6071±0.0038
SpICP | 25 | 5.8516±0.0171 | 1.7163±0.0020 | 5.1914±0.0025 | 4.5882±0.0046
SmICP | 50 | 20.3221±2.2295 | 0.2554±0.0002 | 0.3103±0.0003 | 1.1177±0.0005
SmICP | 25 | 20.4048±1.6212 | 0.2548±0.0009 | 0.3108±0.0003 | 1.1218±0.0020
AAICP | 50 | 16.2103±0.0013 | 0.2774±0.0002 | 0.2706±0.0003 | 0.9206±0.0003
AAICP | 25 | 16.2135±0.0051 | 0.2778±0.0009 | 0.2709±0.0006 | 0.9204±0.0010
RTC | 50 | 3.8398±0.0047 | 2.5661±0.0134 | 1.7657±0.0004 | 1.6704±0.0372
RTC | 25 | 3.8460±0.0246 | 2.6874±0.1911 | 1.7591±0.0027 | 1.4611±0.1315
MICP | 50 | 5.3758±0.0735 | 0.2111±0.0014 | 0.2535±0.0010 | 0.9608±0.0035
MICP | 25 | 5.4674±0.0644 | 0.2111±0.0022 | 0.2553±0.0019 | 0.9907±0.0473
RSICP | 50 | 27.9803±0.0200 | 0.1592±0.0002 | 0.2510±0.0003 | 0.3353±0.0005
RSICP | 25 | 27.7358±0.5571 | 0.1583±0.0006 | 0.2499±0.0007 | 0.3354±0.0012
Proposed method | 50 | 0.4265±0.0078 | 0.1611±0.0017 | 0.2470±0.0009 | 0.3287±0.0012
Proposed method | 25 | 0.4294±0.0185 | 0.1615±0.0015 | 0.2478±0.0012 | 0.3293±0.0028
Table 4  Registration results under different Gaussian noise levels (SNR = 50, 25 dB)
Dataset | Initial (RMSE) | P2DNDT (RMSE / t/s / SR) | D2DNDT (RMSE / t/s / SR) | Proposed method (RMSE / t/s / SR)
Bunny | 13.4487 | 4.4592 / 19.75 / 0.33 | 13.7552 / 3.99 / 0.33 | 0.4249 / 3.74 / 1.00
Armadillo | 19.3405 | 0.6452 / 14.58 / 0.91 | 0.7699 / 1.64 / 0.82 | 0.1619 / 2.33 / 1.00
Buddha | 16.9126 | 0.4295 / 42.88 / 1.00 | 0.4184 / 4.35 / 1.00 | 0.2472 / 9.47 / 1.00
Dragon | 17.9725 | 0.8273 / 15.99 / 1.00 | 3.6687 / 1.93 / 0.71 | 0.3289 / 3.56 / 1.00
Table 5  Ablation results on different datasets
Method | GazeboS (RMSE / t/s / SR) | GazeboW (RMSE / t/s / SR)
Initial | 0.2689 / — / — | 0.3685 / — / —
TrICP | 0.0279 / 13.88 / 0.84 | 0.0204 / 2.83 / 1.00
NDT | 0.1489 / 2.85 / 0.26 | 0.0784 / 1.97 / 0.63
SpICP | 0.0302 / 1242.40 / 0.77 | 0.0196 / 402.32 / 1.00
SmICP | 0.0709 / 34.42 / 0.74 | 0.0255 / 17.37 / 0.97
AAICP | 0.0299 / 5.77 / 0.71 | 0.0228 / 2.29 / 1.00
RTC | 0.0261 / 5.66 / 0.74 | 0.0229 / 1.94 / 1.00
MICP | 0.0277 / 2.24 / 0.71 | 0.0575 / 1.54 / 0.93
RSICP | 0.0211 / 26.04 / 0.93 | 0.0197 / 7.88 / 0.97
Proposed method | 0.0208 / 9.48 / 0.90 | 0.0192 / 4.25 / 1.00
Table 6  Registration results on real-world scene datasets
1 GOJCIC Z, ZHOU C, WEGNER J D, et al. Learning multiview 3D point cloud registration [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 1759–1769.
2 LI P, WANG R, WANG Y, et al Evaluation of the ICP algorithm in 3D point cloud registration[J]. IEEE Access, 2020, 8: 68030- 68048
doi: 10.1109/ACCESS.2020.2986470
3 CHEN X, MA H, WAN J, et al. Multi-view 3D object detection network for autonomous driving [C]// IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6526–6534.
4 ZHENG Y, LI Y, YANG S, et al Global-PBNet: a novel point cloud registration for autonomous driving[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23 (11): 22312- 22319
doi: 10.1109/TITS.2022.3153133
5 FAN Guangyu, GONG Yuchen, RAO Lei, et al Registration of laser point cloud and panoramic image based on gray similarity[J]. Journal of Zhejiang University: Engineering Science, 2022, 56 (8): 1633-1639
6 GONG Jingyu, LOU Yujing, LIU Fengqi, et al Scene point cloud understanding and reconstruction technologies in 3D space[J]. Journal of Image and Graphics, 2023, 28 (6): 1741-1766
doi: 10.11834/jig.230004
7 BESL P J, MCKAY N D A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14 (2): 239- 256
doi: 10.1109/34.121791
8 BIBER P, STRASSER W. The normal distributions transform: a new approach to laser scan matching [C]// IEEE International Conference on Intelligent Robots and Systems. Las Vegas: IEEE, 2003: 2743–2748.
9 CHARLES R Q, HAO S, MO K, et al. PointNet: deep learning on point sets for 3D classification and segmentation [C]// IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 77–85.
10 CHETVERIKOV D, SVIRKO D, STEPANOV D, et al. The trimmed iterative closest point algorithm [C]// International Conference on Pattern Recognition. Quebec City: IEEE, 2002: 545–548.
11 BOUAZIZ S, TAGLIASACCHI A, PAULY M Sparse iterative closest point[J]. Computer Graphics Forum, 2013, 32 (5): 113- 123
doi: 10.1111/cgf.12178
12 PAVLOV A L, OVCHINNIKOV G W, DERBYSHEV D Y, et al. AA-ICP: iterative closest point with Anderson acceleration [C]// IEEE International Conference on Robotics and Automation. Brisbane: IEEE, 2018: 3407–3412.
13 ZHANG J, YAO Y, DENG B Fast and robust iterative closest point[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44 (7): 3450- 3466
14 YANG J, LI H, CAMPBELL D, et al Go-ICP: a globally optimal solution to 3D ICP point-set registration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38 (11): 2241- 2254
doi: 10.1109/TPAMI.2015.2513405
15 SHI X, LIU T, HAN X Improved iterative closest point (ICP) 3D point cloud registration algorithm based on point cloud filtering and adaptive fireworks for coarse registration[J]. International Journal of Remote Sensing, 2020, 41 (8): 3197- 3220
doi: 10.1080/01431161.2019.1701211
16 LI Z, WANG C, MA J, et al Point set registration via rigid transformation consensus[J]. Computers and Electrical Engineering, 2022, 101: 108098
doi: 10.1016/j.compeleceng.2022.108098
17 XU Wenfei, JIN Li, HAN Xu, et al Mirrored iterative closest point algorithm for missing point cloud registration[J]. Journal of Xi’an Jiaotong University, 2023, 57 (7): 201-212, 220
doi: 10.7652/xjtuxb202307019
18 CHEN Y, MEDIONI G Object modelling by registration of multiple range images[J]. Image and Vision Computing, 1992, 10 (3): 145- 155
doi: 10.1016/0262-8856(92)90066-C
19 RUSINKIEWICZ S, LEVOY M. Efficient variants of the ICP algorithm [C]// 3rd International Conference on 3-D Digital Imaging and Modeling. Quebec City: IEEE, 2001: 145–152.
20 RUSINKIEWICZ S A symmetric objective function for ICP[J]. ACM Transactions on Graphics, 2019, 38 (4): 1- 7
21 LI J, HU Q, ZHANG Y, et al Robust symmetric iterative closest point[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 185: 219- 231
doi: 10.1016/j.isprsjprs.2022.01.019
22 MAGNUSSON M, LILIENTHAL A, DUCKETT T Scan registration for autonomous mining vehicles using 3D-NDT[J]. Journal of Field Robotics, 2007, 24 (10): 803- 827
doi: 10.1002/rob.20204
23 STOYANOV T, MAGNUSSON M, LILIENTHAL A J. Point set registration through minimization of the L2 distance between 3D-NDT models [C]// IEEE International Conference on Robotics and Automation. Saint Paul: IEEE, 2012: 5196–5201.
24 AOKI Y, GOFORTH H, SRIVATSAN R A, et al. PointNetLK: robust and efficient point cloud registration using PointNet [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 7156–7165.
25 WANG Y, SOLOMON J. Deep closest point: learning representations for point cloud registration [C]// IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 3522–3531.
26 WANG Y, SUN Y, LIU Z, et al Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38 (5): 1- 12
27 YUAN W, ECKART B, KIM K, et al. DeepGMR: learning latent Gaussian mixture models for registration [C]// European Conference on Computer Vision. Glasgow: Springer, 2020: 733–750.
28 XU H, LIU S, WANG G, et al. OMNet: learning overlapping mask for partial-to-partial point cloud registration [C]// IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 3112–3121.
29 MEI G, TANG H, HUANG X, et al. Unsupervised deep probabilistic approach for partial point cloud registration [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023: 13611–13620.
30 ZHU J, JIANG Z, EVANGELIDIS G D, et al Efficient registration of multi-view point sets by K-means clustering[J]. Information Sciences, 2019, 488: 205- 218
doi: 10.1016/j.ins.2019.03.024
31 ZHU J, MU J, YAN C B, et al 3DMNDT: 3D multi-view registration method based on the normal distributions transform[J]. IEEE Transactions on Automation Science and Engineering, 2024, 21 (1): 488- 501
doi: 10.1109/TASE.2022.3225679
32 Stanford University. The Stanford 3D scanning repository [EB/OL]. (2014-08-19) [2021-10-01]. https://graphics.stanford.edu/data/3Dscanrep/.