Journal of Zhejiang University (Engineering Science)  2023, Vol. 57 Issue (12): 2430-2438    DOI: 10.3785/j.issn.1008-973X.2023.12.010
Computer Technology
Multi-objective classification method of nursery scene based on 3D laser point cloud
Hui LIU(),Xiu-li WANG,Yue SHEN,Jie XU
School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China
Abstract:

To improve the accuracy of multi-target classification and recognition in nursery scenes, a multi-objective classification method based on PointNet++ fused with a convolutional block attention module (CBAM) was proposed. The attention module was embedded in the feature extraction layers of the original PointNet++ network to enhance the extraction of key features and suppress useless ones, reducing the number of parameters while improving the network's feature learning capability. The LeakyReLU function was introduced as the activation function, yielding a good nonlinear transformation effect. To verify the classification performance of the proposed method in nursery scenes, a dataset was created by laser-scanning multiple targets (trees of different forms) and non-targets (pedestrians, signs, planting pots, etc.) in a nursery, and classification experiments were conducted on this dataset. Experimental results showed that the proposed method achieved an overall classification accuracy of 96.38% and a classification speed of 0.04 frame/s on the dataset, both better than those of the original PointNet++ network.
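The two modifications described above (CBAM-style attention and the LeakyReLU activation) can be illustrated with a minimal NumPy sketch. This is a simplified, hypothetical illustration of the channel-attention branch of CBAM applied to a point-feature matrix, not the authors' implementation; the feature sizes, reduction ratio, and weight initialization are all assumptions.

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    """LeakyReLU: passes positives unchanged, scales negatives by a small slope."""
    return np.where(x > 0, x, negative_slope * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """CBAM-style channel attention for an (N_points, C) point-feature matrix.

    Average-pooled and max-pooled channel descriptors pass through a shared
    two-layer MLP (w1, w2); their sum, squashed by a sigmoid, rescales channels.
    """
    avg_desc = features.mean(axis=0)           # (C,)
    max_desc = features.max(axis=0)            # (C,)
    mlp = lambda d: w2 @ leaky_relu(w1 @ d)    # shared MLP, LeakyReLU inside
    weights = sigmoid(mlp(avg_desc) + mlp(max_desc))  # (C,) weights in (0, 1)
    return features * weights                  # channel-reweighted features

# Toy example: 4 points with 8-dim features, reduction ratio 2 (all assumed).
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
w1 = rng.standard_normal((4, 8))   # C -> C/r
w2 = rng.standard_normal((8, 4))   # C/r -> C
out = channel_attention(feats, w1, w2)
print(out.shape)  # (4, 8)
```

Because the sigmoid weights lie strictly between 0 and 1, the module can only attenuate channels; informative channels receive weights near 1 while uninformative ones are suppressed, which is the "enhance key features, weaken useless features" behavior described in the abstract.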

Key words: agricultural robot    deep learning    point cloud    target classification    PointNet++    attention mechanism
Received: 2023-02-27    Published online: 2023-12-27
CLC:  TP 391  
Supported by: National Natural Science Foundation of China (32171908); Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions
About the author: LIU Hui (1980—), female, professor, engaged in research on agricultural electrification and automation, intelligent control and signal processing. orcid.org/0000-0002-4282-7378. E-mail: amity@ujs.edu.cn
Cite this article:

Hui LIU, Xiu-li WANG, Yue SHEN, Jie XU. Multi-objective classification method of nursery scene based on 3D laser point cloud. Journal of Zhejiang University (Engineering Science), 2023, 57(12): 2430-2438.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2023.12.010        https://www.zjujournals.com/eng/CN/Y2023/V57/I12/2430

Fig. 1  Nursery point cloud data acquisition scene
Parameter            Value   Parameter              Value
Range/m              260     Field of view/(°)      81.7×25.1
Distance error/cm    2       Data rate/(bit·s⁻¹)    240 000
Angle error/(°)      0.05    Temperature/℃          −40~85
Table 1  Device parameters of the Livox Horizon laser scanner
Fig. 2  Sample images from the nursery dataset
Fig. 3  Sample point clouds of the nursery dataset after preprocessing
Fig. 4  Point cloud changes before and after jittering
Category     na   nb      Category   na   nb
Tall tree    907  1 250   Shrub      471  663
Pedestrian   206  412     Other      171  342
Dead tree    225  450     Sign       120  240
Table 2  Number of samples of each category before (na) and after (nb) augmentation
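The augmentation summarized in Table 2 and illustrated by the before/after comparison in Fig. 4 relies on point-cloud jittering. A generic sketch of this standard technique is given below; the noise scale, clipping bound, and cloud size are assumed values, not parameters reported by the paper.

```python
import numpy as np

def jitter_point_cloud(points, sigma=0.01, clip=0.05, seed=None):
    """Add clipped Gaussian noise to each point independently.

    A common point-cloud augmentation: per-coordinate noise drawn from
    N(0, sigma^2) is clipped to [-clip, clip] so no point moves too far.
    """
    rng = np.random.default_rng(seed)
    noise = np.clip(sigma * rng.standard_normal(points.shape), -clip, clip)
    return points + noise

# Toy cloud of 1024 points at the origin makes the added noise easy to inspect.
cloud = np.zeros((1024, 3))
aug = jitter_point_cloud(cloud, seed=42)
print(aug.shape)  # (1024, 3)
```

Jittering roughly doubles the usable sample count (consistent with the na vs. nb columns of Table 2) while preserving each object's overall shape, since every displacement is bounded by the clip value.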
Fig. 5  PointNet++ classification network model
Fig. 6  Improved PointNet++ multi-objective classification network
Network               Acc/%
                      Tall tree  Pedestrian  Dead tree  Shrub  Sign   Other
PointNet              91.73      90.04       61.44      88.81  85.33  84.83
PointNet++            95.82      96.08       90.93      98.13  97.50  88.46
Improved PointNet++   97.36      94.26       93.21      98.87  95.00  92.59
Table 3  Comparison of per-class classification results of different networks on the nursery dataset
Model                 OA/%   mAcc/%  v/(frame·s⁻¹)
PointNet              88.36  86.72   0.01
PointNet++            94.29  93.64   0.07
PointNet++(SENet)     94.53  94.08   0.06
Improved PointNet++   96.38  95.86   0.04
Table 4  Comparison of classification results of different networks on the nursery dataset
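The OA (overall accuracy) and mAcc (mean class accuracy) metrics reported in Tables 4 to 6 can be computed from a confusion matrix. The sketch below shows the standard definitions on a toy 3-class matrix; it is a generic illustration, not code from the paper.

```python
import numpy as np

def overall_accuracy(cm):
    """OA: correctly classified samples over all samples (trace / total)."""
    return np.trace(cm) / cm.sum()

def mean_class_accuracy(cm):
    """mAcc: per-class recall (diagonal / row sum), averaged over classes."""
    per_class = np.diag(cm) / cm.sum(axis=1)
    return per_class.mean()

# Toy 3-class confusion matrix (rows = true labels, columns = predictions).
cm = np.array([[9, 1, 0],
               [1, 3, 1],
               [0, 2, 8]])
print(round(overall_accuracy(cm), 4))    # 0.8
print(round(mean_class_accuracy(cm), 4)) # 0.7667
```

OA weights every sample equally, so it is dominated by frequent classes, while mAcc weights every class equally; reporting both, as the tables do, guards against a model that neglects rare classes such as signs.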
Fig. 7  Training accuracy and loss curves of each network on the nursery dataset
Fig. 8  Training accuracy and loss curves of each network on the ModelNet40 dataset
Model                 OA/%   mAcc/%
PointNet              89.97  85.83
PointNet++            91.91  88.61
Improved PointNet++   92.87  89.82
Table 5  Comparison of classification results of each network on the ModelNet40 dataset
Unit: %
PointNet++  CBAM  LeakyReLU   Nursery dataset   ModelNet40
                              OA     mAcc       OA     mAcc
√                             94.29  93.64      91.91  88.61
√           √                 95.91  95.47      92.69  89.12
√                 √           94.74  94.05      92.01  89.20
√           √     √           96.38  95.86      92.87  89.82
Table 6  Ablation experiments on the nursery dataset and the ModelNet40 dataset