Journal of ZheJiang University (Engineering Science)  2023, Vol. 57 Issue (12): 2430-2438    DOI: 10.3785/j.issn.1008-973X.2023.12.010
    
Multi-objective classification method of nursery scene based on 3D laser point cloud
Hui LIU, Xiu-li WANG, Yue SHEN, Jie XU
School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China

Abstract  

To improve the accuracy of multi-target classification and recognition in nursery scenes, a multi-objective classification method that fuses the convolutional block attention module (CBAM) with PointNet++ was proposed. The attention module was embedded in the feature extraction layers of the original PointNet++ network to strengthen the extraction of key features and suppress useless ones, reducing the number of parameters while improving the network's feature learning capability. The LeakyReLU function was introduced as the activation function, which yielded a good nonlinear transformation effect. To verify the classification performance of the proposed method on nursery scenes, a dataset was built from laser scans of multiple targets (trees of different forms) and non-targets (pedestrians, signs, planting pots, etc.) in the nursery, and classification experiments were conducted on this dataset. Experimental results showed that the proposed method achieved an overall classification accuracy of 96.38% and a classification speed of 0.04 frame/s on the dataset, both better than those of the original PointNet++ network.
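A minimal PyTorch sketch of the idea described above is given below: a CBAM-style attention block applied to per-point features from a PointNet++ set-abstraction layer, with LeakyReLU as the activation. The class name PointCBAM, the (B, C, N) feature shape, the reduction ratio, and the negative slope are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PointCBAM(nn.Module):
    """Channel + spatial attention for per-point features of shape (B, C, N)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze over points, excite over channels.
        self.channel_mlp = nn.Sequential(
            nn.Conv1d(channels, channels // reduction, 1, bias=False),
            nn.LeakyReLU(negative_slope=0.01, inplace=True),
            nn.Conv1d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: one conv over the [avg; max] channel-pooled maps.
        self.spatial_conv = nn.Conv1d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) point-wise features from a set-abstraction stage.
        avg = self.channel_mlp(torch.mean(x, dim=2, keepdim=True))
        mx = self.channel_mlp(torch.max(x, dim=2, keepdim=True).values)
        x = x * torch.sigmoid(avg + mx)                      # channel attention
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(pooled))  # spatial attention

# Example: attention applied to 128-channel features of 512 sampled points.
feats = torch.randn(4, 128, 512)        # (batch, channels, points)
print(PointCBAM(128)(feats).shape)      # torch.Size([4, 128, 512])
```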



Key words: agricultural robot; deep learning; point cloud; target classification; PointNet++; attention mechanism
Received: 27 February 2023      Published: 27 December 2023
CLC:  TP 391  
Fund: National Natural Science Foundation of China (32171908); Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)
Cite this article:

Hui LIU,Xiu-li WANG,Yue SHEN,Jie XU. Multi-objective classification method of nursery scene based on 3D laser point cloud. Journal of ZheJiang University (Engineering Science), 2023, 57(12): 2430-2438.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2023.12.010     OR     https://www.zjujournals.com/eng/Y2023/V57/I12/2430


Fig.1 Nursery point cloud data acquisition scene
Parameter Value Parameter Value
Range/m 260 Field of view/(°) 81.7×25.1
Distance error/cm 2 Data rate/(bit·s−1) 240 000
Angle error/(°) 0.05 Temperature/℃ −40~85
Tab.1 Specifications of Livox Horizon laser scanner
Fig.2 Partial sample map of nursery datasets
Fig.3 Pre-processed sample point cloud of nursery datasets
Fig.4 Point cloud changes before and after jittering
Category na nb Category na nb
Tall trees 907 1 250 Shrubs 471 663
Pedestrians 206 412 Others 171 342
Dead trees 225 450 Signs 120 240
Tab.2 Number of different types of samples before and after amplification
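As Fig. 4 and Tab. 2 indicate, the samples were amplified by augmentation such as point-cloud jittering. The NumPy sketch below shows one common form of clipped Gaussian jitter; the noise scale sigma and clipping bound clip are assumed values, not parameters reported in the paper.

```python
import numpy as np

def jitter_point_cloud(points: np.ndarray, sigma: float = 0.01,
                       clip: float = 0.05) -> np.ndarray:
    """Add clipped Gaussian noise to every coordinate of an (N, 3) point cloud."""
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return (points + noise).astype(points.dtype)

# Usage: jitter a 1024-point cloud to create an extra training sample.
cloud = np.random.rand(1024, 3).astype(np.float32)
augmented = jitter_point_cloud(cloud)
print(augmented.shape)  # (1024, 3)
```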
Fig.5 PointNet++ classification network model
Fig.6 Improved PointNet++ multi-object classification network
Network Acc/%
Tall trees Pedestrians Dead trees Shrubs Signs Others
PointNet 91.73 90.04 61.44 88.81 85.33 84.83
PointNet++ 95.82 96.08 90.93 98.13 97.50 88.46
Improved PointNet++ 97.36 94.26 93.21 98.87 95.00 92.59
Tab.3 Comparison of classification results of different networks for labels in nursery datasets
Model OA/% mAcc/% v/(frame·s−1)
PointNet 88.36 86.72 0.01
PointNet++ 94.29 93.64 0.07
PointNet++(SENet) 94.53 94.08 0.06
Improved PointNet++ 96.38 95.86 0.04
Tab.4 Comparison of classification results of different networks in nursery datasets
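The metrics compared in Tab. 3-6 are overall accuracy (OA) and mean class accuracy (mAcc). The short NumPy sketch below shows how these are commonly computed from label arrays; the function name and the dummy labels are illustrative, not taken from the paper.

```python
import numpy as np

def oa_and_macc(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int):
    """Return overall accuracy and mean per-class accuracy, in percent."""
    oa = 100.0 * np.mean(y_true == y_pred)
    per_class = [100.0 * np.mean(y_pred[y_true == c] == c)
                 for c in range(num_classes) if np.any(y_true == c)]
    return oa, float(np.mean(per_class))

# Usage with dummy labels for the six nursery classes.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, rng.integers(0, 6, size=1000))
print(oa_and_macc(y_true, y_pred, num_classes=6))
```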
Fig.7 Training accuracy and loss function curve of each network in nursery datasets
Fig.8 Training accuracy curve and loss function curve of each network in ModelNet40 datasets
Model OA/% mAcc/%
PointNet 89.97 85.83
PointNet++ 91.91 88.61
Improved PointNet++ 92.87 89.82
Tab.5 Comparison of classification results of each network in ModelNet40 datasets
Unit: %
PointNet++ CBAM LeakyReLU Nursery dataset ModelNet40
OA mAcc OA mAcc
94.29 93.64 91.91 88.61
95.91 95.47 92.69 89.12
94.74 94.05 92.01 89.20
96.38 95.86 92.87 89.82
Tab.6 Ablation experiments in nursery datasets and ModelNet40 datasets