Journal of Zhejiang University (Agriculture and Life Sciences)  2023, Vol. 49 Issue (6): 881-892    DOI: 10.3785/j.issn.1008-9209.2022.10.181
Agricultural engineering     
Classification of Fritillaria thunbergii appearance quality based on machine vision and machine learning technology
Chengye DONG, Dongfang LI, Huaiqu FENG, Sifang LONG, Te XI, Qin’an ZHOU, Jun WANG
College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, Zhejiang, China

Abstract  

To classify the appearance quality grades of Fritillaria thunbergii, an F. thunbergii dataset was constructed with the DigiEye imaging system and an image annotation tool, and several statistical learning and object detection algorithms were selected for training and testing on this dataset. The results showed that the model trained with YOLO-X, of the YOLO (you only look once) series, performed best. To further optimize YOLO-X for the particular features of the F. thunbergii dataset, a dilated convolution structure was embedded at the end of its backbone feature extraction network to strengthen the model's sensitivity to scale features. With a dilated rate of 4, the mean average precision (mAP) of the improved model rose to 99.01%; the average precisions (APs) for superfine, level one, level two, moth-eaten, mildewed, and broken F. thunbergii rose to 99.97%, 98.33%, 98.47%, 98.71%, 99.73%, and 98.85%, respectively; and the weighted harmonic means of precision and recall (F1) rose to 0.99, 0.92, 0.94, 0.97, 0.99, and 0.97, respectively. These adjustments enhanced the detection performance of the model without increasing the number of parameters or the computational cost, and without major changes to the original model. This study provides a scientific basis for the subsequent construction of an F. thunbergii detection platform.
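The core modification, embedding a dilated convolution at the end of the backbone, can be illustrated with a minimal sketch. The NumPy function below is illustrative only (a plain "valid" 2-D convolution, not the authors' YOLOX-DC implementation): spacing the taps of a 3×3 kernel d pixels apart enlarges the receptive field to (2d+1)×(2d+1) without adding any weights.

```python
import numpy as np

def dilated_conv2d(x: np.ndarray, w: np.ndarray, d: int) -> np.ndarray:
    """'Valid' 2-D convolution of feature map x with kernel w at dilation
    rate d: kernel taps are spaced d pixels apart, so a 3x3 kernel at d=4
    covers a 9x9 window while keeping only 9 weights."""
    k = w.shape[0]
    span = (k - 1) * d                      # extent covered by the dilated kernel
    h, wd = x.shape
    out = np.zeros((h - span, wd - span))
    for i in range(k):
        for j in range(k):
            out += w[i, j] * x[i * d : i * d + h - span,
                               j * d : j * d + wd - span]
    return out

x = np.arange(400.0).reshape(20, 20)        # toy 20x20 feature map
w = np.full((3, 3), 1 / 9)                  # 3x3 averaging kernel
print(dilated_conv2d(x, w, 4).shape)        # (12, 12): 20 - (3 - 1) * 4
```

Because the number of weights is unchanged, the parameter count and computational cost stay constant while the receptive field grows, which matches the abstract's claim.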



Key words: Fritillaria thunbergii; statistical learning; deep learning; object detection; object detection algorithm YOLO-X; dilated convolution
Received: 18 October 2022      Published: 25 December 2023
CLC:  TP391.4  
Corresponding Authors: Jun WANG     E-mail: DongChengye@zju.edu.cn;jwang@zju.edu.cn
Cite this article:

Chengye DONG, Dongfang LI, Huaiqu FENG, Sifang LONG, Te XI, Qin’an ZHOU, Jun WANG. Classification of Fritillaria thunbergii appearance quality based on machine vision and machine learning technology. Journal of Zhejiang University (Agriculture and Life Sciences), 2023, 49(6): 881-892.

URL:

https://www.zjujournals.com/agr/10.3785/j.issn.1008-9209.2022.10.181     OR     https://www.zjujournals.com/agr/Y2023/V49/I6/881


Fig. 1 Partial images of the dataset (A1-A6) and the image acquisition equipment (B). A1-A6: superfine, level one, level two, mildewed, moth-eaten, and broken F. thunbergii, respectively.
Fig. 2 Schematic diagram of the research method

| Preselective algorithm | Backbone feature extraction network | Characteristics of backbone feature extraction network |
|---|---|---|
| YOLO-V3 | DarkNet-53 | Uses the DarkNet structure and the LeakyReLU activation function |
| YOLO-V4 | CSPDarkNet-53 | Adds the CSPNet structure to DarkNet (forming CSPDarkNet); uses the Mish activation function |
| YOLO-V5 | CSPDarkNet-53 | Adds Focus and spatial pyramid pooling (SPP) structures to CSPDarkNet; uses the SiLU activation function |
| YOLO-X | CSPDarkNet-53 | Adds Focus and SPP structures to CSPDarkNet; uses the SiLU activation function |

Table 1 Characteristics of backbone feature extraction networks of the YOLO series
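The Focus structure listed in Table 1 can be sketched as a slicing operation; this NumPy example illustrates the idea and is not the paper's implementation. Every second pixel is sampled into four spatially offset sub-images that are stacked on the channel axis, halving the spatial size while preserving all pixels:

```python
import numpy as np

def focus(x: np.ndarray) -> np.ndarray:
    """Focus slicing (YOLO-V5/YOLO-X): split an (N, C, H, W) tensor into
    four spatially offset sub-images and concatenate them on the channel
    axis, giving (N, 4C, H/2, W/2) with no loss of information."""
    return np.concatenate([x[..., ::2, ::2], x[..., 1::2, ::2],
                           x[..., ::2, 1::2], x[..., 1::2, 1::2]], axis=1)

x = np.zeros((1, 3, 640, 640))              # a typical RGB input size
print(focus(x).shape)                       # (1, 12, 320, 320)
```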
Fig. 3 Schematic diagrams of YOLOX-DC structure (A) and its backbone feature extraction network inference process (B)

| Model | Index | Superfine | Level one | Level two | Moth-eaten | Mildewed | Broken |
|---|---|---|---|---|---|---|---|
| DT | P/% | 95.00 | 66.14 | 70.83 | 88.06 | 77.12 | 68.80 |
| DT | R/% | 64.41 | 67.20 | 80.19 | 94.40 | 77.78 | 75.19 |
| DT | F1 | 0.77 | 0.67 | 0.75 | 0.91 | 0.78 | 0.72 |
| DT | A/% | 76.53 |  |  |  |  |  |
| SVM | P/% | 95.76 | 84.40 | 85.04 | 99.23 | 90.83 | 86.32 |
| SVM | R/% | 87.60 | 95.97 | 95.58 | 98.47 | 86.84 | 75.23 |
| SVM | F1 | 0.92 | 0.90 | 0.90 | 0.99 | 0.89 | 0.80 |
| SVM | A/% | 90.28 |  |  |  |  |  |
| YOLO-V3 | AP/% | 61.14 | 40.00 | 37.90 | 97.67 | 93.04 | 95.48 |
| YOLO-V3 | F1 | 0.24 | 0.05 | 0.00 | 0.93 | 0.85 | 0.94 |
| YOLO-V3 | mAP/% | 70.87 |  |  |  |  |  |
| YOLO-V3 | FPS | 29.10 |  |  |  |  |  |
| YOLO-V4 | AP/% | 92.93 | 68.86 | 88.37 | 99.25 | 94.85 | 96.87 |
| YOLO-V4 | F1 | 0.85 | 0.76 | 0.85 | 0.92 | 0.92 | 0.95 |
| YOLO-V4 | mAP/% | 90.19 |  |  |  |  |  |
| YOLO-V4 | FPS | 42.33 |  |  |  |  |  |
| YOLO-V5 | AP/% | 64.32 | 84.22 | 96.18 | 98.95 | 99.89 | 96.20 |
| YOLO-V5 | F1 | 0.70 | 0.71 | 0.87 | 0.97 | 0.98 | 0.97 |
| YOLO-V5 | mAP/% | 89.96 |  |  |  |  |  |
| YOLO-V5 | FPS | 30.95 |  |  |  |  |  |
| YOLO-X | AP/% | 98.39 | 72.22 | 96.59 | 98.84 | 99.53 | 98.81 |
| YOLO-X | F1 | 0.90 | 0.63 | 0.84 | 0.95 | 0.98 | 0.96 |
| YOLO-X | mAP/% | 94.06 |  |  |  |  |  |
| YOLO-X | FPS | 30.65 |  |  |  |  |  |
| Faster R-CNN | AP/% | 88.91 | 69.75 | 92.52 | 98.91 | 99.48 | 98.61 |
| Faster R-CNN | F1 | 0.79 | 0.66 | 0.83 | 0.96 | 0.97 | 0.95 |
| Faster R-CNN | mAP/% | 91.36 |  |  |  |  |  |
| Faster R-CNN | FPS | 36.28 |  |  |  |  |  |

Note: the six class columns give per-class results for each quality grade of F. thunbergii.

Table 2 Test results of models trained by preselective algorithms on the test set of F. thunbergii
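The indices in Table 2 are related by simple formulas, and the following Python sketch reproduces two entries from the table: F1 is the harmonic mean 2PR/(P + R) of precision P and recall R, and mAP is the mean of the per-class APs.

```python
def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * p * r / (p + r)

# DT, superfine class (Table 2): P = 95.00 %, R = 64.41 %
print(round(f1_score(0.9500, 0.6441), 2))   # 0.77

# YOLO-X per-class APs (Table 2); their mean is the reported mAP
ap = [98.39, 72.22, 96.59, 98.84, 99.53, 98.81]
print(round(sum(ap) / len(ap), 2))          # 94.06
```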

| Dilated rate | Index | Superfine | Level one | Level two | Moth-eaten | Mildewed | Broken |
|---|---|---|---|---|---|---|---|
| 2 | AP/% | 99.51 | 93.53 | 96.97 | 97.85 | 99.77 | 97.48 |
| 2 | F1 | 0.95 | 0.84 | 0.81 | 0.96 | 0.98 | 0.97 |
| 2 | mAP/% | 97.52 |  |  |  |  |  |
| 2 | FPS | 28.97 |  |  |  |  |  |
| 3 | AP/% | 99.37 | 98.52 | 97.62 | 97.75 | 99.93 | 98.37 |
| 3 | F1 | 0.97 | 0.94 | 0.93 | 0.97 | 0.99 | 0.95 |
| 3 | mAP/% | 98.59 |  |  |  |  |  |
| 3 | FPS | 29.18 |  |  |  |  |  |
| 4 | AP/% | 99.97 | 98.33 | 98.47 | 98.71 | 99.73 | 98.85 |
| 4 | F1 | 0.99 | 0.92 | 0.94 | 0.97 | 0.99 | 0.97 |
| 4 | mAP/% | 99.01 |  |  |  |  |  |
| 4 | FPS | 29.13 |  |  |  |  |  |
| 5 | AP/% | 98.63 | 95.83 | 94.89 | 98.34 | 99.86 | 98.99 |
| 5 | F1 | 0.94 | 0.81 | 0.89 | 0.96 | 0.98 | 0.97 |
| 5 | mAP/% | 97.76 |  |  |  |  |  |
| 5 | FPS | 29.02 |  |  |  |  |  |
| 6 | AP/% | 98.16 | 94.31 | 98.77 | 98.65 | 99.55 | 97.11 |
| 6 | F1 | 0.92 | 0.86 | 0.75 | 0.97 | 0.92 | 0.96 |
| 6 | mAP/% | 97.76 |  |  |  |  |  |
| 6 | FPS | 29.05 |  |  |  |  |  |

Table 3 Test results of models trained by YOLOX-DC with different dilated rates on the test set
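The dilated rates compared in Table 3 control the effective extent of the 3×3 kernel: a k×k kernel at dilation rate d spans k + (k - 1)(d - 1) pixels, so rates 2-6 correspond to 5-13 pixel receptive fields. A small pure-Python check (illustrative arithmetic only):

```python
def effective_extent(k: int, d: int) -> int:
    """Pixel extent spanned by a k x k kernel at dilation rate d."""
    return k + (k - 1) * (d - 1)

# Rates 2-6 from Table 3 applied to a 3x3 kernel:
for d in range(2, 7):
    print(d, effective_extent(3, d))        # 5, 7, 9, 11, 13 pixels
```

The best-performing rate of 4 thus corresponds to a 9×9 effective window, obtained without any extra parameters.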
Fig. 4 Prediction results by YOLO-X. A-F: single-object image detection results; G-J: multi-object image detection results. The red crosses at the bottom of the images mark false or missed detections. The number in each label is the model's confidence for the detected class of F. thunbergii; the same applies to Fig. 5.
Fig. 5 Prediction results by YOLOX-DC (dilated rate = 4). A-F: single-object image detection results; G-J: multi-object image detection results.
Fig. 6 Intermediate activations at corresponding positions of single-object images of F. thunbergii by YOLO-X and YOLOX-DC
Fig. 7 Intermediate activations at corresponding positions of multi-object images of F. thunbergii by YOLO-X and YOLOX-DC