Special Topic: Crop Phenotyping Technologies and Applications
Maize tassel segmentation based on deep learning method and RGB image |
Xun YU1,2, Zhe WANG1, Haitao JING1, Xiuliang JIN2, Chenwei NIE2, Yi BAI2, Zheng WANG3
1. School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China
2. Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing 100081, China
3. College of Energy, Chengdu University of Technology, Chengdu 610059, China
Abstract: This study evaluates the accuracy and stability of deep learning methods for maize tassel segmentation across tasseling stages and varieties. RGB images were collected from July to September 2019 at the experimental base of the Chinese Academy of Agricultural Sciences in Xinxiang City, Henan Province. Four semantic segmentation models (PspNet, DeepLab V3+, SegNet, and U-Net), each built on a lightweight network as the feature extraction layer, were compared for maize tassel segmentation accuracy. The U-Net model, which achieved the best overall accuracy (mIoU=0.780), was then used to segment tassels of different varieties at different tasseling stages. The results showed that U-Net performed well at all tasseling stages (mIoU=0.703 to 0.798), with the highest accuracy at the fully emerged tassel stage (mIoU=0.798). Segmentation accuracy differed noticeably among varieties, but the average accuracy across all varieties remained high (mIoU=0.749), with Zhengdan 958 (ZD958) segmented most accurately (mIoU=0.814). Overall, the U-Net model shows good universality and robustness for maize tassel segmentation and provides an effective method for tassel monitoring in future maize phenotyping experiments.
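The abstract reports all accuracies as mean Intersection-over-Union (mIoU). The paper does not provide code; the following is a minimal NumPy sketch of how mIoU is conventionally computed for a two-class (tassel vs. background) label map. The function name, the NumPy implementation, and the binary-class default are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection-over-Union across classes.

    `pred` and `target` are integer label maps of identical shape
    (e.g. 0 = background, 1 = tassel). Classes absent from both
    maps are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:  # class appears in neither map
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 2x2 prediction vs. ground truth
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
# class 1: inter=1, union=2 -> 0.5; class 0: inter=2, union=3 -> 2/3
print(mean_iou(pred, target))  # 0.5833...
```

A per-variety or per-stage mIoU, as reported in the paper, would simply be this quantity averaged over the images in each subset.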
Received: 12 March 2021
Published: 02 September 2021
Corresponding authors: Haitao JING, Xiuliang JIN
E-mail: 211804020012@home.hpu.edu.cn; jht@hpu.edu.cn; jinxiuliang@caas.cn
Keywords: RGB image, deep learning, feature extraction layer, maize tassel, segmentation