浙江大学学报(工学版)  2024, Vol. 58 Issue (9): 1801-1810    DOI: 10.3785/j.issn.1008-973X.2024.09.005
计算机与控制工程     
基于图像识别的弓网接触点检测方法
李凡1,2(),杨杰1,2,*(),冯志成1,2,陈智超1,2,付云骁3
1. 江西理工大学 电气工程与自动化学院,江西 赣州 341000
2. 江西省磁悬浮技术重点实验室,江西 赣州 341000
3. 中车工业研究院有限公司,北京 100070
Pantograph-catenary contact point detection method based on image recognition
Fan LI1,2(),Jie YANG1,2,*(),Zhicheng FENG1,2,Zhichao CHEN1,2,Yunxiao FU3
1. School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
2. Jiangxi Provincial Key Laboratory of Maglev Technology, Ganzhou 341000, China
3. CRRC Industrial Institute Co. Ltd, Beijing 100070, China
摘要:

针对现有受电弓-接触网(弓网)接触点检测方法无法兼顾实时性与准确性的问题,提出两阶段快速检测方法. 在第1阶段提出基于改进BiSeNet v2的弓网区域分割算法. 采用浅层特征共享机制将细节分支提取的浅层特征送入语义分支中获取高层语义信息,减少冗余参数;将压缩激励注意力模块嵌入网络中,增强重要通道信息;加入金字塔池化模块提取多尺度特征,提高模型精度. 在第2阶段,基于分割结果,使用直线拟合和位置校正实现接触点的检测. 实验结果表明,所提分割算法精度为87.50%,浮点运算数为6.73 G,在CPU (Intel Core I9-12900)和JETSON TX2上推理速度分别为49.80、12.60帧/s. 所提检测方法在弓网仿真平台和双源智能重卡的弓网系统中进行实验,实验结果表明,该方法能够有效检测弓网接触点.

关键词: 语义分割    BiSeNet v2    直线拟合    受电弓-接触网系统    深度学习
Abstract:

A two-stage fast detection method was proposed to address the problem that existing pantograph-catenary contact point detection methods cannot achieve real-time performance and accuracy at the same time. In the first stage, a pantograph-catenary region segmentation algorithm based on the improved BiSeNet v2 was proposed. A shallow feature sharing mechanism was used to feed the shallow features extracted by the detail branch into the semantic branch to obtain high-level semantic information and reduce redundant parameters. The Squeeze-and-Excitation attention module was embedded into the network to enhance important channel information, and the Pyramid Pooling Module was added to extract multi-scale features and improve the accuracy of the model. In the second stage, contact point detection was achieved by linear fitting and position correction based on the segmentation results. The experimental results showed that the proposed segmentation algorithm achieved an accuracy of 87.50% with 6.73 G floating point operations, and inference speeds of 49.80 and 12.60 frames per second on a CPU (Intel Core I9-12900) and a JETSON TX2, respectively. The proposed detection method was tested on a pantograph-catenary simulation platform and on the pantograph-catenary system of a dual-source intelligent heavy truck, and the results showed that the method can effectively detect pantograph-catenary contact points.

Key words: semantic segmentation    BiSeNet v2    linear fitting    pantograph-catenary system    deep learning
收稿日期: 2023-08-09 出版日期: 2024-08-30
CLC:  U 229  
基金资助: 国家自然科学基金资助项目(62063009).
通讯作者: 杨杰     E-mail: 1978634998@qq.com;yangjie@jxust.edu.cn
作者简介: 李凡(2001—),男,硕士生,从事电气化公路安全监测研究. orcid.org/0009-0000-3358-4522. E-mail:1978634998@qq.com

引用本文:

李凡,杨杰,冯志成,陈智超,付云骁. 基于图像识别的弓网接触点检测方法[J]. 浙江大学学报(工学版), 2024, 58(9): 1801-1810.

Fan LI,Jie YANG,Zhicheng FENG,Zhichao CHEN,Yunxiao FU. Pantograph-catenary contact point detection method based on image recognition. Journal of Zhejiang University (Engineering Science), 2024, 58(9): 1801-1810.

链接本文:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2024.09.005        https://www.zjujournals.com/eng/CN/Y2024/V58/I9/1801

图 1  改进BiSeNet v2网络整体结构图
| 输入尺寸 | 操作 | N | S | Np |
| 512×512×3 | Stem | 16 | — | 1 |
| 128×128×16 | GE_SE | 32 | 2 | 1 |
| 64×64×32 | GE_SE | 32 | 1 | 1 |
| 64×64×32 | GE_SE | 64 | 2 | 1 |
| 32×32×64 | GE_SE | 64 | 1 | 1 |
| 32×32×64 | GE_SE | 128 | 2 | 1 |
| 16×16×128 | GE_SE | 128 | 1 | 3 |
表 1  改进BiSeNet v2网络结构表
图 2  快速下采样结构图
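下面给出快速下采样(Stem)结构的一个最小 PyTorch 示意:按原始 BiSeNet v2 的 Stem 形式,用卷积支路与池化支路并行完成 4 倍下采样,输出通道数按表 1 取 16。具体卷积核与通道配置以原文图 2 为准,此处仅为假定结构的草图。

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_c, out_c, k=3, s=1):
    # 卷积 + BN + ReLU 基本单元
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, k, stride=s, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_c),
        nn.ReLU(inplace=True),
    )

class StemBlock(nn.Module):
    """快速下采样示意:两次 2 倍下采样,整体 4 倍(512×512×3 → 128×128×16)."""
    def __init__(self, in_c=3, out_c=16):
        super().__init__()
        self.conv_in = conv_bn_relu(in_c, out_c, k=3, s=2)       # 先下采样到 1/2
        self.branch_conv = nn.Sequential(                        # 卷积支路,再下采样到 1/4
            conv_bn_relu(out_c, out_c // 2, k=1, s=1),
            conv_bn_relu(out_c // 2, out_c, k=3, s=2),
        )
        self.branch_pool = nn.MaxPool2d(3, stride=2, padding=1)  # 池化支路
        self.fuse = conv_bn_relu(out_c * 2, out_c, k=3, s=1)     # 两支路拼接后融合

    def forward(self, x):
        x = self.conv_in(x)
        x = torch.cat([self.branch_conv(x), self.branch_pool(x)], dim=1)
        return self.fuse(x)

# 用法示意:StemBlock()(torch.randn(1, 3, 512, 512)).shape -> (1, 16, 128, 128)
```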
图 3  挤压与激励注意力机制
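压缩激励(SE)注意力为通用模块(文献[24]),其计算过程可用如下 PyTorch 草图说明;其中缩减比例 reduction=16 只是常用默认值,并非原文设定。

```python
import torch.nn as nn

class SELayer(nn.Module):
    """压缩激励注意力:全局平均池化得到通道描述,两层全连接生成通道权重."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                      # Squeeze:全局空间平均
        self.fc = nn.Sequential(                                 # Excitation:先降维再升维
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)    # 每个通道一个权重
        return x * w                                             # 按通道重标定特征
```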
图 4  GE_SE结构图
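GE_SE 单元可理解为在 BiSeNet v2 的 Gather-and-Expansion(GE)层中嵌入上文的 SELayer。下面是一个简化示意:扩张比 expand=6 为 BiSeNet v2 的默认值,SE 的嵌入位置与捷径分支的写法均为假设,实际结构以原文图 4 为准。

```python
import torch.nn as nn
# SELayer 见上文 SE 注意力示意

class GE_SE(nn.Module):
    """GE 层 + SE 注意力的简化示意(stride 取 1 或 2,对应表 1 中的 S)."""
    def __init__(self, in_c, out_c, stride=1, expand=6):
        super().__init__()
        mid_c = in_c * expand
        self.main = nn.Sequential(
            nn.Conv2d(in_c, in_c, 3, padding=1, bias=False),           # 3×3 卷积聚合特征
            nn.BatchNorm2d(in_c), nn.ReLU(inplace=True),
            nn.Conv2d(in_c, mid_c, 3, stride=stride, padding=1,
                      groups=in_c, bias=False),                        # 深度卷积并扩张通道
            nn.BatchNorm2d(mid_c),
            nn.Conv2d(mid_c, out_c, 1, bias=False),                    # 1×1 卷积投影回低维
            nn.BatchNorm2d(out_c),
        )
        self.se = SELayer(out_c)                                       # 嵌入 SE 通道注意力
        # 尺寸或通道不匹配时,捷径分支用带步长的 1×1 卷积对齐(简化写法,非原文形式)
        self.shortcut = (nn.Identity() if stride == 1 and in_c == out_c
                         else nn.Sequential(
                             nn.Conv2d(in_c, out_c, 1, stride=stride, bias=False),
                             nn.BatchNorm2d(out_c)))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.se(self.main(x)) + self.shortcut(x))
```

按表 1 的配置堆叠该单元(如 GE_SE(16, 32, stride=2)、GE_SE(32, 32, stride=1) 等)即可得到语义分支的主干。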
图 5  引导聚合结构图
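引导聚合层用于融合细节分支(1/8 分辨率)与语义分支(1/32 分辨率)的输出。下面按原始 BiSeNet v2 的双向引导聚合形式给出示意,各支路的具体算子以原文图 5 为准:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAggregation(nn.Module):
    """引导聚合示意:细节分支与语义分支特征相互加权后求和融合."""
    def __init__(self, c):
        super().__init__()
        self.d_keep = nn.Sequential(                       # 细节分支:保持 1/8 分辨率
            nn.Conv2d(c, c, 3, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c),
            nn.Conv2d(c, c, 1, bias=False),
        )
        self.d_down = nn.Sequential(                       # 细节分支:下采样到 1/32
            nn.Conv2d(c, c, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(c),
            nn.AvgPool2d(3, stride=2, padding=1),
        )
        self.s_keep = nn.Sequential(                       # 语义分支:保持 1/32 分辨率
            nn.Conv2d(c, c, 3, padding=1, groups=c, bias=False),
            nn.BatchNorm2d(c),
            nn.Conv2d(c, c, 1, bias=False),
        )
        self.s_up = nn.Sequential(                         # 语义分支:卷积后上采样到 1/8
            nn.Conv2d(c, c, 3, padding=1, bias=False),
            nn.BatchNorm2d(c),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, bias=False),
            nn.BatchNorm2d(c), nn.ReLU(inplace=True),
        )

    def forward(self, d, s):                               # d: 1/8 特征, s: 1/32 特征
        s_a = torch.sigmoid(F.interpolate(self.s_up(s), size=d.shape[2:],
                                          mode='bilinear', align_corners=False))
        s_b = torch.sigmoid(self.s_keep(s))
        left = self.d_keep(d) * s_a                        # 语义引导细节(1/8)
        right = self.d_down(d) * s_b                       # 细节引导语义(1/32)
        right = F.interpolate(right, size=d.shape[2:],
                              mode='bilinear', align_corners=False)
        return self.fuse(left + right)
```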
图 6  金字塔池化模块
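金字塔池化模块(PPM)来自 PSPNet(文献[20]):对输入特征做多个尺度的自适应平均池化,压缩通道后上采样拼接,再卷积融合。下面的示意中池化尺度取常用的 (1, 2, 3, 6),并非原文设定:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPM(nn.Module):
    """金字塔池化:多尺度池化 + 1×1 卷积,上采样后与原特征拼接融合."""
    def __init__(self, in_c, out_c, bins=(1, 2, 3, 6)):
        super().__init__()
        branch_c = in_c // len(bins)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),                        # 池化到 b×b
                nn.Conv2d(in_c, branch_c, 1, bias=False),
                nn.BatchNorm2d(branch_c), nn.ReLU(inplace=True),
            ) for b in bins
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(in_c + branch_c * len(bins), out_c, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_c), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        size = x.shape[2:]
        feats = [x] + [F.interpolate(b(x), size=size, mode='bilinear',
                                     align_corners=False) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))
```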
图 7  分割头结构图
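分割头的常见写法是 3×3 卷积整合特征、1×1 卷积输出类别得分,再上采样回输入分辨率;下面给出一个示意,层数、通道数与上采样倍数均为假设,具体以原文图 7 为准:

```python
import torch.nn as nn
import torch.nn.functional as F

class SegHead(nn.Module):
    """分割头示意:特征 → 类别得分图 → 双线性上采样到原图尺寸."""
    def __init__(self, in_c, mid_c, num_classes, scale=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_c, mid_c, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid_c), nn.ReLU(inplace=True),
        )
        self.cls = nn.Conv2d(mid_c, num_classes, 1)
        self.scale = scale                       # 特征相对输入图像的下采样倍数

    def forward(self, x):
        x = self.cls(self.conv(x))
        return F.interpolate(x, scale_factor=self.scale,
                             mode='bilinear', align_corners=False)
```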
图 8  接触点检测过程
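第 2 阶段在分割结果上做直线拟合与位置校正。下面是一个示意性实现:假定分割图中受电弓与接触线的类别编号分别为 1 和 2,对接触线像素按 x=f(y) 做最小二乘直线拟合(假定其在图像中近似竖直),对受电弓滑板上缘按 y=f(x) 拟合,取两条直线的交点作为接触点,并把交点横坐标限制在滑板范围内作为简单的位置校正。类别编号与校正方式均为假设,实际流程以原文图 8 为准。

```python
import numpy as np

PANTOGRAPH_ID, CATENARY_ID = 1, 2              # 假设的类别编号

def detect_contact_point(seg: np.ndarray):
    """seg 为 H×W 类别标签图;返回接触点 (x, y),拟合失败返回 None."""
    pant = np.argwhere(seg == PANTOGRAPH_ID)    # (row, col) 像素坐标
    cate = np.argwhere(seg == CATENARY_ID)
    if len(pant) < 2 or len(cate) < 2:
        return None

    # 1) 接触线近似竖直,按 x = k1*y + b1 做最小二乘直线拟合
    k1, b1 = np.polyfit(cate[:, 0], cate[:, 1], deg=1)

    # 2) 受电弓滑板上缘:取每列最小行号(最高点),拟合 y = k2*x + b2
    cols = np.unique(pant[:, 1])
    tops = np.array([pant[pant[:, 1] == c, 0].min() for c in cols])
    k2, b2 = np.polyfit(cols, tops, deg=1)

    # 3) 求两条拟合直线的交点
    denom = 1.0 - k1 * k2
    if np.isclose(denom, 0.0):                  # 两直线近似平行,无交点
        return None
    y = (k2 * b1 + b2) / denom
    x = k1 * y + b1

    # 4) 位置校正(示意):限制在滑板左右边界内,并落回滑板上缘直线
    x = float(np.clip(x, cols.min(), cols.max()))
    y = float(k2 * x + b2)
    return x, y
```

实际使用前可先对分割结果做连通域筛选或形态学滤波,减小误分割像素对直线拟合的干扰。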
| 分支特征共享(细节分支) | 分支特征共享(语义分支) | SE | PPM | PP | RP | PC | RC | mIOU/% | P/M | FLOPs/G | FPS/帧 (I9-12900 CPU) | FPS/帧 (JETSON TX2) |
| | | | | 0.9501 | 0.9455 | 0.8295 | 0.8616 | 87.52 | 3.34 | 12.35 | 33.28 | 10.50 |
| | | | | 0.9378 | 0.9483 | 0.6631 | 0.6212 | 81.65 | 2.86 | 6.60 | 49.83 | 13.30 |
| | | | | 0.9496 | 0.9427 | 0.8521 | 0.7863 | 86.06 | 2.86 | 6.62 | 49.68 | 12.90 |
| | | | | 0.9485 | 0.9390 | 0.7940 | 0.7315 | 83.23 | 3.42 | 6.60 | 48.53 | 12.50 |
| | | | | 0.9474 | 0.9425 | 0.7951 | 0.8156 | 83.91 | 2.98 | 6.60 | 50.65 | 13.60 |
| | | | | 0.9556 | 0.9402 | 0.8364 | 0.8538 | 87.50 | 4.57 | 6.73 | 49.80 | 12.60 |
表 2  消融实验结果对比
图 9  PMSE-BiSeNet和主流模型的弓网分割效果对比
| 模型 | 基础网络结构 | mIOU/% | P/M | FLOPs/G | FPS/帧 (I9-12900 CPU) | FPS/帧 (JETSON TX2) |
| DeepLab v3[19] | ResNet50 | 67.47 | 41.81 | 171.09 | 2.68 | |
| PSPNet[20] | ResNet50 | 80.31 | 46.71 | 184.74 | 2.69 | |
| DenseAspp[21] | Densenet121 | 80.93 | 9.17 | 43.09 | 8.39 | |
| EncNet[29] | ResNet50 | 80.40 | 33.60 | 147.08 | 2.30 | |
| Fcn8s[14] | Vgg16 | 80.16 | 30.02 | 320.87 | 5.85 | |
| BiSeNet v1[22] | ResNet18 | 85.45 | 12.80 | 13.03 | 23.05 | 6.68 |
| BiSeNet v2[23] | | 87.52 | 3.34 | 12.35 | 33.28 | 10.50 |
| PMSE-BiSeNet | | 87.50 | 4.57 | 6.73 | 49.80 | 12.60 |
表 3  PMSE-BiSeNet和主流模型在弓网数据集中的结果对比
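表 3 中的参数量、浮点运算数与帧率可以用类似下面的方式测得;其中 thop 只是统计运算量的一种第三方工具(统计的是乘加次数,常被近似记作 FLOPs),与原文实际使用的统计与测速方式无关,仅作示意。

```python
import time
import torch
from thop import profile   # 第三方库,仅作为一种运算量统计方式示意

def benchmark(model, size=(1, 3, 512, 512), warmup=10, runs=50):
    """统计运算量、参数量与 CPU 推理帧率(示意,与原文测试环境无关)."""
    model.eval()
    x = torch.randn(size)
    macs, params = profile(model, inputs=(x,), verbose=False)   # 乘加次数与参数量
    with torch.no_grad():
        for _ in range(warmup):                                 # 预热,避免首次开销
            model(x)
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
        fps = runs / (time.perf_counter() - t0)
    return macs / 1e9, params / 1e6, fps                        # G 级运算量、M 级参数量、帧/s
```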
| 模型 | 基础网络结构 | 第2阶段算法 | FPS/帧 (I9-12900 CPU) | FPS/帧 (JETSON TX2) |
| BiSeNet v1[22] | ResNet18 | | 20.47 | 4.50 |
| BiSeNet v2[23] | | | 29.75 | 7.20 |
| PMSE-BiSeNet | | | 48.36 | 10.25 |
表 4  部分检测算法实时测试结果
图 10  弓网实验平台上的检测效果
图 11  在国内首台新型双源智能重卡上的应用流程
图 12  在国内首台新型双源智能重卡的检测效果
1 邵丽青, 易钶 我国电动重卡市场发展现状[J]. 专用汽车, 2022, (10): 1- 3
SHAO Liqing, YI Ke Development status of electric heavy truck market in China[J]. Special Purpose Vehicle, 2022, (10): 1- 3
2 杨卢强, 韩通新, 王志良 高速动车组受电弓安全检测的研究[J]. 铁道运输与经济, 2017, 39 (8): 66- 71
YANG Luqiang, HAN Tongxin, WANG Zhiliang Study on safety detection of high-speed emu pantograph[J]. Railway Transport and Economy, 2017, 39 (8): 66- 71
3 零碳排放!我国首款双源智能重卡成功下线[EB/OL]. (2023-03-14) [2023-08-01]. https://news.bjd.com.cn/2023/03/14/10364351.shtml.
4 周宁, 杨文杰, 刘久锐, 等 基于受电弓状态感知的弓网安全监测系统研究与探讨[J]. 中国科学: 技术科学, 2021, 51 (1): 23- 34
ZHOU Ning, YANG Wenjie, LIU Jiurui, et al Investigation of a pantograph-catenary monitoring system using condition-based pantograph recognition[J]. Scientia Sinica: Technologica, 2021, 51 (1): 23- 34
doi: 10.1360/SST-2019-0282
5 KARAKOSE E, GENCOGLU M T, KARAKOSE M, et al A new experimental approach using image processing-based tracking for an efficient fault diagnosis in pantograph-catenary systems[J]. IEEE Transactions on Industrial Informatics, 2017, 13 (2): 635- 643
doi: 10.1109/TII.2016.2628042
6 AYDIN I, KARAKOSE M, AKIN E A new contactless fault diagnosis approach for pantograph-catenary system using pattern recognition and image processing methods[J]. Advances in Electrical and Computer Engineering, 2014, 14 (3): 79- 88
doi: 10.4316/AECE.2014.03010
7 范虎伟, 卞春华, 朱挺, 等 非接触式接触网定位器坡度自动检测技术[J]. 计算机应用, 2010, 30 (Suppl.2): 102- 103
FAN Huwei, BIAN Chunhua, ZHU Ting, et al Automatic detection of positioning line in contactless overhead contact system[J]. Journal of Computer Applications, 2010, 30 (Suppl.2): 102- 103
8 张桂南, 刘志刚. 基于角点匹配与谱聚类的接触网绝缘子破损/夹杂异物故障检测[J]. 仪器仪表学报, 2014, 35(6): 1370−1377.
ZHANG Guinan, LIU Zhigang. Fault detection of catenary insulator damage/foreign material based on corner matching and spectral clustering [J]. Chinese Journal of Scientific Instrument , 2014, 35(6): 1370−1376.
9 ZHANG D, GAO S, YU L, et al A robust pantograph-catenary interaction condition monitoring method based on deep convolutional network[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 69 (5): 1920- 1929
10 CHEN R, LIN Y, JIN T High-speed railway pantograph-catenary anomaly detection method based on depth vision neural network[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1- 10
11 YANG X, ZHOU N, LIU Y, et al Online pantograph-catenary contact point detection in complicated background based on multiple strategies[J]. IEEE Access, 2020, 8: 220394- 220407
doi: 10.1109/ACCESS.2020.3042535
12 张乔木, 钟倩文, 孙明, 等 复杂环境下弓网接触位置动态监测方法研究[J]. 电子科技, 2022, 35 (8): 66- 72
ZHANG Qiaomu, ZHONG Qianwen, SUN Ming, et al Research on dynamic monitoring method of pantograph net contact position in complex environment[J]. Electronic Science and Technology, 2022, 35 (8): 66- 72
13 王恩鸿, 柴晓冬, 钟倩文, 等 基于视频图像的弓网接触位置动态监测方法[J]. 城市轨道交通研究, 2021, 24 (7): 198- 203
WANG Enhong, CHAI Xiaodong, ZHONG Qianwen, et al Dynamic monitoring method of pantograph-catenary contact position based on video image[J]. Urban Mass Transit, 2021, 24 (7): 198- 203
14 LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . Boston: IEEE, 2015: 3431−3440.
15 RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation [C]// Medical Image Computing and Computer-Assisted Intervention MICCAI 2015: 18th International Conference . Munich: Springer, 2015: 234−241.
16 HOWARD A, SANDLER M, CHU G, et al. Searching for mobilenetv3 [C]// International Conference on Computer Vision . Seoul: IEEE, 2019: 1314−1324.
17 CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs [EB/OL]. (2014-12-22) [2023-08-01]. https://arxiv.org/abs/1412.7062.
18 CHEN L C, PAPANDREOU G, KOKKINOS I, et al DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40 (4): 834- 848
19 CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation [C]// Proceedings of the European Conference on Computer Vision . Munich: Springer, 2018: 801−818.
20 ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . Honolulu: IEEE, 2017: 2881−2890.
21 YANG M, YU K, ZHANG C, et al. DenseASPP for semantic segmentation in street scenes [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . Salt Lake City: IEEE, 2018: 3684−3692.
22 YU C, WANG J, PENG C, et al. BiSeNet: bilateral segmentation network for real-time semantic segmentation [C]// Proceedings of the European Conference on Computer Vision . Munich: Springer, 2018: 325−341.
23 YU C, GAO C, WANG J, et al BiSeNet v2: bilateral network with guided aggregation for real-time semantic segmentation[J]. International Journal of Computer Vision, 2021, 129: 3051- 3068
doi: 10.1007/s11263-021-01515-2
24 HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition . Salt Lake City: IEEE, 2018: 7132−7141.
25 SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. (2015-04-10) [2023-08-01]. http://arxiv.org/abs/1409.1556.
26 任凤雷, 杨璐, 周海波, 等. 基于改进BiSeNet的实时图像语义分割[J]. 光学精密工程, 2023, 31(8): 1217−1227.
REN Fenglei, YANG Lu, ZHOU Haibo, et al. Real-time semantic segmentation based on improved BiSeNet[J]. Optics and Precision Engineering, 2023, 31(8): 1217−1227.
27 XU Q, MA Y, WU J, et al. Faster BiSeNet: a faster bilateral segmentation network for real-time semantic segmentation [C]// 2021 International Joint Conference on Neural Networks . Shenzhen: IEEE, 2021: 1−8.
28 陈智超, 焦海宁, 杨杰, 等 基于改进MobileNet v2的垃圾图像分类算法[J]. 浙江大学学报: 工学版, 2021, 55 (8): 1490- 1499
CHEN Zhichao, JIAO Haining, YANG Jie, et al Garbage image classification algorithm based on improved MobileNet v2[J]. Journal of Zhejiang University: Engineering Science, 2021, 55 (8): 1490- 1499