Journal of ZheJiang University (Engineering Science)  2024, Vol. 58 Issue (9): 1801-1810    DOI: 10.3785/j.issn.1008-973X.2024.09.005
    
Pantograph-catenary contact point detection method based on image recognition
Fan LI1,2(),Jie YANG1,2,*(),Zhicheng FENG1,2,Zhichao CHEN1,2,Yunxiao FU3
1. School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
2. Jiangxi Provincial Key Laboratory of Maglev Technology, Ganzhou 341000, China
3. CRRC Industrial Institute Co. Ltd, Beijing 100070, China

Abstract  

A two-stage fast detection method was proposed to address the poor real-time performance and low accuracy of existing pantograph-catenary contact point detection methods. In the first stage, a pantograph-catenary region segmentation algorithm was proposed based on an improved BiSeNet v2. A shallow feature sharing mechanism was used to feed the shallow features extracted by the detail branch into the semantic branch, obtaining high-level semantic information while reducing redundant parameters. The squeeze-and-excitation attention mechanism was embedded into the network to enhance important channel information, and a pyramid pooling module was added to capture multi-scale features and improve the accuracy of the model. In the second stage, contact point detection was achieved by linear fitting and position correction based on the segmentation results. Experimental results showed that the proposed segmentation algorithm achieved an accuracy of 87.50% with 6.73 G floating point operations, and inference speeds of 49.80 and 12.60 frames per second on a CPU (Intel Core i9-12900) and a Jetson TX2, respectively. The proposed detection method was tested on a pantograph-catenary simulation platform and on the pantograph-catenary system of a dual-source intelligent heavy truck; the results showed that the method can effectively detect pantograph-catenary contact points.
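To make the second stage concrete, the following Python sketch illustrates one way the linear fitting and position correction could operate on the segmentation output; the label encoding (0 = background, 1 = pantograph, 2 = catenary), the least-squares line fit, and the "topmost pantograph row" correction are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch of the second-stage contact point estimation.
# Assumes the segmentation network outputs a label map with
# 0 = background, 1 = pantograph, 2 = catenary; all names are hypothetical.
import numpy as np

def fit_line(mask: np.ndarray):
    """Least-squares line x = a*y + b through the foreground pixels of a mask."""
    ys, xs = np.nonzero(mask)
    a, b = np.polyfit(ys, xs, 1)           # fit x as a function of y (near-vertical wire)
    return a, b

def detect_contact_point(label_map: np.ndarray):
    pantograph = (label_map == 1)
    catenary = (label_map == 2)
    if not pantograph.any() or not catenary.any():
        return None                         # one of the regions was not segmented

    # Fit the catenary wire as a straight line x = a*y + b.
    a, b = fit_line(catenary)

    # Position correction (assumed): take the contact point on the upper edge
    # of the pantograph strip, i.e. the topmost row containing pantograph pixels.
    rows = np.nonzero(pantograph.any(axis=1))[0]
    y_top = int(rows.min())                 # topmost pantograph row
    x_contact = int(round(a * y_top + b))   # intersect fitted wire with that row
    return x_contact, y_top
```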



Key words: semantic segmentation; BiSeNet v2; linear fitting; pantograph-catenary system; deep learning
Received: 09 August 2023      Published: 30 August 2024
CLC: U 229; TP 391
Fund: National Natural Science Foundation of China (62063009).
Corresponding Authors: Jie YANG     E-mail: 1978634998@qq.com;yangjie@jxust.edu.cn
Cite this article:

Fan LI,Jie YANG,Zhicheng FENG,Zhichao CHEN,Yunxiao FU. Pantograph-catenary contact point detection method based on image recognition. Journal of ZheJiang University (Engineering Science), 2024, 58(9): 1801-1810.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2024.09.005     OR     https://www.zjujournals.com/eng/Y2024/V58/I9/1801


Fig.1 Overall network structure of improved BiSeNet v2
Input size | Operation | N | S | Np
512×512×3 | Stem | 16 | – | 1
128×128×16 | GE_SE | 32 | 2 | 1
64×64×32 | GE_SE | 32 | 1 | 1
64×64×32 | GE_SE | 64 | 2 | 1
32×32×64 | GE_SE | 64 | 1 | 1
32×32×64 | GE_SE | 128 | 2 | 1
16×16×128 | GE_SE | 128 | 1 | 3
Tab.1 Network structure of improved BiSeNet v2
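As a concrete reading of Tab.1, the PyTorch sketch below stacks squeeze-and-excitation-augmented gather-and-expansion (GE_SE) blocks with the channel widths N, strides S, and repeat counts Np listed in the table; the block internals (expansion ratio, depthwise convolution, residual shortcut) follow the spirit of BiSeNet v2 and SENet and are assumptions rather than the authors' implementation.

```python
# Sketch of the GE_SE stack in Tab.1 (assumed internals; not the authors' code).
import torch.nn as nn

class SEModule(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel weights
        return x * w

class GE_SE(nn.Module):
    """Gather-and-expansion block followed by an SE module (assumed layout)."""
    def __init__(self, in_ch: int, out_ch: int, stride: int, expand: int = 6):
        super().__init__()
        mid = in_ch * expand
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, mid, 3, stride, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, 1, 1, groups=mid, bias=False),   # depthwise
            nn.BatchNorm2d(mid),
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))
        self.se = SEModule(out_ch)
        self.skip = None
        if stride != 1 or in_ch != out_ch:
            self.skip = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.se(self.conv(x))
        identity = self.skip(x) if self.skip is not None else x
        return self.act(out + identity)

def semantic_branch() -> nn.Sequential:
    """GE_SE stages of Tab.1 as (out_channels N, stride S, repeats Np)."""
    cfg = [(32, 2, 1), (32, 1, 1), (64, 2, 1), (64, 1, 1), (128, 2, 1), (128, 1, 3)]
    layers, in_ch = [], 16                           # 16 channels come from the Stem
    for n, s, np_ in cfg:
        for i in range(np_):
            layers.append(GE_SE(in_ch, n, s if i == 0 else 1))
            in_ch = n
    return nn.Sequential(*layers)
```

With this configuration, a 512×512 input reduced to 128×128×16 by the Stem ends at a 16×16×128 semantic feature map, matching the last row of Tab.1.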
Fig.2 Fast downsampling structure
Fig.3 Squeeze and excitation attention mechanism
Fig.4 GE_SE structure
Fig.5 Bilateral guided aggregation structure
Fig.6 Pyramid pooling module
Fig.7 Seg head structure
Fig.8 Contact point detection process
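Fig.6 depicts the pyramid pooling module used to capture multi-scale features; a compact PSPNet-style PPM sketch is given below, with the bin sizes (1, 2, 3, 6) assumed rather than taken from the paper.

```python
# PSPNet-style pyramid pooling module (bin sizes are an assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        mid = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, mid, 1, bias=False),
                          nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
            for b in bins)
        self.project = nn.Sequential(
            nn.Conv2d(in_ch + mid * len(bins), in_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(s(x), size=(h, w), mode='bilinear',
                                     align_corners=False) for s in self.stages]
        return self.project(torch.cat(feats, dim=1))   # fuse multi-scale context
```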
Configuration (shallow-feature sharing: detail branch / semantic branch; SE; PPM) | PP | RP | PC | RC | mIOU/% | P/M | FLOPs/G | FPS (i9-12900 CPU) | FPS (Jetson TX2)
… | 0.9501 | 0.9455 | 0.8295 | 0.8616 | 87.52 | 3.34 | 12.35 | 33.28 | 10.50
… | 0.9378 | 0.9483 | 0.6631 | 0.6212 | 81.65 | 2.86 | 6.60 | 49.83 | 13.30
… | 0.9496 | 0.9427 | 0.8521 | 0.7863 | 86.06 | 2.86 | 6.62 | 49.68 | 12.90
… | 0.9485 | 0.9390 | 0.7940 | 0.7315 | 83.23 | 3.42 | 6.60 | 48.53 | 12.50
… | 0.9474 | 0.9425 | 0.7951 | 0.8156 | 83.91 | 2.98 | 6.60 | 50.65 | 13.60
… | 0.9556 | 0.9402 | 0.8364 | 0.8538 | 87.50 | 4.57 | 6.73 | 49.80 | 12.60
Tab.2 Comparison of ablation study results
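The mIOU values reported in Tab.2 (and in Tab.3 below) are the mean intersection-over-union over the segmented classes; a minimal sketch of the computation, assuming three classes (background, pantograph, catenary), is:

```python
# Minimal mIOU computation over predicted / ground-truth label maps
# (assumed 3 classes: background, pantograph, catenary).
import numpy as np

def mean_iou(preds, gts, num_classes: int = 3) -> float:
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for pred, gt in zip(preds, gts):
        for c in range(num_classes):
            p, g = pred == c, gt == c
            inter[c] += np.logical_and(p, g).sum()
            union[c] += np.logical_or(p, g).sum()
    iou = inter / np.maximum(union, 1)      # avoid division by zero
    return float(iou.mean() * 100)          # percentage, as reported in the tables
```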
Fig.9 Comparison of pantograph-catenary segmentation results between PMSE-BiSeNet and mainstream models
Model | Backbone | mIOU/% | P/M | FLOPs/G | FPS (i9-12900 CPU) | FPS (Jetson TX2)
DeepLab v3[19] | ResNet50 | 67.47 | 41.81 | 171.09 | 2.68 | –
PSPNet[20] | ResNet50 | 80.31 | 46.71 | 184.74 | 2.69 | –
DenseAspp[21] | DenseNet121 | 80.93 | 9.17 | 43.09 | 8.39 | –
EncNet[29] | ResNet50 | 80.40 | 33.60 | 147.08 | 2.30 | –
Fcn8s[14] | VGG16 | 80.16 | 30.02 | 320.87 | 5.85 | –
BiSeNet v1[22] | ResNet18 | 85.45 | 12.80 | 13.03 | 23.05 | 6.68
BiSeNet v2[23] | – | 87.52 | 3.34 | 12.35 | 33.28 | 10.50
PMSE-BiSeNet | – | 87.50 | 4.57 | 6.73 | 49.80 | 12.60
Tab.3 Comparison of results between PMSE-BiSeNet and mainstream models in proposed pantograph-catenary data set
Model | Backbone | Second-stage algorithm | FPS (i9-12900 CPU) | FPS (Jetson TX2)
BiSeNet v1[22] | ResNet18 | … | 20.47 | 4.50
BiSeNet v2[23] | – | … | 29.75 | 7.20
PMSE-BiSeNet | – | … | 48.36 | 10.25
Tab.4 Real-time test results of partial detection algorithms
Fig.10 Detection effect on pantograph-catenary experimental platform
Fig.11 Application process on China's first dual-source intelligent heavy truck
Fig.12 Detection effect on China's first dual-source intelligent heavy truck
[1]   邵丽青, 易钶 我国电动重卡市场发展现状[J]. 专用汽车, 2022, (10): 1- 3
SHAO Liqing, YI Ke Development status of electric heavy truck market in China[J]. Special Purpose Vehicle, 2022, (10): 1- 3
[2]   杨卢强, 韩通新, 王志良 高速动车组受电弓安全检测的研究[J]. 铁道运输与经济, 2017, 39 (8): 66- 71
YANG Luqiang, HAN Tongxin, WANG Zhiliang Study on safety detection of high-speed emu pantograph[J]. Railway Transport and Economy, 2017, 39 (8): 66- 71
[3]   零碳排放!我国首款双源智能重卡成功下线[EB/OL]. (2023-03-14) [2023-08-01]. https://news.bjd.com.cn/2023/03/14/10364351.shtml.
Zero carbon emissions! China's first dual-source intelligent heavy truck rolls off the line [EB/OL]. (2023-03-14) [2023-08-01]. https://news.bjd.com.cn/2023/03/14/10364351.shtml.
[4]   周宁, 杨文杰, 刘久锐, 等 基于受电弓状态感知的弓网安全监测系统研究与探讨[J]. 中国科学: 技术科学, 2021, 51 (1): 23- 34
ZHOU Ning, YANG Wenjie, LIU Jiurui, et al Investigation of a pantograph-catenary monitoring system using condition-based pantograph recognition[J]. Scientia Sinica: Technologica, 2021, 51 (1): 23- 34
doi: 10.1360/SST-2019-0282
[5]   KARAKOSE E, GENCOGLU M T, KARAKOSE M, et al A new experimental approach using image processing-based tracking for an efficient fault diagnosis in pantograph-catenary systems[J]. IEEE Transactions on Industrial Informatics, 2017, 13 (2): 635- 643
doi: 10.1109/TII.2016.2628042
[6]   AYDIN I, KARAKOSE M, AKIN E A new contactless fault diagnosis approach for pantograph-catenary system using pattern recognition and image processing methods[J]. Advances in Electrical and Computer Engineering, 2014, 14 (3): 79- 88
doi: 10.4316/AECE.2014.03010
[7]   范虎伟, 卞春华, 朱挺, 等 非接触式接触网定位器坡度自动检测技术[J]. 计算机应用, 2010, 30 (Suppl.2): 102- 103
FAN Huwei, BIAN Chunhua, ZHU Ting, et al Automatic detection of positioning line in contactless overhead contact system[J]. Journal of Computer Applications, 2010, 30 (Suppl.2): 102- 103
[8]   张桂南, 刘志刚. 基于角点匹配与谱聚类的接触网绝缘子破损/夹杂异物故障检测[J]. 仪器仪表学报, 2014, 35(6): 1370−1377.
ZHANG Guinan, LIU Zhigang. Fault detection of catenary insulator damage/foreign material based on corner matching and spectral clustering [J]. Chinese Journal of Scientific Instrument, 2014, 35(6): 1370−1376.
[9]   ZHANG D, GAO S, YU L, et al A robust pantograph-catenary interaction condition monitoring method based on deep convolutional network[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 69 (5): 1920- 1929
[10]   CHEN R, LIN Y, JIN T High-speed railway pantograph-catenary anomaly detection method based on depth vision neural network[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1- 10
[11]   YANG X, ZHOU N, LIU Y, et al Online pantograph-catenary contact point detection in complicated background based on multiple strategies[J]. IEEE Access, 2020, 8: 220394- 220407
doi: 10.1109/ACCESS.2020.3042535
[12]   张乔木, 钟倩文, 孙明, 等 复杂环境下弓网接触位置动态监测方法研究[J]. 电子科技, 2022, 35 (8): 66- 72
ZHANG Qiaomu, ZHONG Qianwan, SUN Ming, et al Research on dynamic monitoring method of pantograph net contact position in complex environment[J]. Electronic Science and Technology, 2022, 35 (8): 66- 72
[13]   王恩鸿, 柴晓冬, 钟倩文, 等 基于视频图像的弓网接触位置动态监测方法[J]. 城市轨道交通研究, 2021, 24 (7): 198- 203
WANG Enhong, CHAI Xiaodong, ZHONG Qianwen, et al Dynamic monitoring method of pantograph-catenary contact position based on video image[J]. Urban Mass Transit, 2021, 24 (7): 198- 203
[14]   LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 3431−3440.
[15]   RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation [C]// Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference. Munich: Springer, 2015: 234−241.
[16]   HOWARD A, SANDLER M, CHU G, et al. Searching for MobileNetV3 [C]// International Conference on Computer Vision. Seoul: IEEE, 2019: 1314−1324.
[17]   CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs [EB/OL]. (2014-12-22) [2023-08-01]. https://arxiv.org/abs/1412.7062.
[18]   CHEN L C, PAPANDREOU G, KOKKINOS I, et al DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40 (4): 834- 848
[19]   CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation [C]// Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 801−818.
[20]   ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 2881−2890.
[21]   YANG M, YU K, ZHANG C, et al. DenseASPP for semantic segmentation in street scenes [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 3684−3692.
[22]   YU C, WANG J, PENG C, et al. BiSeNet: bilateral segmentation network for real-time semantic segmentation [C]// Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 325−341.
[23]   YU C, GAO C, WANG J, et al BiSeNet v2: bilateral network with guided aggregation for real-time semantic segmentation[J]. International Journal of Computer Vision, 2021, 129: 3051- 3068
doi: 10.1007/s11263-021-01515-2
[24]   HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132−7141.
[25]   SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. (2015-04-10) [2023-08-01]. http://arxiv.org/abs/1409.1556.
[26]   任凤雷, 杨璐, 周海波, 等. 基于改进BiSeNet的实时图像语义分割[J]. 光学精密工程, 2023, 31(8): 1217−1227.
REN Fenglei, YANG Lu, ZHOU Haibo, et al. Real-time semantic segmentation based on improved BiSeNet[J]. Optics and Precision Engineering, 2023, 31(8): 1217−1227.
[27]   XU Q, MA Y, WU J, et al. Faster BiSeNet: a faster bilateral segmentation network for real-time semantic segmentation [C]// 2021 International Joint Conference on Neural Networks. Shenzhen: IEEE, 2021: 1−8.
[28]   陈智超, 焦海宁, 杨杰, 等 基于改进MobileNet v2的垃圾图像分类算法[J]. 浙江大学学报: 工学版, 2021, 55 (8): 1490- 1499
CHEN Zhichao, JIAO Haining, YANG Jie, et al Garbage image classification algorithm based on improved MobileNet v2[J]. Journal of Zhejiang University: Engineering Science, 2021, 55 (8): 1490- 1499