浙江大学学报(工学版)  2025, Vol. 59 Issue (1): 1-17    DOI: 10.3785/j.issn.1008-973X.2025.01.001
计算机与控制工程     
基于深度学习的列车运行环境感知关键算法研究综述
陈智超1,2,3,杨杰1,2,3,4,*,李凡1,2,冯志成1,2
1. 江西理工大学 电气工程与自动化学院,江西 赣州 341000
2. 江西理工大学 磁浮轨道交通装备江西省重点实验室,江西 赣州 341000
3. 上海电机学院 电气学院, 上海 201306
4. 国瑞科创稀土功能材料有限公司,江西 赣州 341000
Review on deep learning-based key algorithm for train running environment perception
Zhichao CHEN1,2,3,Jie YANG1,2,3,4,*,Fan LI1,2,Zhicheng FENG1,2
1. School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
2. Jiangxi Province Key Laboratory of Maglev Rail Transit Equipment, Jiangxi University of Science and Technology, Ganzhou 341000, China
3. School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China
4. Guorui Scientific Innovation Rare Earth Functional Materials Company Limited, Ganzhou 341000, China
摘要:

阐述深度学习在感知任务中的理论和相关基础,梳理深度学习在视觉、点云处理方面的模型架构及性能. 系统总结基于图像识别的轨道区域提取、接触网异物检测和低照度图像增强等关键算法,归纳现有算法的难点. 针对列车对3D感知的需求,进一步梳理面向铁路场景的点云分割、单目3D检测和多模态融合检测算法,对常见于文献的数据集进行模型性能的对比分析. 总结列车运行环境感知现阶段存在的问题和未来的发展趋势.

关键词: 列车运行环境感知,深度学习,图像处理,三维感知,多模态融合
Abstract:

The theory and foundations of deep learning for perception tasks were elaborated, and the model architectures and performance of deep learning methods for vision and point cloud processing were reviewed. Key image recognition-based algorithms for track region extraction, catenary foreign object detection and low-light image enhancement were systematically summarized, and the difficulties of existing algorithms were identified. To meet the demand of trains for 3D perception, point cloud segmentation, monocular 3D detection and multimodal fusion detection algorithms for railway scenes were further reviewed, and model performance was compared on the datasets commonly used in the literature. The existing problems and future development trends of train running environment perception were summarized.

Key words: train running environment perception    deep learning    image processing    3D perception    multimodal fusion
收稿日期: 2024-03-08 出版日期: 2025-01-18
CLC:  TP 242.6  
基金资助: 国家自然科学基金资助项目(62063009);国家重点研发计划资助项目(2023YFB4302100);江西省重大科技研发专项资助项目(20232ACE01011).
通讯作者: 杨杰     E-mail: chenzhichao_ai@163.com;yangjie@jxust.edu.cn
作者简介: 陈智超(1997—),男,博士生,从事列车智能感知与安全防护研究. orcid.org/0000-0002-7150-4914. E-mail:chenzhichao_ai@163.com
引用本文:

陈智超,杨杰,李凡,冯志成. 基于深度学习的列车运行环境感知关键算法研究综述[J]. 浙江大学学报(工学版), 2025, 59(1): 1-17.

Zhichao CHEN,Jie YANG,Fan LI,Zhicheng FENG. Review on deep learning-based key algorithm for train running environment perception. Journal of Zhejiang University (Engineering Science), 2025, 59(1): 1-17.

链接本文:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.01.001        https://www.zjujournals.com/eng/CN/Y2025/V59/I1/1

图 1  文章内容的逻辑关系示意图
| 任务需求 | 关键算法 | 任务分类 | 技术特性 |
| --- | --- | --- | --- |
| 2D感知 | 轨道沿线环境状态感知 | 语义分割 | 全卷积神经网络,CNN与Transformer结合,轻量级结构 |
| | | 多模型联合 | 联合2种任务模型的结果进行异物识别 |
| | | 多任务模型 | 在检测基础上构建额外的语义分割分支 |
| | 电气化铁路接触网异物检测 | 目标检测 | YOLO系列的衍生方法,RCNN系列的衍生方法,手工算子结合CNN分类方法 |
| | | 图像生成 | 基于AI大模型进行批量图像生成 |
| | 列车驾驶低照度图像增强 | 弱光增强 | 伽马变换与对比度调整获取配对训练数据,Retinex理论以及生成对抗网络,光照曲线增强理论 |
| 3D感知 | 铁路场景点云分割 | 点云语义分割 | 体素,原始点云,视图投影 |
| | 铁路三维目标检测 | 单目3D检测 | 顺应2D检测方法的逻辑 |
| | | 激光雷达与多模态3D检测 | 公开可用的多模态数据集,基于BEVFusion进行单模态或多模态检测 |
表 1  列车运行环境感知任务
图 2  基于编码器-解码器的语义分割架构示意图
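为便于理解图2所示的编码器-解码器分割范式,下面给出一个极简的 PyTorch 示意草图:编码器逐级下采样提取语义特征,解码器上采样并借助跳跃连接恢复空间细节,最后由 1×1 卷积输出逐像素类别得分. 其中 TinySegNet、conv_block 等名称与各通道数均为示例性假设,并非文中任何一篇文献的原始实现.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    # 两层 3x3 卷积 + BN + ReLU,构成一个编码/解码单元
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinySegNet(nn.Module):
    """极简编码器-解码器分割网络(假设性草图)"""
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 32)              # 编码器第1级:高分辨率特征
        self.enc2 = conv_block(32, 64)             # 编码器第2级:下采样后的语义特征
        self.dec1 = conv_block(64 + 32, 32)        # 解码器:融合跳跃连接特征
        self.head = nn.Conv2d(32, num_classes, 1)  # 逐像素分类头

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(F.max_pool2d(f1, 2))        # 下采样提取语义
        up = F.interpolate(f2, size=f1.shape[2:], mode="bilinear", align_corners=False)
        out = self.dec1(torch.cat([up, f1], dim=1))  # 跳跃连接恢复细节
        return self.head(out)

# 用法示例:输入 3×512×512 的轨道图像,输出逐像素类别得分 [1, 2, 512, 512]
logits = TinySegNet(num_classes=2)(torch.randn(1, 3, 512, 512))
```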
图 3  二阶段和单阶段目标检测的架构示意图
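图3中单阶段检测器的核心是在共享特征图上用卷积并行预测类别与边框. 下面是一个示意性的单阶段检测头草图,仅说明"每个位置、每个先验框同时输出类别得分与边框偏移量"的思路;TinyOneStageHead 及各超参数均为假设,不代表 YOLO、RetinaNet 等任何具体实现.

```python
import torch
import torch.nn as nn

class TinyOneStageHead(nn.Module):
    """单阶段检测头示意:分类分支与回归分支并行(假设性草图)"""
    def __init__(self, in_ch=256, num_anchors=3, num_classes=4):
        super().__init__()
        self.cls = nn.Conv2d(in_ch, num_anchors * num_classes, 1)  # 每个先验框的类别得分
        self.reg = nn.Conv2d(in_ch, num_anchors * 4, 1)            # 每个先验框的 (dx, dy, dw, dh)

    def forward(self, feat):          # feat: [B, in_ch, H, W],来自骨干网络
        return self.cls(feat), self.reg(feat)

# 用法示例:特征图的每个位置输出 3 个先验框的得分与偏移量
cls_logits, box_deltas = TinyOneStageHead()(torch.randn(1, 256, 32, 32))
```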
图 4  零参考深度曲线估计算法的整体架构[22]
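图4所示的零参考深度曲线估计算法[22]的核心思想是:用轻量网络为每个像素估计曲线参数图 α∈[−1,1],再将增强曲线 LE(x)=x+αx(1−x) 迭代作用于输入图像以提升暗区亮度. 下面的草图只演示曲线的迭代应用过程,省略了参数估计网络;函数名与张量形状均为示例假设.

```python
import torch

def apply_light_enhancement_curves(x: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """x: [B,3,H,W] 取值 [0,1] 的低照度图像;
    alphas: [B,3n,H,W] 的逐像素曲线参数图,取值 [-1,1],n 为迭代次数."""
    n = alphas.shape[1] // x.shape[1]
    for i in range(n):
        a = alphas[:, 3 * i: 3 * (i + 1)]   # 第 i 次迭代的参数图
        x = x + a * x * (1.0 - x)           # 高阶曲线:逐次提升暗区亮度
    return x.clamp(0.0, 1.0)

# 用法示例:8 次迭代;全零参数图时等价于恒等映射
img = torch.rand(1, 3, 256, 256)
out = apply_light_enhancement_curves(img, torch.zeros(1, 24, 256, 256))
```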
图 5  点云数据处理中的典型数据表示方法
图 6  典型点云分割网络的结构示意图
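图6所示的典型点云分割网络(如 PointNet 系列[23])的基本思路是:用共享 MLP 独立提取逐点特征,经对称的最大池化得到全局特征,再与逐点特征拼接后做逐点分类. 下面给出一个示意性草图,TinyPointSeg、通道数与类别数均为假设,仅用于说明该结构,并非原论文实现.

```python
import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    """PointNet 风格点云分割骨架(假设性草图)"""
    def __init__(self, num_classes=4):
        super().__init__()
        # 共享 MLP:用 1x1 一维卷积对每个点独立作用
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, 1), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(inplace=True),
            nn.Conv1d(128, num_classes, 1),          # 逐点类别得分
        )

    def forward(self, pts):                          # pts: [B, 3, N]
        feat = self.point_mlp(pts)                   # 逐点特征 [B, 128, N]
        glob = feat.max(dim=2, keepdim=True).values  # 对称最大池化:全局特征 [B, 128, 1]
        glob = glob.expand(-1, -1, pts.shape[2])     # 广播到每个点
        return self.head(torch.cat([feat, glob], dim=1))  # [B, num_classes, N]

# 用法示例:对 2048 个激光雷达点做逐点分类(如轨道、接触网、背景等)
scores = TinyPointSeg(num_classes=4)(torch.randn(1, 3, 2048))
```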
图 7  体素和柱体的表现形式
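图7中的柱体(pillar)表示可理解为:只在 x-y 平面上划分网格,将落入同一网格的点聚合为一个柱体特征,再散射成可供 2D 卷积骨干处理的 BEV 伪图像. 下面是一个简化草图,柱内特征仅取坐标均值,网格范围与分辨率均为示例假设,并非 PointPillars[28] 的原始实现.

```python
import torch

def pillarize(points, x_range=(0.0, 69.12), y_range=(-39.68, 39.68), res=0.16, feat_dim=3):
    """points: [N, 3] 的 (x, y, z) 坐标;返回 [feat_dim, H, W] 的 BEV 伪图像(柱内均值)."""
    W = int((x_range[1] - x_range[0]) / res)
    H = int((y_range[1] - y_range[0]) / res)
    ix = ((points[:, 0] - x_range[0]) / res).long().clamp(0, W - 1)   # 柱体列索引
    iy = ((points[:, 1] - y_range[0]) / res).long().clamp(0, H - 1)   # 柱体行索引
    flat = iy * W + ix                                                # 展平后的柱体编号
    bev = torch.zeros(H * W, feat_dim)
    cnt = torch.zeros(H * W, 1)
    bev.index_add_(0, flat, points[:, :feat_dim])                     # 柱内求和
    cnt.index_add_(0, flat, torch.ones(points.shape[0], 1))
    bev = bev / cnt.clamp(min=1.0)                                    # 柱内均值
    return bev.t().reshape(feat_dim, H, W)

# 用法示例:1 万个随机点 → 2D 卷积可处理的 BEV 特征图
pts = torch.rand(10000, 3) * torch.tensor([69.0, 79.0, 4.0]) + torch.tensor([0.0, -39.68, -2.0])
pseudo_image = pillarize(pts)
```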
图 8  典型相机-激光雷达融合检测网络结构的示意图
| 融合方法 | 模型 | mAP/% | t/ms |
| --- | --- | --- | --- |
| 提议级融合 | CenterFusion[35] | 32.6 | — |
| 提议级融合 | TransFusion[42] | 68.9 | 156.6 |
| 提议级融合 | FUTR3D[43] | 64.5 | 321.4 |
| 点级融合 | FusionPainting[36] | 68.1 | 185.8 |
| 并行融合 | BEVFusion[37] | 75.0 | 119.2 |
表 2  不同融合方法在NuScenes数据集上的性能对比
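表2、表4与表5均以 mAP 作为精度指标. 作为参考,其通用定义可写作下式(NuScenes 等数据集的官方评测还会在若干中心距离匹配阈值上取平均,具体以各数据集的官方定义为准):

$$
\mathrm{AP}_c=\int_{0}^{1} p_c(r)\,\mathrm{d}r, \qquad
\mathrm{mAP}=\frac{1}{\lvert \mathcal{C} \rvert}\sum_{c\in\mathcal{C}}\mathrm{AP}_c
$$

其中 $p_c(r)$ 为类别 $c$ 的准确率-召回率曲线,$\mathcal{C}$ 为参与评测的类别集合.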
图 9  轨道沿线环境状态感知的典型案例
图 10  接触网异物入侵典型案例
| 算法类别 | 算法 | 主要思路 | 图片数量 | 异物类别 |
| --- | --- | --- | --- | --- |
| YOLO系列 | YOLOv4-EDAM[61] | 基于轻量级网络改进YOLOv4的主干网络,嵌入注意力机制 | 1 232 | 鸟巢、风筝、气球、垃圾 |
| YOLO系列 | ST2Rep–YOLOX[58] | 基于Swin Transformer改进YOLOX主干,引入高效算子 | 1 560 | 鸟巢、风筝、气球 |
| YOLO系列 | DF-YOLO[62] | 基准模型为YOLOv7-tiny,引入可形变卷积,焦点损失 | 1 942 | 鸟巢、风筝、气球、垃圾 |
| RCNN系列 | RCNN4SPTL[63] | 在Faster RCNN的基础上,利用小卷积核优化网络 | 5 000 | 漂浮物、气球、风筝 |
| 传统方法结合分类模型 | Yu等[59] | 通过二值化和形态学处理提取异物区域,基于CNN分类 | 861 | 鸟巢、气球、风筝、塑料 |
表 3  异物检测算法的概况
图 11  基于光照曲线参数估计的渐进式低照度图像增强网络
图 12  低照度增强算法处理前后的检测结果
图 13  使用PointNet++的铁路场景分割效果
图 14  FarNet的网络结构[72]
图 15  OSDaR23数据集的数据采集设备[76]
| 模型 | 模态 | mAP(行人)/% | mAP(接触网杆)/% | mAP(信号杆)/% | mAP(道路车辆)/% | mAP(止冲挡)/% |
| --- | --- | --- | --- | --- | --- | --- |
| BEVFusion | C | 28.76 | 0.01 | 4.66 | 20.06 | 16.53 |
| +TF | C | 32.29 | 0.29 | 8.30 | 27.83 | 25.43 |
| BEVFusion | L | 79.99 | 90.33 | 75.63 | 59.57 | 82.26 |
| +TF | L | 85.56 | 90.99 | 81.32 | 65.85 | 85.20 |
| +TF+TA-GTP | L | 86.94 | 90.72 | 80.10 | 67.84 | 85.46 |
| BEVFusion | L+C | 86.79 | 88.85 | 73.36 | 64.87 | 83.83 |
| +TF | L+C | 87.25 | 91.57 | 69.98 | 66.40 | 83.46 |
表 4  OSDaR23数据集中BEVFusion的多模态检测实验结果
| 模型 | 模态 | mAP(D<50 m)/% | mAP(D∈[50,100) m)/% | mAP(D∈[100,150) m)/% | mAP(D∈[150,200] m)/% | mAP(D>200 m)/% |
| --- | --- | --- | --- | --- | --- | --- |
| BEVFusion | 相机 | 20.20 | 47.99 | 22.35 | 0.00 | 0.00 |
| +TF | 相机 | 24.03 | 47.58 | 21.27 | 0.05 | 4.86 |
| BEVFusion | 激光雷达 | 73.91 | 74.27 | 71.07 | 49.75 | 78.40 |
| +TF | 激光雷达 | 88.29 | 75.09 | 66.80 | 51.04 | 79.73 |
| +TF+TA-GTP | 激光雷达 | 88.08 | 74.96 | 71.46 | 52.08 | 78.96 |
| BEVFusion | 相机+激光雷达 | 81.37 | 74.22 | 67.71 | 50.56 | 76.32 |
| +TF | 相机+激光雷达 | 86.68 | 74.70 | 70.20 | 54.65 | 80.86 |
表 5  不同检测距离下BEVFusion的平均精度均值
1 王志忠 铁路施工安全管理的桎梏及应对[J]. 中国安全科学学报, 2021, 31 (Suppl.1): 56- 61
WANG Zhizhong Shackles of railway construction safety management and their countermeasures[J]. China Safety Science Journal, 2021, 31 (Suppl.1): 56- 61
2 YANG B, FANG L Automated extraction of 3-D railway tracks from mobile laser scanning point clouds[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7 (12): 4750- 4761
doi: 10.1109/JSTARS.2014.2312378
3 ZHU L, HYYPPA J The use of airborne and mobile laser scanning for modeling railway environments in 3D[J]. Remote Sensing, 2014, 6 (4): 3075- 3100
doi: 10.3390/rs6043075
4 LECUN Y, BENGIO Y, HINTON G Deep learning[J]. Nature, 2015, 521 (7553): 436- 444
doi: 10.1038/nature14539
5 王泉东, 杨岳, 罗意平, 等 铁路侵限异物检测方法综述[J]. 铁道科学与工程学报, 2019, 16 (12): 3152- 3159
WANG Quandong, YANG Yue, LUO Yiping, et al Review on railway intrusion detection methods[J]. Journal of Railway Science and Engineering, 2019, 16 (12): 3152- 3159
6 RISTIĆ-DURRANT D, FRANKE M, MICHELS K A review of vision-based on-board obstacle detection and distance estimation in railways[J]. Sensors, 2021, 21 (10): 3452
doi: 10.3390/s21103452
7 LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation [C]// IEEE Conference on Computer Vision and Pattern Recognition . Boston: IEEE, 2015: 3431–3440.
8 RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation [C]// Medical Image Computing and Computer-Assisted Intervention . Munich: Springer, 2015: 234–241.
9 CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation [C]// European Conference on Computer Vision . Munich: Springer, 2018: 833–851.
10 YU C, GAO C, WANG J, et al BiSeNet V2: bilateral network with guided aggregation for real-time semantic segmentation[J]. International Journal of Computer Vision, 2021, 129: 3051- 3068
doi: 10.1007/s11263-021-01515-2
11 LIN T Y, GOYAL P, GIRSHICK R, et al. Focal Loss for dense object detection [C]// 2017 IEEE International Conference on Computer Vision . Venice: IEEE, 2017: 2999–3007.
12 BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection [EB/OL]. (2020–04–23)[2024−03−05]. https://arxiv.org/pdf/2004.10934.
13 LIU S, QI L, QIN H, et al. Path aggregation network for instance segmentation [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Salt Lake City: IEEE, 2018: 8759–8768.
14 TAN M, PANG R, LE Q V. EfficientDet: scalable and efficient object detection [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 10781–10790.
15 LORE K G, AKINTAYO A, SARKAR S LLNet: a deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650- 662
doi: 10.1016/j.patcog.2016.06.008
16 LV F, LU F, WU J, et al. MBLLEN: low-light image/video enhancement using CNNs [C]// British Machine Vision Conference . Newcastle: [s.n.], 2018: 1–13.
17 WEI C, WANG W, YANG W, et al. Deep Retinex decomposition for low-light enhancement [EB/OL]. (2018−08−14)[2024−03−05]. https://arxiv.org/pdf/1808.04560.
18 ZHANG Y, ZHANG J, GUO X. Kindling the darkness: a practical low-light image enhancer [C]// Proceedings of the 27th ACM International Conference on Multimedia . [S.l.]: ACM, 2019: 1632–1640.
19 JIANG Y, GONG X, LIU D, et al EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340- 2349
doi: 10.1109/TIP.2021.3051462
20 ZHANG L, ZHANG L, LIU X, et al. Zero-shot restoration of back-lit images using deep internal learning [C]// Proceedings of the 27th ACM International Conference on Multimedia . [S.l.]: ACM, 2019: 1623–1631.
21 ZHU A, ZHANG L, SHEN Y, et al. Zero-shot restoration of underexposed images via robust Retinex decomposition [C]// 2020 IEEE International Conference on Multimedia and Expo . London: IEEE, 2020: 1–6.
22 GUO C, LI C, GUO J, et al. Zero-reference deep curve estimation for low-light image enhancement [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 1780–1789.
23 QI C R, SU H, MO K, et al. PointNet: deep learning on point sets for 3D classification and segmentation [C]// IEEE Conference on Computer Vision and Pattern Recognition . Honolulu: IEEE, 2017: 652–660.
24 WU B, WAN A, YUE X, et al. SqueezeSeg: convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud [C]// IEEE International Conference on Robotics and Automation . Brisbane: IEEE, 2018: 1887–1893.
25 ZHANG Y, ZHOU Z, DAVID P, et al. PolarNet: an improved grid representation for online LiDAR point clouds semantic segmentation [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 9601–9610.
26 ZHOU Y, TUZEL O. VoxelNet: end-to-end learning for point cloud based 3D object detection [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Salt Lake City: IEEE, 2018: 4490–4499.
27 YAN Y, MAO Y, LI B SECOND: sparsely embedded convolutional detection[J]. Sensors, 2018, 18 (10): 3337
doi: 10.3390/s18103337
28 LANG A H, VORA S, CAESAR H, et al. PointPillars: fast encoders for object detection from point clouds [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Long Beach: IEEE, 2019: 12697–12705.
29 SHI S, WANG X, LI H. PointRCNN: 3D object proposal generation and detection from point cloud [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Long Beach: IEEE, 2019: 770–779.
30 SHI S, WANG Z, WANG X, et al. Part-A2 net: 3D part-aware and aggregation neural network for object detection from point cloud [EB/OL]. (2020−03−16)[2024−03−05]. https://arxiv.org/pdf/1907.03670v1.
31 ZHANG Y, ZHANG Q, ZHU Z, et al GLENet: boosting 3D object detectors with generative label uncertainty estimation[J]. International Journal of Computer Vision, 2023, 131: 3332- 3352
doi: 10.1007/s11263-023-01869-9
32 SHI S, GUO C, JIANG L, et al. PV-RCNN: point-voxel feature set abstraction for 3D object detection [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 10529–10538.
33 PAN X, XIA Z, SONG S, et al. 3D object detection with pointformer [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Nashville: IEEE, 2021: 7463–7472.
34 SHENG H, CAI S, LIU Y, et al. Improving 3D object detection with channel-wise Transformer [C]// IEEE/CVF International Conference on Computer Vision . Montreal: IEEE, 2021: 2743–2752.
35 NABATI R, QI H. CenterFusion: center-based radar and camera fusion for 3D object detection [C]// IEEE Winter Conference on Applications of Computer Vision . Waikoloa: IEEE, 2021: 1527–1536.
36 VORA S, LANG A H, HELOU B, et al. PointPainting: sequential fusion for 3D object detection [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 4604–4612.
37 LIU Z, TANG H, AMINI A, et al. BEVFusion: multi-task multi-sensor fusion with unified bird’s-eye view representation [C]// IEEE International Conference on Robotics and Automation . London: IEEE, 2023: 2774–2781.
38 CAESAR H, BANKITI V, LANG A H, et al. NuScenes: a multimodal dataset for autonomous driving [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . Seattle: IEEE, 2020: 11621–11631.
39 刘朝辉, 杨杰, 陈智超 基于深度学习的轨道表面异物识别方法[J]. 中国铁道科学, 2023, 44 (3): 23- 33
LIU Zhaohui, YANG Jie, CHEN Zhichao Foreign object recognition method for track surface based on deep learning[J]. China Railway Science, 2023, 44 (3): 23- 33
doi: 10.3969/j.issn.1001-4632.2023.03.03
40 何文玉, 杨杰, 张天露 基于深度学习的轨道异物入侵检测算法[J]. 计算机工程与设计, 2020, 41 (12): 3376- 3383
HE Wenyu, YANG Jie, ZHANG Tianlu Orbital foreign object intrusion detection algorithm based on deep learning[J]. Computer Engineering and Design, 2020, 41 (12): 3376- 3383
41 HE D, ZOU Z, CHEN Y, et al Obstacle detection of rail transit based on deep learning[J]. Measurement, 2021, 176: 109241
doi: 10.1016/j.measurement.2021.109241
42 BAI X, HU Z, ZHU X, et al. TransFusion: robust LiDAR-camera fusion for 3D object detection with transformers [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition . New Orleans: IEEE, 2022: 1090–1099.
43 CHEN X, ZHANG T, WANG Y, et al. FUTR3D: a unified sensor fusion framework for 3D detection [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops . Vancouver: IEEE, 2023: 172–181.
44 TONG L, WANG Z, JIA L, et al Fully decoupled residual ConvNet for real-time railway scene parsing of UAV aerial images[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23 (9): 14806- 14819
doi: 10.1109/TITS.2021.3134318
45 KIM B, KIM I, KIM N, et al SeMA-UNet: a semi-supervised learning with multimodal approach of UNet for effective segmentation of key components in railway images[J]. Journal of Electrical Engineering and Technology, 2024, 19: 3317- 3330
doi: 10.1007/s42835-024-01867-y
46 WU Y, MENG F, QIN Y, et al UAV imagery based potential safety hazard evaluation for high-speed railroad using real-time instance segmentation[J]. Advanced Engineering Informatics, 2023, 55: 101819
doi: 10.1016/j.aei.2022.101819
47 WU Y, CHEN P, QIN Y, et al Automatic railroad track components inspection using hybrid deep learning framework[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 5011415
48 CHEN Z, YANG J, ZHOU F RailSegVITNet: a lightweight VIT-based real-time track surface segmentation network for improving railroad safety[J]. Journal of King Saud University-Computer and Information Sciences, 2024, 36 (1): 101929
doi: 10.1016/j.jksuci.2024.101929
49 CHEN Z, YANG J, CHEN L, et al Efficient railway track region segmentation algorithm based on lightweight neural network and cross-fusion decoder[J]. Automation in Construction, 2023, 155: 105069
doi: 10.1016/j.autcon.2023.105069
50 BRUCKER M, CRAMARIUC A, VON EINEM C, et al. Local and global information in obstacle detection on railway tracks [C]// IEEE/RSJ International Conference on Intelligent Robots and Systems . Detroit: IEEE, 2023: 9049–9056.
51 FENG Z, YANG J, CHEN Z, et al LRseg: an efficient railway region extraction method based on lightweight encoder and self-correcting decoder[J]. Expert Systems with Applications, 2024, 238: 122386
doi: 10.1016/j.eswa.2023.122386
52 于新善, 孟祥印, 金腾飞, 等 基于改进Canny算法的物体边缘检测算法[J]. 激光与光电子学进展, 2023, 60 (22): 2212002
YU Xinshan, MENG Xiangyin, JIN Tengfei, et al Object edge detection algorithm based on improved Canny algorithm[J]. Laser and Optoelectronics Progress, 2023, 60 (22): 2212002
doi: 10.3788/LOP223400
53 LIU W, WANG L Quantum image edge detection based on eight-direction Sobel operator for NEQR[J]. Quantum Information Processing, 2022, 21: 190
doi: 10.1007/s11128-022-03527-4
54 耿庆华, 刘伟铭, 刘瑞康 基于空间尺度标准化的动车组底部异常检测[J]. 铁道学报, 2022, 44 (5): 67- 75
GENG Qinghua, LIU Weiming, LIU Ruikang Anomaly detection of bottom of EMU based on space-scale standardization[J]. Journal of the China Railway Society, 2022, 44 (5): 67- 75
doi: 10.3969/j.issn.1001-8360.2022.05.009
55 王世勇, 乾国康, 李迪, 等 面向边缘特征的实时模板匹配方法[J]. 华南理工大学学报: 自然科学版, 2023, 51 (9): 1- 10
WANG Shiyong, QIAN Guokang, LI Di, et al Real-time template matching method for edge features[J]. Journal of South China University of Technology: Natural Science Edition, 2023, 51 (9): 1- 10
56 CHEN C, YANG B, SONG S, et al Automatic clearance anomaly detection for transmission line corridors utilizing UAV-borne LiDAR data[J]. Remote Sensing, 2018, 10 (4): 613
doi: 10.3390/rs10040613
57 LI H, DONG Y, LIU Y, et al Design and implementation of UAVs for bird’s nest inspection on transmission lines based on deep learning[J]. Drones, 2022, 6 (9): 252
doi: 10.3390/drones6090252
58 TANG C, DONG H, HUANG Y, et al Foreign object detection for transmission lines based on Swin Transformer V2 and YOLOX[J]. The Visual Computer, 2024, 40: 3003- 3021
doi: 10.1007/s00371-023-03004-8
59 YU Y, QIU Z, LIAO H, et al A method based on multi-network feature fusion and random forest for foreign objects detection on transmission lines[J]. Applied Sciences, 2022, 12 (10): 4982
doi: 10.3390/app12104982
60 CHEN Z, YANG J, FENG Z, et al RailFOD23: a dataset for foreign object detection on railroad transmission lines[J]. Scientific Data, 2024, 11: 72
doi: 10.1038/s41597-024-02918-9
61 QIU Z, ZHU X, LIAO C, et al A lightweight YOLOv4-EDAM model for accurate and real-time detection of foreign objects suspended on power lines[J]. IEEE Transactions on Power Delivery, 2022, 38 (2): 1329- 1340
62 LI S, LIU Y, LI M, et al DF-YOLO: highly accurate transmission line foreign object detection algorithm[J]. IEEE Access, 2023, 11: 108398- 108406
doi: 10.1109/ACCESS.2023.3321385
63 ZHANG W, LIU X, YUAN J, et al RCNN-based foreign object detection for securing power transmission lines (RCNN4SPTL)[J]. Procedia Computer Science, 2019, 147: 331- 337
doi: 10.1016/j.procs.2019.01.232
64 LI G, YANG Y, QU X, et al A deep learning based image enhancement approach for autonomous driving at night[J]. Knowledge-Based Systems, 2021, 213: 106617
doi: 10.1016/j.knosys.2020.106617
65 刘文强. 基于深度学习的接触网支持装置状态检测方法研究[D]. 成都: 西南交通大学, 2021.
LIU Wenqiang. Study on deep learning-based state detection method study for catenary support devices [D]. Chengdu: Southwest Jiaotong University, 2021.
66 CHEN Z, YANG J, YANG C BrightsightNet: a lightweight progressive low-light image enhancement network and its application in “Rainbow” maglev train[J]. Journal of King Saud University-Computer and Information Sciences, 2023, 35 (10): 101814
doi: 10.1016/j.jksuci.2023.101814
67 LIN S, XU C, CHEN L, et al LiDAR point cloud recognition of overhead catenary system with deep learning[J]. Sensors, 2020, 20 (8): 2212
doi: 10.3390/s20082212
68 YU X, HE W, QIAN X, et al Real-time rail recognition based on 3D point clouds[J]. Measurement Science and Technology, 2022, 33 (10): 105207
doi: 10.1088/1361-6501/ac750c
69 DIBARI P, NITTI M, MAGLIETTA R, et al. Semantic segmentation of multimodal point clouds from the railway context [C]// Multimodal Sensing and Artificial Intelligence: Technologies and Applications II . Washington: SPIE, 2021, 11785: 158–166.
70 GRANDIO J, RIVEIRO B, SOILÁN M, et al Point cloud semantic segmentation of complex railway environments using deep learning[J]. Automation in Construction, 2022, 141: 104425
doi: 10.1016/j.autcon.2022.104425
71 SOILÁN M, NÓVOA A, SÁNCHEZ-RODRÍGUEZ A, et al Semantic segmentation of point clouds with PointNet and KPConv architectures applied to railway tunnels[J]. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, 2: 281- 288
72 WANG Z, YU G, CHEN P, et al FarNet: an attention-aggregation network for long-range rail track point cloud segmentation[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23 (8): 13118- 13126
doi: 10.1109/TITS.2021.3119900
73 LIU P, YU G, WANG Z, et al Uncertainty-aware point-cloud semantic segmentation for unstructured roads[J]. IEEE Sensors Journal, 2023, 23 (13): 15071- 15080
doi: 10.1109/JSEN.2023.3266802
74 MAURI A, KHEMMAR R, DECOUX B, et al Real-time 3D multi-object detection and localization based on deep learning for road and railway smart mobility[J]. Journal of Imaging, 2021, 7 (8): 145
doi: 10.3390/jimaging7080145
75 MAURI A, KHEMMAR R, DECOUX B, et al Lightweight convolutional neural network for real-time 3D object detection in road and railway environments[J]. Journal of Real-Time Image Processing, 2022, 19: 499- 516
doi: 10.1007/s11554-022-01202-6
76 TAGIEW R, KLASEK P, TILLY R, et al. OSDaR23: open sensor data for rail 2023 [C]// International Conference on Robotics and Automation Engineering . Singapore: IEEE, 2023: 270–276.
77 KOPUZ E. Multi-modal 3D object detection in long range and low-resolution conditions of sensors [D]. Munich: Technical University of Munich, 2023.
78 WU Y, QIN Y, QIAN Y, et al Automatic detection of arbitrarily oriented fastener defect in high-speed railway[J]. Automation in Construction, 2021, 131: 103913
doi: 10.1016/j.autcon.2021.103913