Journal of Zhejiang University (Engineering Science)  2025, Vol. 59, Issue (12): 2585-2592    DOI: 10.3785/j.issn.1008-973X.2025.12.013
Traffic Engineering and Civil Engineering
Adaptive graph attention Transformer for dynamic traffic flow prediction
Yuxuan LIU1,2, Yizhi LIU1,2,*, Zhuhua LIAO1,2, Zhengbiao ZOU1,2, Jingxin TANG1,2
1. School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
2. Hunan Key Laboratory for Service Computing and Novel Software Technology, Hunan University of Science and Technology, Xiangtan 411201, China
Abstract:

Existing traffic flow prediction models based on graph neural networks and attention mechanisms fall short in capturing complex spatiotemporal dependencies, overcoming the constraints of predefined graph structures, and modeling periodic patterns. To address these issues, a multi-scale adaptive graph attention Transformer (MSAGAFormer) was proposed. Short-, medium-, and long-term historical traffic data were each subdivided into sequences at low, medium, and high temporal scales, and a compression mechanism was employed to reduce redundant information and improve the efficiency of temporal feature representation. A spatiotemporal embedding method was designed to encode node positions and temporal attributes, strengthening the model's ability to interpret spatiotemporal data. The spatial layer used a GAT-based multi-head attention mechanism to model dynamic spatial correlations, while the temporal layer incorporated a multi-scale temporal attention structure to capture dynamic variations across different temporal granularities. Experimental results on the PEMS datasets showed that MSAGAFormer outperformed state-of-the-art models such as Trendformer, ATST-GCN, and STTN in prediction accuracy.
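The abstract describes the spatial layer as GAT-based multi-head attention over an adaptive graph, but gives no equations on this page. As a rough illustration under standard GAT assumptions (single head; all shapes, the masking scheme, and the LeakyReLU slope are illustrative, not the paper's exact formulation):

```python
import numpy as np

def gat_attention(x, adj, w, a, alpha=0.2):
    """One GAT-style attention head over a graph mask.

    x:   (N, F) node features; adj: (N, N) 0/1 adjacency mask;
    w:   (F, H) linear projection; a: (2H,) attention vector.
    """
    h = x @ w                                   # (N, H) projected features
    d = h.shape[1]
    # Attention logits e_ij = LeakyReLU(a^T [h_i || h_j]), computed pairwise
    src = (h @ a[:d])[:, None]                  # (N, 1) contribution of h_i
    dst = (h @ a[d:])[None, :]                  # (1, N) contribution of h_j
    e = src + dst                               # (N, N) raw logits
    e = np.where(e > 0, e, alpha * e)           # LeakyReLU
    e = np.where(adj > 0, e, -1e9)              # mask out non-edges
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)  # row-wise softmax over neighbors
    return att @ h                              # (N, H) aggregated node features

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                     # 4 sensors, 3 input features
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)     # toy road-network adjacency
out = gat_attention(x, adj, rng.normal(size=(3, 8)), rng.normal(size=(16,)))
print(out.shape)  # (4, 8)
```

In an adaptive-graph variant, `adj` would itself be learned rather than predefined; multiple such heads would be concatenated for multi-head attention.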

Key words: intelligent transportation system; traffic flow prediction; Transformer; adaptive graph; multi-time scale
Received: 2024-10-29    Published: 2025-11-25
CLC:  TP 393  
Supported by: the Humanities and Social Sciences Research Planning Fund of the Ministry of Education (24YJAZH237); the Key Research and Development Program of Hunan Province (2023SK2081); the Natural Science Foundation of Hunan Province (2024JJ5163); the Key Scientific Research Project of the Hunan Provincial Department of Education (22A0341).
Corresponding author: Yizhi LIU    E-mail: lyx_research@sina.cn; yizhi_liu@sina.cn
About the first author: LIU Yuxuan (1998—), male, master's student, engaged in trajectory data mining research. orcid.org/0009-0008-6969-8036. E-mail: lyx_research@sina.cn

Cite this article:


Yuxuan LIU, Yizhi LIU, Zhuhua LIAO, Zhengbiao ZOU, Jingxin TANG. Adaptive graph attention Transformer for dynamic traffic flow prediction. Journal of Zhejiang University (Engineering Science), 2025, 59(12): 2585-2592.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.12.013        https://www.zjujournals.com/eng/CN/Y2025/V59/I12/2585

Fig. 1  Architecture of MSAGAFormer
Fig. 2  Spatiotemporal aggregation module
Fig. 3  Spatiotemporal gated convolution module
Dataset    ns     t0/min    Timestamps    Time range
PEMS04     307    5         16992         2018/01/01—2018/02/28
PEMS08     170    5         17856         2018/07/01—2018/08/31
Table 1  Dataset description
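The timestamp counts in Table 1 follow directly from the 5-minute sampling interval: 288 slots per day over 59 days (PEMS04) and 62 days (PEMS08). A minimal sketch of how such a series is typically cut into input/target windows for training (the 12-step lengths are illustrative; the paper's exact short/medium/long-term splits are not specified on this page):

```python
import numpy as np

STEPS_PER_DAY = 24 * 60 // 5          # 288 five-minute slots per day

# The step counts in Table 1 match the date ranges exactly:
assert 59 * STEPS_PER_DAY == 16992    # PEMS04: 2018/01/01–02/28, 59 days
assert 62 * STEPS_PER_DAY == 17856    # PEMS08: 2018/07/01–08/31, 62 days

def sliding_windows(series, in_len, out_len):
    """Cut a (T, N) sensor series into (input, target) window pairs."""
    xs, ys = [], []
    for t in range(len(series) - in_len - out_len + 1):
        xs.append(series[t : t + in_len])                  # history window
        ys.append(series[t + in_len : t + in_len + out_len])  # horizon to predict
    return np.stack(xs), np.stack(ys)

demo = np.arange(16992 * 2, dtype=float).reshape(16992, 2)  # toy 2-sensor series
x, y = sliding_windows(demo, in_len=12, out_len=12)         # 12 steps = 1 hour
print(x.shape, y.shape)  # (16969, 12, 2) (16969, 12, 2)
```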
Model          Tp=15 min       Tp=30 min       Tp=45 min       Tp=60 min
               RMSE    MAE     RMSE    MAE     RMSE    MAE     RMSE    MAE
STGCN          30.49   19.98   33.23   21.46   36.87   24.48   39.41   26.93
T-GCN          29.39   20.32   32.79   21.94   36.27   24.06   39.88   27.27
DCRNN          28.65   19.06   32.72   22.09   35.74   23.89   41.19   28.51
DMSTGCN        28.02   18.81   31.46   21.02   34.38   22.64   38.67   26.15
Trendformer    27.47   18.62   30.59   20.56   33.53   22.32   34.17   23.54
STTN           26.95   17.98   29.76   19.48   31.28   21.76   33.35   22.43
ATST-GCN       27.32   18.45   30.19   20.07   30.24   21.03   32.49   22.17
MSAGAFormer    25.78   16.33   27.51   17.42   27.13   17.94   28.89   18.76
Table 2  RMSE and MAE of different models on the PEMS04 dataset
Model          Tp=15 min       Tp=30 min       Tp=45 min       Tp=60 min
               RMSE    MAE     RMSE    MAE     RMSE    MAE     RMSE    MAE
STGCN          26.98   19.77   29.47   21.52   32.87   24.14   34.59   25.97
T-GCN          25.34   20.22   28.74   22.41   33.05   25.32   35.15   27.64
DCRNN          24.46   17.93   27.59   20.37   31.49   23.78   32.92   25.13
DMSTGCN        24.01   17.58   26.89   19.84   29.34   22.08   31.73   24.68
Trendformer    23.71   17.33   26.15   19.54   28.84   21.85   30.58   24.06
STTN           22.78   16.95   25.48   19.16   27.38   20.92   29.04   22.87
ATST-GCN       23.58   17.26   25.58   19.28   27.16   20.79   28.78   22.67
MSAGAFormer    21.87   16.05   23.87   17.26   25.13   17.76   26.63   19.14
Table 3  RMSE and MAE of different models on the PEMS08 dataset
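The comparison tables report RMSE and MAE at each prediction horizon Tp. Their standard definitions, as presumably used here, are root-mean-square error and mean absolute error over all sensors and horizons:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt of the mean squared residual."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: mean of the absolute residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy flows (vehicles per interval) at four sensors
y_true = np.array([30.0, 28.0, 35.0, 40.0])
y_pred = np.array([32.0, 27.0, 33.0, 41.0])
print(rmse(y_true, y_pred), mae(y_true, y_pred))
```

RMSE penalizes large deviations more heavily than MAE, which is why the two metrics can rank models differently at long horizons.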
Variant        Tp=15 min       Tp=30 min       Tp=45 min       Tp=60 min
               RMSE    MAE     RMSE    MAE     RMSE    MAE     RMSE    MAE
w/o MH         26.76   18.12   30.14   20.34   36.25   23.05   36.17   26.77
w/o LH         28.67   19.78   31.66   21.31   29.43   19.32   30.43   19.71
w/o LM         33.32   27.03   28.32   18.76   29.12   19.45   35.41   25.88
w/o adp        27.01   17.68   28.63   18.59   29.89   19.69   30.83   20.89
w/o dis        27.38   18.12   28.96   18.96   30.12   20.43   31.25   21.54
MSAGAFormer    25.78   16.33   27.51   17.42   27.13   17.94   28.89   18.76
Table 4  RMSE and MAE of different model variants on the PEMS04 dataset
Variant        Tp=15 min       Tp=30 min       Tp=45 min       Tp=60 min
               RMSE    MAE     RMSE    MAE     RMSE    MAE     RMSE    MAE
w/o MH         22.96   16.78   25.61   19.32   29.25   20.45   34.73   26.33
w/o LH         24.08   17.58   24.26   18.06   27.13   19.08   31.93   22.14
w/o LM         29.72   20.47   26.34   19.86   28.71   19.43   27.65   20.42
w/o adp        23.73   16.97   24.58   18.75   27.16   19.14   30.69   20.04
w/o dis        27.38   18.12   28.96   18.96   30.12   20.43   31.25   21.54
MSAGAFormer    21.87   16.05   23.87   17.26   25.13   17.76   26.63   19.14
Table 5  RMSE and MAE of different model variants on the PEMS08 dataset
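The headline gains over the strongest baselines can be read off the tables as relative error reductions; for example, at Tp = 60 min on PEMS08 (RMSE column of Table 3):

```python
def improvement(baseline, ours):
    """Relative error reduction in percent: lower error is better."""
    return 100.0 * (baseline - ours) / baseline

# Table 3, Tp = 60 min, RMSE on PEMS08:
print(round(improvement(29.04, 26.63), 1))  # vs STTN:     ≈ 8.3 %
print(round(improvement(28.78, 26.63), 1))  # vs ATST-GCN: ≈ 7.5 %
```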
Fig. 4  Example visualization of predictions on the test set
1 NAHELIYA B, REDHU P, KUMAR K. A review on developments in evolutionary computation approaches for road traffic flow prediction [J]. Archives of Computational Methods in Engineering, 2025, 32(3): 1499-1523. doi: 10.1007/s11831-024-10189-1
2 GOMES B, COELHO J, AIDOS H. A survey on traffic flow prediction and classification [J]. Intelligent Systems with Applications, 2023, 20: 200268. doi: 10.1016/j.iswa.2023.200268
3 ZHANG S, TONG H, XU J, et al. Graph convolutional networks: a comprehensive review [J]. Computational Social Networks, 2019, 6(1): 11. doi: 10.1186/s40649-019-0069-y
4 SUN C, LI C, LIN X, et al. Attention-based graph neural networks: a survey [J]. Artificial Intelligence Review, 2023, 56(2): 2263-2310.
5 LI Y, YU R, SHAHABI C, et al. Diffusion convolutional recurrent neural network: data-driven traffic forecasting [C]// 6th International Conference on Learning Representations. Vancouver: MIT Press, 2018: 1-16.
6 ZHAO L, SONG Y, ZHANG C, et al. T-GCN: a temporal graph convolutional network for traffic prediction [J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(9): 3848-3858. doi: 10.1109/TITS.2019.2935152
7 YU B, YIN H, ZHU Z. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting [C]// 27th International Joint Conference on Artificial Intelligence. Stockholm: International Joint Conferences on Artificial Intelligence Organization, 2018: 3634-3640.
8 ZUO J, ZEITOUNI K, TAHER Y, et al. Graph convolutional networks for traffic forecasting with missing values [J]. Data Mining and Knowledge Discovery, 2023, 37(2): 913-947. doi: 10.1007/s10618-022-00903-7
9 SHIN Y, YOON Y. PGCN: progressive graph convolutional networks for spatial–temporal traffic forecasting [J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(7): 7633-7644. doi: 10.1109/TITS.2024.3349565
10 YU W, HUANG X, QIU Y, et al. GSTC-Unet: a U-shaped multi-scaled spatiotemporal graph convolutional network with channel self-attention mechanism for traffic flow forecasting [J]. Expert Systems with Applications, 2023, 232: 120724. doi: 10.1016/j.eswa.2023.120724
11 XU M, DAI W, LIU C, et al. Spatial-temporal transformer networks for traffic flow forecasting [EB/OL]. (2021-05-29). https://arxiv.org/abs/2001.02908.
12 JIANG J, HAN C, ZHAO W X, et al. PDFormer: propagation delay-aware dynamic long-range transformer for traffic flow prediction [J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(4): 4365-4373. doi: 10.1609/aaai.v37i4.25556
13 WU H, XU J, WANG J, et al. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting [J]. Advances in Neural Information Processing Systems, 2021, 34: 22419-22430.
14 BAI J, ZHU J, SONG Y, et al. A3T-GCN: attention temporal graph convolutional network for traffic forecasting [J]. ISPRS International Journal of Geo-Information, 2021, 10(7): 485. doi: 10.3390/ijgi10070485
15 REN Q, LI Y, LIU Y. Transformer-enhanced periodic temporal convolution network for long short-term traffic flow forecasting [J]. Expert Systems with Applications, 2023, 227: 120203. doi: 10.1016/j.eswa.2023.120203
16 GAO M, DU Z, QIN H, et al. Dynamic multi-scale spatial-temporal graph convolutional network for traffic flow prediction [J]. Knowledge-Based Systems, 2024, 305: 112586. doi: 10.1016/j.knosys.2024.112586
17 CAI W, LIANG Y, LIU X, et al. MSGNet: learning multi-scale inter-series correlations for multivariate time series forecasting [J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2024, 38(10): 11141-11149. doi: 10.1609/aaai.v38i10.28991
18 LUO Q, HE S, HAN X, et al. LSTTN: a long-short term transformer-based spatiotemporal neural network for traffic flow forecasting [J]. Knowledge-Based Systems, 2024, 293: 111637. doi: 10.1016/j.knosys.2024.111637
19 WANG B, WANG J. ST-MGAT: spatio-temporal multi-head graph attention network for traffic prediction [J]. Physica A: Statistical Mechanics and its Applications, 2022, 603: 127762. doi: 10.1016/j.physa.2022.127762
20 WANG Ming, PENG Jian, HUANG Feihu. Multi-time scale spatial-temporal graph neural network for traffic flow prediction [J]. Computer Science, 2022, 49(8): 40-48. (in Chinese)
21 HAN L, DU B, SUN L, et al. Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting [C]// 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. [S.l.]: ACM, 2021: 547-555.
22 HUANG L, ZHU F, LI Z. Trendformer: trend adaptive transformer for traffic flow prediction [C]// 5th International Conference on Data Science and Information Technology. Shanghai: IEEE, 2022: 1-6.
23 ZOU Zhengbiao, LIU Yizhi, LIAO Zhuhua, et al. Attention-based spatio-temporal graph convolutional network for dynamic traffic flow prediction [J]. Journal of Shandong University: Engineering Science, 2024, 54(5): 50-61. (in Chinese)