Journal of ZheJiang University (Engineering Science)  2025, Vol. 59 Issue (12): 2585-2592    DOI: 10.3785/j.issn.1008-973X.2025.12.013
    
Adaptive graph attention Transformer for dynamic traffic flow prediction
Yuxuan LIU1,2(),Yizhi LIU1,2,*(),Zhuhua LIAO1,2,Zhengbiao ZOU1,2,Jingxin TANG1,2
1. School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
2. Hunan Key Laboratory for Service Computing and Novel Software Technology, Hunan University of Science and Technology, Xiangtan 411201, China

Abstract  

Existing traffic flow prediction models based on graph neural networks and attention mechanisms have shortcomings in capturing complex spatiotemporal dependencies, overcoming the constraints of predefined graph structures, and modeling periodic patterns. To address these issues, a multi-scale adaptive graph attention Transformer (MSAGAFormer) was proposed. Short-, medium-, and long-term historical traffic data were each subdivided into low-, medium-, and high-scale temporal sequences, and a compression mechanism was employed to reduce redundant information and enhance the efficiency of temporal feature representation. A spatiotemporal embedding method was designed to encode node positions and temporal attributes, thereby strengthening the model's capability to interpret spatiotemporal data. A GAT-based multi-head attention mechanism was utilized in the spatial layer to model dynamic spatial correlations, while a multi-scale temporal attention structure was incorporated in the temporal layer to capture dynamic variations across different temporal granularities. Experimental results on the PEMS datasets demonstrated that MSAGAFormer outperformed state-of-the-art models such as Trendformer, ATST-GCN, and STTN in prediction accuracy.
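As a rough illustration of the spatial layer described above, the sketch below shows a simplified multi-head graph attention layer with a learnable adaptive adjacency bias, written in PyTorch. It uses scaled dot-product attention rather than the original GAT formulation, and the class name, dimensions, and node-embedding bias are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a simplified multi-head
# graph attention layer with a learnable adaptive adjacency bias, so that
# spatial correlations are not fixed by a predefined graph.
import torch
import torch.nn as nn


class AdaptiveGraphAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int, num_nodes: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # Learnable node embeddings; their pairwise similarities act as an
        # adaptive adjacency that biases the attention scores.
        self.node_emb = nn.Parameter(torch.randn(num_nodes, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, dim) node features at one time step
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (b, heads, n, head_dim)
        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        adaptive_bias = torch.softmax(self.node_emb @ self.node_emb.T, dim=-1)
        scores = scores + adaptive_bias                 # broadcast over batch and heads
        attn = torch.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


if __name__ == "__main__":
    layer = AdaptiveGraphAttention(dim=64, num_heads=4, num_nodes=307)
    x = torch.randn(8, 307, 64)                         # (batch, nodes, features)
    print(layer(x).shape)                               # torch.Size([8, 307, 64])
```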



Key words: intelligent transportation system; traffic flow prediction; Transformer; adaptive graph; multi-time scale
Received: 29 October 2024      Published: 25 November 2025
CLC:  TP 393  
Fund: Humanities and Social Sciences Research Planning Fund of the Ministry of Education (24YJAZH237); Key Research and Development Program of Hunan Province (2023SK2081); Natural Science Foundation of Hunan Province (2024JJ5163); Key Research Project of the Education Department of Hunan Province (22A0341).
Corresponding Author: Yizhi LIU     E-mail: lyx_research@sina.cn; yizhi_liu@sina.cn
Cite this article:

Yuxuan LIU,Yizhi LIU,Zhuhua LIAO,Zhengbiao ZOU,Jingxin TANG. Adaptive graph attention Transformer for dynamic traffic flow prediction. Journal of ZheJiang University (Engineering Science), 2025, 59(12): 2585-2592.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2025.12.013     OR     https://www.zjujournals.com/eng/Y2025/V59/I12/2585


Fig.1 Architecture of MSAGAFormer
Fig.2 Spatiotemporal aggregation module
Fig.3 Spatiotemporal gated convolution module
Dataset    Nodes ns    Interval to/min    Timestamps    Time range
PEMS04     307         5                  16992         2018/01/01—2018/02/28
PEMS08     170         5                  17856         2018/07/01—2018/08/31
Tab.1 Dataset description
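For readers unfamiliar with the PEMS preprocessing, the sketch below shows one plausible way to cut the 5-minute series in Tab.1 into sliding (history, target) windows; at 5 min per step, the 15/30/45/60 min horizons in the following tables correspond to 3/6/9/12 steps. The 12-step history length and the function name are assumptions for illustration, not the released code.

```python
# Illustrative sketch (not the released preprocessing code): turning a
# 5-minute PEMS series into sliding (history, target) windows.
import numpy as np


def make_windows(series: np.ndarray, history: int = 12, horizon: int = 12):
    """series: (timestamps, nodes) array of flow readings."""
    xs, ys = [], []
    for t in range(history, series.shape[0] - horizon + 1):
        xs.append(series[t - history:t])        # past `history` steps
        ys.append(series[t:t + horizon])        # next `horizon` steps
    return np.stack(xs), np.stack(ys)


if __name__ == "__main__":
    flow = np.random.rand(16992, 307)           # PEMS04-sized dummy data
    x, y = make_windows(flow, history=12, horizon=12)
    print(x.shape, y.shape)                     # (16969, 12, 307) (16969, 12, 307)
```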
Model          Tp=15 min        Tp=30 min        Tp=45 min        Tp=60 min
               RMSE    MAE      RMSE    MAE      RMSE    MAE      RMSE    MAE
STGCN          30.49   19.98    33.23   21.46    36.87   24.48    39.41   26.93
T-GCN          29.39   20.32    32.79   21.94    36.27   24.06    39.88   27.27
DCRNN          28.65   19.06    32.72   22.09    35.74   23.89    41.19   28.51
DMSTGCN        28.02   18.81    31.46   21.02    34.38   22.64    38.67   26.15
Trendformer    27.47   18.62    30.59   20.56    33.53   22.32    34.17   23.54
STTN           26.95   17.98    29.76   19.48    31.28   21.76    33.35   22.43
ATST-GCN       27.32   18.45    30.19   20.07    30.24   21.03    32.49   22.17
MSAGAFormer    25.78   16.33    27.51   17.42    27.13   17.94    28.89   18.76
Tab.2 Comparison results of RMSE and MAE for different models on PEMS04 dataset
Model          Tp=15 min        Tp=30 min        Tp=45 min        Tp=60 min
               RMSE    MAE      RMSE    MAE      RMSE    MAE      RMSE    MAE
STGCN          26.98   19.77    29.47   21.52    32.87   24.14    34.59   25.97
T-GCN          25.34   20.22    28.74   22.41    33.05   25.32    35.15   27.64
DCRNN          24.46   17.93    27.59   20.37    31.49   23.78    32.92   25.13
DMSTGCN        24.01   17.58    26.89   19.84    29.34   22.08    31.73   24.68
Trendformer    23.71   17.33    26.15   19.54    28.84   21.85    30.58   24.06
STTN           22.78   16.95    25.48   19.16    27.38   20.92    29.04   22.87
ATST-GCN       23.58   17.26    25.58   19.28    27.16   20.79    28.78   22.67
MSAGAFormer    21.87   16.05    23.87   17.26    25.13   17.76    26.63   19.14
Tab.3 Comparison results of RMSE and MAE for different models on PEMS08 dataset
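The RMSE and MAE values in Tab.2 and Tab.3 (and in the ablation tables below) follow the standard definitions; a minimal sketch of how such metrics can be computed is given here, assuming the evaluation averages over all nodes and prediction steps. The authors' evaluation script is not shown on this page.

```python
# Minimal sketch of the RMSE and MAE metrics reported in Tab.2-Tab.5,
# using their standard definitions (not the authors' evaluation code).
import numpy as np


def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_true - y_pred)))


if __name__ == "__main__":
    y_true = np.array([120.0, 95.0, 150.0])
    y_pred = np.array([110.0, 100.0, 140.0])
    print(round(rmse(y_true, y_pred), 2), round(mae(y_true, y_pred), 2))  # 8.66 8.33
```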
Model          Tp=15 min        Tp=30 min        Tp=45 min        Tp=60 min
               RMSE    MAE      RMSE    MAE      RMSE    MAE      RMSE    MAE
w/o MH         26.76   18.12    30.14   20.34    36.25   23.05    36.17   26.77
w/o LH         28.67   19.78    31.66   21.31    29.43   19.32    30.43   19.71
w/o LM         33.32   27.03    28.32   18.76    29.12   19.45    35.41   25.88
w/o adp        27.01   17.68    28.63   18.59    29.89   19.69    30.83   20.89
w/o dis        27.38   18.12    28.96   18.96    30.12   20.43    31.25   21.54
MSAGAFormer    25.78   16.33    27.51   17.42    27.13   17.94    28.89   18.76
Tab.4 Comparison results of RMSE and MAE for different variants on PEMS04 dataset
Model          Tp=15 min        Tp=30 min        Tp=45 min        Tp=60 min
               RMSE    MAE      RMSE    MAE      RMSE    MAE      RMSE    MAE
w/o MH         22.96   16.78    25.61   19.32    29.25   20.45    34.73   26.33
w/o LH         24.08   17.58    24.26   18.06    27.13   19.08    31.93   22.14
w/o LM         29.72   20.47    26.34   19.86    28.71   19.43    27.65   20.42
w/o adp        23.73   16.97    24.58   18.75    27.16   19.14    30.69   20.04
w/o dis        27.38   18.12    28.96   18.96    30.12   20.43    31.25   21.54
MSAGAFormer    21.87   16.05    23.87   17.26    25.13   17.76    26.63   19.14
Tab.5 Comparison results of RMSE and MAE for different variants on PEMS08 dataset
Fig.4 Visual example of test set prediction
[1] NAHELIYA B, REDHU P, KUMAR K. A review on developments in evolutionary computation approaches for road traffic flow prediction[J]. Archives of Computational Methods in Engineering, 2025, 32(3): 1499-1523.
doi: 10.1007/s11831-024-10189-1
[2] GOMES B, COELHO J, AIDOS H. A survey on traffic flow prediction and classification[J]. Intelligent Systems with Applications, 2023, 20: 200268.
doi: 10.1016/j.iswa.2023.200268
[3] ZHANG S, TONG H, XU J, et al. Graph convolutional networks: a comprehensive review[J]. Computational Social Networks, 2019, 6(1): 11.
doi: 10.1186/s40649-019-0069-y
[4] SUN C, LI C, LIN X, et al. Attention-based graph neural networks: a survey[J]. Artificial Intelligence Review, 2023, 56(2): 2263-2310.
[5] LI Y, YU R, SHAHABI C, et al. Diffusion convolutional recurrent neural network: data-driven traffic forecasting[C]// 6th International Conference on Learning Representations. Vancouver: MIT Press, 2018: 1-16.
[6] ZHAO L, SONG Y, ZHANG C, et al. T-GCN: a temporal graph convolutional network for traffic prediction[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(9): 3848-3858.
doi: 10.1109/TITS.2019.2935152
[7] YU B, YIN H, ZHU Z. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting[C]// 27th International Joint Conference on Artificial Intelligence. Stockholm: International Joint Conferences on Artificial Intelligence Organization, 2018: 3634-3640.
[8] ZUO J, ZEITOUNI K, TAHER Y, et al. Graph convolutional networks for traffic forecasting with missing values[J]. Data Mining and Knowledge Discovery, 2023, 37(2): 913-947.
doi: 10.1007/s10618-022-00903-7
[9] SHIN Y, YOON Y. PGCN: progressive graph convolutional networks for spatial-temporal traffic forecasting[J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(7): 7633-7644.
doi: 10.1109/TITS.2024.3349565
[10] YU W, HUANG X, QIU Y, et al. GSTC-Unet: a U-shaped multi-scaled spatiotemporal graph convolutional network with channel self-attention mechanism for traffic flow forecasting[J]. Expert Systems with Applications, 2023, 232: 120724.
doi: 10.1016/j.eswa.2023.120724
[11] XU M, DAI W, LIU C, et al. Spatial-temporal transformer networks for traffic flow forecasting[EB/OL]. (2021-05-29). https://arxiv.org/abs/2001.02908.pdf.
[12] JIANG J, HAN C, ZHAO W X, et al. PDFormer: propagation delay-aware dynamic long-range transformer for traffic flow prediction[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(4): 4365-4373.
doi: 10.1609/aaai.v37i4.25556
[13] WU H, XU J, WANG J, et al. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting[J]. Advances in Neural Information Processing Systems, 2021, 34: 22419-22430.
[14] BAI J, ZHU J, SONG Y, et al. A3T-GCN: attention temporal graph convolutional network for traffic forecasting[J]. ISPRS International Journal of Geo-Information, 2021, 10(7): 485.
doi: 10.3390/ijgi10070485
[15] REN Q, LI Y, LIU Y. Transformer-enhanced periodic temporal convolution network for long short-term traffic flow forecasting[J]. Expert Systems with Applications, 2023, 227: 120203.
doi: 10.1016/j.eswa.2023.120203
[16] GAO M, DU Z, QIN H, et al. Dynamic multi-scale spatial-temporal graph convolutional network for traffic flow prediction[J]. Knowledge-Based Systems, 2024, 305: 112586.
doi: 10.1016/j.knosys.2024.112586
[17] CAI W, LIANG Y, LIU X, et al. MSGNet: learning multi-scale inter-series correlations for multivariate time series forecasting[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2024, 38(10): 11141-11149.
doi: 10.1609/aaai.v38i10.28991
[18] LUO Q, HE S, HAN X, et al. LSTTN: a long-short term transformer-based spatiotemporal neural network for traffic flow forecasting[J]. Knowledge-Based Systems, 2024, 293: 111637.
doi: 10.1016/j.knosys.2024.111637
[19] WANG B, WANG J. ST-MGAT: spatio-temporal multi-head graph attention network for traffic prediction[J]. Physica A: Statistical Mechanics and its Applications, 2022, 603: 127762.
doi: 10.1016/j.physa.2022.127762
[20] WANG Ming, PENG Jian, HUANG Feihu. Multi-time scale spatial-temporal graph neural network for traffic flow prediction[J]. Computer Science, 2022, 49(8): 40-48. (in Chinese)
[21] HAN L, DU B, SUN L, et al. Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting[C]// 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. [S.l.]: ACM, 2021: 547-555.
[22] HUANG L, ZHU F, LI Z. Trendformer: trend adaptive transformer for traffic flow prediction[C]// 5th International Conference on Data Science and Information Technology. Shanghai: IEEE, 2022: 1-6.
[23] ZOU Zhengbiao, LIU Yizhi, LIAO Zhuhua, et al. Attention-based spatio-temporal graph convolutional network for dynamic traffic flow prediction[J]. Journal of Shandong University (Engineering Science), 2024, 54(5): 50-61. (in Chinese)