1. School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
2. Jiaxing Key Laboratory of Smart Transportations, Jiaxing University, Jiaxing 314001, China
3. State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
4. School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
A traffic signal control method for a single intersection based on the asynchronous advantage actor-critic (A3C) algorithm was proposed to address the high cost of model learning and decision making in existing deep reinforcement learning based traffic signal control methods. Vehicle weight gain networks were constructed at the input side of the model from two different dimensions, namely the intersection level and the lane level, to preprocess the collected vehicle state information. A new reward mechanism was designed, and an A3C algorithm integrating the vehicle weight gain networks was proposed. Simulation results obtained with the microscopic traffic simulator SUMO (simulation of urban mobility) show that, compared with traditional traffic signal control methods and benchmark reinforcement learning methods, the proposed method achieves better traffic signal control performance under low, medium and high traffic flow conditions.
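As a concrete illustration of the idea described above, the following minimal sketch (not the authors' published code) places a learnable lane-level weight gain layer in front of an actor-critic network, so that raw lane observations are re-weighted before reaching the policy and value heads. The framework (PyTorch), the tensor shapes, the per-lane features and the number of phases are assumptions made only for illustration.

```python
# Minimal sketch of a lane-level vehicle weight gain layer feeding an
# actor-critic network. Shapes, features and framework are assumptions.
import torch
import torch.nn as nn

N_LANES = 8          # assumed: incoming lanes at the single intersection
FEATS_PER_LANE = 3   # assumed: e.g. queue length, waiting time, vehicle count
N_PHASES = 4         # assumed: signal phases treated as discrete actions


class LaneWeightGain(nn.Module):
    """Learnable gains that re-weight each lane's raw features."""

    def __init__(self, n_lanes: int, feats: int):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(n_lanes, feats))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_lanes, feats) -> element-wise re-weighted features
        return x * self.gain


class ActorCritic(nn.Module):
    """Shared weight-gain preprocessing with separate actor and critic heads."""

    def __init__(self):
        super().__init__()
        self.gain = LaneWeightGain(N_LANES, FEATS_PER_LANE)
        in_dim = N_LANES * FEATS_PER_LANE
        self.actor = nn.Sequential(nn.Linear(in_dim, 200), nn.ReLU(),
                                   nn.Linear(200, N_PHASES))
        self.critic = nn.Sequential(nn.Linear(in_dim, 100), nn.ReLU(),
                                    nn.Linear(100, 1))

    def forward(self, lane_obs: torch.Tensor):
        x = self.gain(lane_obs).flatten(1)
        return torch.softmax(self.actor(x), dim=-1), self.critic(x)


if __name__ == "__main__":
    obs = torch.rand(1, N_LANES, FEATS_PER_LANE)   # dummy lane observation
    phase_probs, state_value = ActorCritic()(obs)
    print(phase_probs.shape, state_value.shape)    # (1, 4), (1, 1)
```

The hidden-layer sizes of 200 (actor) and 100 (critic) follow the hyper-parameters reported in Tab. 2 below.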
Baolin YE, Ruitao SUN, Weimin WU, Bin CHEN, Qing YAO. Traffic signal control method based on asynchronous advantage actor-critic. Journal of Zhejiang University (Engineering Science), 2024, 58(8): 1671-1680.
Fig. 7 Framework of A3C-based traffic signal control fused with vehicle weight gain network
Tab. 2 Parameter settings of the deep reinforcement learning models in the comparative experiment

Compared algorithms | Parameter | Value
A3C with fused vehicle weight gain networks / conventional A3C | Actor network learning rate | 0.00002
 | Critic network learning rate | 0.0002
 | Number of Actor network neurons | 200
 | Number of Critic network neurons | 100
 | Discount factor | 0.9
 | Training steps | 200
 | Training time/s | 7200
DQN | Learning rate | 0.00002
 | Number of neurons | 200
 | Discount factor | 0.9
 | Training steps | 200
 | Training time/s | 7200
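As a rough illustration of how these settings are typically wired up, the sketch below instantiates an actor (200 hidden neurons) and a critic (100 hidden neurons) with the two learning rates and the discount factor from Tab. 2. The Adam optimizer, the observation size and the n-step return helper are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: mapping the Tab. 2 hyper-parameters onto separate actor/critic
# optimizers and the discounted return used by each A3C worker.
import torch
import torch.nn as nn

STATE_DIM, N_PHASES = 24, 4        # assumed observation / action sizes
ACTOR_LR, CRITIC_LR = 2e-5, 2e-4   # Actor / Critic learning rates (Tab. 2)
GAMMA = 0.9                        # discount factor (Tab. 2)

actor = nn.Sequential(             # 200 hidden neurons (Tab. 2)
    nn.Linear(STATE_DIM, 200), nn.ReLU(), nn.Linear(200, N_PHASES)
)
critic = nn.Sequential(            # 100 hidden neurons (Tab. 2)
    nn.Linear(STATE_DIM, 100), nn.ReLU(), nn.Linear(100, 1)
)
actor_opt = torch.optim.Adam(actor.parameters(), lr=ACTOR_LR)    # optimizer choice assumed
critic_opt = torch.optim.Adam(critic.parameters(), lr=CRITIC_LR)


def n_step_returns(rewards, bootstrap_value, gamma=GAMMA):
    """Discounted n-step returns each A3C worker computes before a global update."""
    g, out = bootstrap_value, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))


print(n_step_returns([1.0, 0.5, 0.2], 0.0))   # [1.612, 0.68, 0.2]
```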
Tab. 3 Parameter settings of the traffic simulation environment

Parameter | Value
Lane length/m | 100
Average vehicle length/m | 5
Minimum vehicle gap/m | 2.5
Maximum vehicle speed/(m·s⁻¹) | 13.89
Maximum vehicle acceleration/(m·s⁻²) | 2.6
Maximum vehicle deceleration/(m·s⁻²) | 4.6
Yellow light duration/s | 3
Green duration when the phase is kept/s | 5
Minimum green time of a phase/s | 15
Probability of going straight | 0.5
Probability of turning left | 0.3
Probability of turning right | 0.2
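The vehicle-level entries in Tab. 3 map directly onto a SUMO vType, and the turning probabilities onto a route distribution. The snippet below is a minimal sketch of such a route file; the file name, edge ids and route ids are illustrative assumptions (and vehicle flows are omitted), since the paper does not publish its SUMO configuration.

```python
# Sketch only: encoding the Tab. 3 vehicle parameters as a SUMO vType and the
# turning probabilities as a route distribution in a .rou.xml file.
VTYPE = (
    '<vType id="car" length="5" minGap="2.5" '
    'maxSpeed="13.89" accel="2.6" decel="4.6"/>'
)

ROUTE_DIST = """
  <routeDistribution id="from_north">
    <route id="n_straight" edges="n_in s_out" probability="0.5"/>
    <route id="n_left"     edges="n_in e_out" probability="0.3"/>
    <route id="n_right"    edges="n_in w_out" probability="0.2"/>
  </routeDistribution>
"""

with open("single_intersection.rou.xml", "w") as f:
    f.write("<routes>\n  " + VTYPE + ROUTE_DIST + "</routes>\n")
```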
Fig. 8 Cumulative round reward
Fig. 9 Average waiting time of vehicles
Fig. 10 Average queue length of vehicles
Fig. 11 Average number of vehicle stops
Tab. 4 Test results of different traffic signal control methods at an isolated intersection
(W: average vehicle waiting time; L: average queue length; P: average number of stops)

Control method | Low flow (W/s, L/m, P) | Medium flow (W/s, L/m, P) | High flow (W/s, L/m, P)
Fixed-time | 13.85, 5.83, 0.77 | 15.68, 6.28, 0.91 | 17.31, 8.62, 1.01
Adaptive | 5.40, 3.73, 0.55 | 7.85, 4.00, 0.57 | 13.34, 4.61, 0.63
DQN | 6.14, 3.54, 0.47 | 7.38, 3.81, 0.55 | 8.32, 4.05, 0.59
A3C | 5.88, 3.43, 0.46 | 6.90, 3.73, 0.49 | 7.45, 3.83, 0.56
LVWG-A3C | 5.13, 3.28, 0.43 | 6.11, 3.52, 0.46 | 7.22, 3.76, 0.52
IVWG-A3C | 4.72, 3.18, 0.40 | 4.92, 3.26, 0.43 | 6.31, 3.50, 0.47
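For readers who want the relative gains rather than the raw figures, the short script below (a convenience sketch, not part of the paper) computes the percentage reduction achieved by IVWG-A3C over fixed-time control from the values copied out of Tab. 4.

```python
# Convenience sketch: percentage reduction of IVWG-A3C relative to fixed-time
# control, using the W, L and P values copied from Tab. 4.
fixed = {"low": (13.85, 5.83, 0.77), "medium": (15.68, 6.28, 0.91), "high": (17.31, 8.62, 1.01)}
ivwg  = {"low": (4.72, 3.18, 0.40),  "medium": (4.92, 3.26, 0.43),  "high": (6.31, 3.50, 0.47)}

for flow in ("low", "medium", "high"):
    cut = [100 * (f - i) / f for f, i in zip(fixed[flow], ivwg[flow])]
    print(flow, "reduction in W/L/P:", ", ".join(f"{c:.1f}%" for c in cut))
```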