TY - JOUR
T1 - Real-Time Adaptive Traffic Signal Control in a Connected and Automated Vehicle Environment
T2 - Optimisation of Signal Planning with Reinforcement Learning under Vehicle Speed Guidance
AU - Maadi, Saeed
AU - Stein, Sebastian
AU - Hong, Jinhyun
AU - Murray-Smith, Roderick
N1 - Publisher Copyright:
© 2022 by the authors.
PY - 2022/10
Y1 - 2022/10
AB - Adaptive traffic signal control (ATSC) is an effective method to reduce traffic congestion in modern urban areas. Many studies have adopted various approaches to adjust traffic signal plans according to real-time traffic in response to demand fluctuations and thereby improve urban network performance (e.g., minimise delay). Recently, learning-based methods such as reinforcement learning (RL) have achieved promising results in signal plan optimisation. However, adopting these self-learning techniques in future traffic environments in the presence of connected and automated vehicles (CAVs) remains largely an open challenge. This study develops a real-time RL-based adaptive traffic signal control method that optimises the signal plan to minimise total queue length while allowing CAVs to adjust their speeds, based on a fixed timing strategy, to decrease total stop delays. The highlight of this work is the combination of a speed guidance system with reinforcement learning-based traffic signal control. Two different performance measures are implemented to minimise total queue length and total stop delays. Results indicate that the proposed method outperforms a fixed timing plan (with optimal speed advisory in a CAV environment) and traditional actuated control in terms of average vehicle stop delay and queue length, particularly under saturated and oversaturated conditions.
KW - adaptive traffic signal control
KW - connected and automated vehicles
KW - microscopic traffic simulation
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85139812625&partnerID=8YFLogxK
U2 - 10.3390/s22197501
DO - 10.3390/s22197501
M3 - Article
C2 - 36236600
AN - SCOPUS:85139812625
SN - 1424-8220
VL - 22
JO - Sensors
JF - Sensors
IS - 19
M1 - 7501
ER -