Anti-Jerk On-Ramp Merging Using Deep Reinforcement Learning

15 May 2020  ·  Yuan Lin, John McPhee, Nasser L. Azad ·

Deep Reinforcement Learning (DRL) is used here for decentralized decision-making and longitudinal control for high-speed on-ramp merging. The DRL environment state includes the states of five vehicles: the merging vehicle, along with the two preceding and two following vehicles when the merging vehicle is on, or is projected onto, the main road. The control action is the acceleration of the merging vehicle. Deep Deterministic Policy Gradient (DDPG) is used as the DRL algorithm, trained to output continuous control actions. We investigated the trade-off between collision avoidance for safety and jerk minimization for passenger comfort in the multi-objective reward function by obtaining the Pareto front. We found that, with a small jerk penalty in the multi-objective reward function, vehicle jerk could be reduced by 73% compared with no jerk penalty while the collision rate was maintained at zero. Regardless of the jerk penalty, the merging vehicle exhibited decision-making strategies such as merging ahead of or behind a main-road vehicle.
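The abstract describes a five-vehicle environment state, an acceleration action, and a multi-objective reward combining a collision term with a jerk penalty. A minimal sketch of those two ingredients is below; the function names, constants, and state layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def merge_reward(collided, jerk, w_jerk=0.1, collision_penalty=-100.0):
    """Multi-objective reward sketch: safety term plus comfort term.

    A large negative penalty discourages collisions; a small weighted
    jerk penalty (w_jerk, hypothetical value) encourages smooth
    acceleration. Sweeping w_jerk traces out a Pareto front between
    the two objectives, as the abstract describes.
    """
    r = collision_penalty if collided else 0.0
    r -= w_jerk * abs(jerk)  # jerk = rate of change of acceleration
    return r

def build_state(ego, preceding, following):
    """Stack the merging vehicle with its two preceding and two
    following main-road vehicles into one observation vector.

    Each vehicle is assumed to contribute position and speed here;
    the paper's exact per-vehicle features may differ.
    """
    vehicles = [ego] + preceding[:2] + following[:2]
    return np.concatenate([[v["pos"], v["vel"]] for v in vehicles])
```

With this shape of reward, a DDPG agent outputting a continuous acceleration can be trained directly; setting `w_jerk=0` recovers the no-jerk-penalty baseline the paper compares against.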

