A Deep Reinforcement Learning Framework for Eco-driving in Connected and Automated Hybrid Electric Vehicles

13 Jan 2021 · Zhaoxuan Zhu, Shobhit Gupta, Abhishek Gupta, Marcello Canova

Connected and Automated Vehicles (CAVs), in particular those with multiple power sources, have the potential to significantly reduce fuel consumption and travel time in real-world driving conditions. Specifically, the Eco-driving problem seeks to design optimal speed and power usage profiles, based on look-ahead information from connectivity and advanced mapping features, that minimize fuel consumption over a given itinerary. In this work, the Eco-driving problem is formulated as a Partially Observable Markov Decision Process (POMDP), which is then solved with a state-of-the-art Deep Reinforcement Learning (DRL) actor-critic algorithm, Proximal Policy Optimization (PPO). An Eco-driving simulation environment is developed for training and evaluation. To benchmark the performance of the DRL controller, three references are presented: a baseline controller representing the human driver, a trajectory optimization algorithm, and the wait-and-see deterministic optimal solution. With minimal onboard computational requirements and comparable travel time, the DRL controller reduces fuel consumption by more than 17% relative to the baseline controller by modulating vehicle velocity over the route and performing energy-efficient approaches and departures at signalized intersections, outperforming the more computationally demanding trajectory optimization method.
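To make the setup concrete, the sketch below shows how such an Eco-driving decision process could be framed as an episodic environment and trained with PPO. It assumes a Gymnasium-style interface and the Stable-Baselines3 PPO implementation; the state variables, placeholder dynamics, fuel proxy, and reward weights are illustrative assumptions, not the paper's actual vehicle model, environment, or reward function.

```python
# Minimal sketch: an Eco-driving-style environment trained with PPO.
# Assumes Gymnasium + Stable-Baselines3 (v2); all dynamics and reward
# terms below are illustrative placeholders, not the paper's models.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class EcoDrivingEnv(gym.Env):
    """Toy longitudinal-driving task: pick an acceleration command to
    trade off a fuel-use proxy against trip time over a fixed route."""

    def __init__(self, route_length=1000.0, dt=1.0):
        self.route_length = route_length  # route length [m]
        self.dt = dt                      # time step [s]
        # Observation: [velocity, battery SOC, distance remaining]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([40.0, 1.0, route_length], dtype=np.float32),
        )
        # Action: normalized acceleration command in [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.v, self.soc, self.x, self.t = 0.0, 0.8, 0.0, 0.0
        return self._obs(), {}

    def step(self, action):
        accel = 2.0 * float(action[0])  # scale to +/- 2 m/s^2
        self.v = min(40.0, max(0.0, self.v + accel * self.dt))
        self.x += self.v * self.dt
        self.t += self.dt
        # Placeholder fuel proxy: positive tractive power costs fuel;
        # SOC is held constant here for brevity.
        fuel = max(0.0, accel) * self.v * self.dt
        reward = -fuel - 0.1 * self.dt  # penalize fuel use and trip time
        terminated = self.x >= self.route_length
        truncated = self.t >= 600.0     # cap episode length
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array(
            [self.v, self.soc, max(0.0, self.route_length - self.x)],
            dtype=np.float32,
        )


if __name__ == "__main__":
    model = PPO("MlpPolicy", EcoDrivingEnv(), verbose=1)
    model.learn(total_timesteps=50_000)
```

In the paper's formulation the observation would additionally carry the look-ahead information from connectivity and mapping (e.g., route and signal data ahead of the vehicle) that makes the problem partially observable; the three-state observation above is a simplification for brevity.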
