CoordiQ: Coordinated Q-learning for Electric Vehicle Charging Recommendation

28 Jan 2021 · Carter Blum, Hao Liu, Hui Xiong

Electric vehicle usage has grown rapidly, but charging stations have not always kept pace with demand, so routing vehicles to stations efficiently is critical. Deciding which stations to recommend to drivers is a complex problem: the set of possible recommendations is large, usage patterns are volatile, and the consequences of a recommendation extend over time. Reinforcement learning offers a powerful paradigm for such sequential decision-making problems, but traditional methods may struggle with sample efficiency because of the large number of possible actions. By developing a model that allows complex representations of actions, we improve outcomes for users of our system by over 30% compared to existing baselines in simulation. If deployed widely, these improved recommendations could save over 4 million person-hours of waiting and driving globally each year.
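The abstract does not spell out the CoordiQ architecture, but the core idea of Q-learning with learned action representations can be sketched. The sketch below is an illustrative assumption rather than the authors' implementation: it scores each candidate charging station by embedding the driver's state and the station's features and combining them into a Q-value, so the action set can be large and can vary per request. All feature names, layer sizes, and the PyTorch framing are assumptions.

# Illustrative sketch (not the paper's published code): a Q-network that scores
# each candidate charging station by combining a state embedding (vehicle and
# time features) with a learned action embedding (station features).
# Feature names and layer sizes below are assumptions for illustration.

import torch
import torch.nn as nn


class StationQNetwork(nn.Module):
    def __init__(self, state_dim: int, station_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Encodes the driver's context (location, battery level, time of day, ...).
        self.state_encoder = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU()
        )
        # Encodes each candidate station (distance, queue length, charger speed, ...).
        self.station_encoder = nn.Sequential(
            nn.Linear(station_dim, hidden_dim), nn.ReLU()
        )
        # Scores each (state, station) pair with a single Q-value.
        self.q_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, state: torch.Tensor, stations: torch.Tensor) -> torch.Tensor:
        # state:    (batch, state_dim)
        # stations: (batch, num_stations, station_dim)
        num_stations = stations.size(1)
        s = self.state_encoder(state)                    # (batch, hidden)
        s = s.unsqueeze(1).expand(-1, num_stations, -1)  # (batch, n, hidden)
        a = self.station_encoder(stations)               # (batch, n, hidden)
        q = self.q_head(torch.cat([s, a], dim=-1))       # (batch, n, 1)
        return q.squeeze(-1)                             # (batch, n)


# Greedy recommendation: pick the candidate station with the highest Q-value.
net = StationQNetwork(state_dim=8, station_dim=5)
state = torch.randn(1, 8)
candidates = torch.randn(1, 12, 5)  # 12 nearby stations for this request
best_station = net(state, candidates).argmax(dim=-1)

Because each action is represented by its own feature vector rather than a fixed output index, the same network can score any number of stations, which is one plausible way to handle the large, variable action space the abstract describes.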
