Search Results for author: Chenran Li

Found 5 papers, 0 papers with code

Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization

no code implementations • 11 Oct 2023 • Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan

We observe that, generally, a more diverse set of co-play agents during training enhances the generalization performance of the ego agent; however, this improvement varies across distinct scenarios and environments.

Multi-agent Reinforcement Learning

Residual Q-Learning: Offline and Online Policy Customization without Value

no code implementations • NeurIPS 2023 • Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, Wei Zhan

Specifically, we formulate the customization problem as a Markov Decision Process (MDP) with a reward function that combines 1) the inherent reward of the demonstration and 2) the add-on reward specified by the downstream task.
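The combined-reward MDP described in the abstract can be sketched with a standard tabular Q-learning update on the summed reward. This is a minimal illustration, not the paper's residual Q-learning algorithm itself; the names `r_prior`, `r_add`, and the trade-off weight `omega` are assumptions for the sketch, not notation from the paper:

```python
import numpy as np

def q_update(Q, s, a, r_prior, r_add, s_next,
             omega=1.0, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on the combined reward.

    r_prior : inherent reward of the demonstrated behavior
    r_add   : add-on reward specified by the downstream task
    omega   : weight trading off the two objectives
    """
    r_full = r_prior + omega * r_add          # combined reward of the MDP
    td_target = r_full + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])  # standard TD update
    return Q

# Tiny 2-state, 2-action example starting from a zero Q-table.
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r_prior=1.0, r_add=0.5, s_next=1)
```

Note the paper's stated contribution is doing such customization *without* access to the inherent reward; this sketch only illustrates the combined-reward objective being customized.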

Imitation Learning • Q-Learning

Editing Driver Character: Socially-Controllable Behavior Generation for Interactive Traffic Simulation

no code implementations • 24 Mar 2023 • Wei-Jer Chang, Chen Tang, Chenran Li, Yeping Hu, Masayoshi Tomizuka, Wei Zhan

To ensure that autonomous vehicles take safe and efficient maneuvers in different interactive traffic scenarios, we should be able to evaluate them against reactive agents with different social characteristics in the simulation environment.

Autonomous Driving

Analyzing and Enhancing Closed-loop Stability in Reactive Simulation

no code implementations • 9 Aug 2022 • Wei-Jer Chang, Yeping Hu, Chenran Li, Wei Zhan, Masayoshi Tomizuka

In this paper, we aim to provide a thorough stability analysis of reactive simulation and propose a solution to enhance its stability.
