Search Results for author: David Isele

Found 20 papers, 4 papers with code

Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation

no code implementations · 27 Nov 2023 · Jiachen Li, David Isele, Kanghoon Lee, Jinkyoo Park, Kikuo Fujimura, Mykel J. Kochenderfer

Moreover, we propose an interactivity estimation mechanism based on the difference between predicted trajectories in these two situations, which indicates the degree of influence of the ego agent on other agents.

Autonomous Navigation · counterfactual · +4
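The interactivity estimate described above compares predicted trajectories for another agent with and without the ego vehicle present. A minimal sketch of that idea (the function name and trajectory shapes are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def interactivity_score(pred_with_ego: np.ndarray,
                        pred_without_ego: np.ndarray) -> float:
    """Estimate how strongly the ego agent influences another agent.

    Both inputs are predicted trajectories of shape (T, 2): T future
    timesteps of (x, y) positions, predicted with the ego present and
    in the counterfactual rollout without it.
    """
    # Mean Euclidean displacement between the two predicted futures:
    # a large gap means the ego's presence changes the other agent's plan.
    return float(np.linalg.norm(pred_with_ego - pred_without_ego, axis=-1).mean())
```

An agent whose prediction is unchanged by removing the ego scores zero, i.e. the ego has no estimated influence on it.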

Robust Driving Policy Learning with Guided Meta Reinforcement Learning

no code implementations · 19 Jul 2023 · Kanghoon Lee, Jiachen Li, David Isele, Jinkyoo Park, Kikuo Fujimura, Mykel J. Kochenderfer

Although deep reinforcement learning (DRL) has shown promising results for autonomous navigation in interactive traffic scenarios, existing work typically adopts a fixed behavior policy to control social vehicles in the training environment.

Autonomous Navigation · Meta Reinforcement Learning · +1

Active Uncertainty Reduction for Safe and Efficient Interaction Planning: A Shielding-Aware Dual Control Approach

1 code implementation · 1 Feb 2023 · Haimin Hu, David Isele, Sangjae Bae, Jaime F. Fisac

To ensure the safe operation of the interacting agents, we use a runtime safety filter (also referred to as a "shielding" scheme), which overrides the robot's dual control policy with a safety fallback strategy when a safety-critical event is imminent.

Autonomous Vehicles · Model Predictive Control · +1
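The shielding scheme above is a runtime safety filter: the learned dual-control action passes through unless a safety-critical event is imminent, in which case a fallback strategy overrides it. A sketch under assumed names (the `Shield` class and gap-based monitor are illustrative stand-ins for the paper's reachability-based machinery):

```python
from dataclasses import dataclass

@dataclass
class Shield:
    """Minimal runtime safety filter ('shielding') sketch.

    `predicted_min_gap` stands in for whatever the real monitor
    computes (e.g. a predicted constraint violation within the
    planning horizon).
    """
    safety_margin: float  # minimum allowed gap to another agent, in meters

    def is_critical(self, predicted_min_gap: float) -> bool:
        # A safety-critical event is imminent if the predicted gap
        # falls below the required margin.
        return predicted_min_gap < self.safety_margin

    def filter(self, nominal_action, fallback_action, predicted_min_gap: float):
        if self.is_critical(predicted_min_gap):
            return fallback_action  # safety fallback overrides the policy
        return nominal_action       # dual-control action passes through
```

The design point is that the shield only intervenes at the boundary of safety, so the nominal policy keeps its freedom to probe and reduce uncertainty elsewhere.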

Recursive Reasoning Graph for Multi-Agent Reinforcement Learning

no code implementations · 6 Mar 2022 · Xiaobai Ma, David Isele, Jayesh K. Gupta, Kikuo Fujimura, Mykel J. Kochenderfer

Multi-agent reinforcement learning (MARL) provides an efficient way to simultaneously learn policies for multiple agents interacting with each other.

Multi-agent Reinforcement Learning · reinforcement-learning · +1

Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic

no code implementations · 25 May 2020 · Maxime Bouton, Alireza Nakhaei, David Isele, Kikuo Fujimura, Mykel J. Kochenderfer

This approach exposes the agent to a broad variety of behaviors during training, which promotes learning policies that are robust to model discrepancies.

Autonomous Vehicles · reinforcement-learning · +1

Interactive Decision Making for Autonomous Vehicles in Dense Traffic

no code implementations · 27 Sep 2019 · David Isele

Dense urban traffic environments can produce situations where accurate prediction and dynamic models are insufficient for successful autonomous vehicle motion planning.

Autonomous Vehicles · Decision Making · +1

Interaction-Aware Multi-Agent Reinforcement Learning for Mobile Agents with Individual Goals

no code implementations · 27 Sep 2019 · Anahita Mohseni-Kabir, David Isele, Kikuo Fujimura

We investigate the problem of multi-agent reinforcement learning, focusing on decentralized learning in non-stationary domains for mobile robot navigation.

Autonomous Driving · Multi-agent Reinforcement Learning · +3

Safe Reinforcement Learning on Autonomous Vehicles

no code implementations · 27 Sep 2019 · David Isele, Alireza Nakhaei, Kikuo Fujimura

There have been numerous advances in reinforcement learning, but the typically unconstrained exploration of the learning process prevents the adoption of these methods in many safety critical applications.

Autonomous Vehicles · reinforcement-learning · +2

Uncertainty-Aware Data Aggregation for Deep Imitation Learning

no code implementations · 7 May 2019 · Yuchen Cui, David Isele, Scott Niekum, Kikuo Fujimura

Our analysis shows that UAIL outperforms existing data aggregation algorithms on a series of benchmark tasks.

Autonomous Driving · Imitation Learning

CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning

1 code implementation · ICLR 2020 · Jiachen Yang, Alireza Nakhaei, David Isele, Kikuo Fujimura, Hongyuan Zha

To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment.

Autonomous Vehicles · Efficient Exploration · +3

Selective Experience Replay for Lifelong Learning

1 code implementation · 28 Feb 2018 · David Isele, Akansel Cosgun

Deep reinforcement learning has emerged as a powerful tool for a variety of learning tasks, however deep nets typically exhibit forgetting when learning multiple tasks in sequence.
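Selective experience replay counters this forgetting by keeping a fixed-size long-term buffer of past experiences alongside the usual FIFO replay buffer, so earlier tasks keep appearing in training batches. A sketch using reservoir sampling as one simple selection strategy (the class name is assumed; the paper compares several selection criteria, of which distribution matching is only one):

```python
import random

class SelectiveReplayBuffer:
    """Sketch of a long-term episodic store for lifelong RL."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.storage = []   # the retained experiences
        self.seen = 0       # total experiences observed so far
        self.rng = random.Random(seed)

    def add(self, experience) -> None:
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(experience)
        else:
            # Reservoir sampling: replace a random slot with probability
            # capacity / seen, keeping the buffer a uniform sample of
            # the entire experience stream across all tasks.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = experience

    def sample(self, batch_size: int):
        return self.rng.sample(self.storage, min(batch_size, len(self.storage)))
```

Because the buffer approximates a uniform sample over everything seen, old tasks are never fully evicted the way they are from a FIFO buffer of the same size.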

Transferring Autonomous Driving Knowledge on Simulated and Real Intersections

no code implementations · 30 Nov 2017 · David Isele, Akansel Cosgun

We view intersection handling on autonomous vehicles as a reinforcement learning problem, and study its behavior in a transfer learning setting.

Autonomous Driving · reinforcement-learning · +2

Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer

no code implementations · 10 Oct 2017 · David Isele, Mohammad Rostami, Eric Eaton

Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer.

BIG-bench Machine Learning · Dictionary Learning · +2

Navigating Occluded Intersections with Autonomous Vehicles using Deep Reinforcement Learning

no code implementations · 2 May 2017 · David Isele, Reza Rahimi, Akansel Cosgun, Kaushik Subramanian, Kikuo Fujimura

Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers.

Autonomous Vehicles · Navigate · +2

Analyzing Knowledge Transfer in Deep Q-Networks for Autonomously Handling Multiple Intersections

no code implementations · 2 May 2017 · David Isele, Akansel Cosgun, Kikuo Fujimura

We analyze how the knowledge to autonomously handle one type of intersection, represented as a Deep Q-Network, translates to other types of intersections (tasks).

Transfer Learning
