Search Results for author: Rowan McAllister

Found 19 papers, 10 papers with code

In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States

no code implementations 27 Jan 2023 Fernando Castañeda, Haruki Nishimura, Rowan McAllister, Koushil Sreenath, Adrien Gaidon

Learning-based control approaches have shown great promise in performing complex tasks directly from high-dimensional perception data for real robotic systems.

RAP: Risk-Aware Prediction for Robust Planning

1 code implementation 4 Oct 2022 Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister, Adrien Gaidon

Robust planning in interactive scenarios requires predicting the uncertain future to make risk-aware decisions.

Control-Aware Prediction Objectives for Autonomous Driving

no code implementations 28 Apr 2022 Rowan McAllister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon

Autonomous vehicle software is typically structured as a modular pipeline of individual components (e.g., perception, prediction, and planning) to help separate concerns into interpretable sub-tasks.

Autonomous Driving, Trajectory Prediction

Dynamics-Aware Comparison of Learned Reward Functions

no code implementations ICLR 2022 Blake Wulfe, Ashwin Balakrishna, Logan Ellis, Jean Mercat, Rowan McAllister, Adrien Gaidon

The ability to learn reward functions plays an important role in enabling the deployment of intelligent agents in the real world.

Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models

1 code implementation 21 Apr 2021 Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine

Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents.

Outcome-Driven Reinforcement Learning via Variational Inference

no code implementations NeurIPS 2021 Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

Reinforcement Learning +2

Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?

2 code implementations ICML 2020 Angelos Filos, Panagiotis Tigas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, Yarin Gal

Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions.

Autonomous Vehicles, Out of Distribution (OOD) Detection

Learning Invariant Representations for Reinforcement Learning without Reconstruction

2 code implementations 18 Jun 2020 Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

Causal Inference, Reinforcement Learning +3
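The abstract above describes learning state representations from images without pixel reconstruction. As one illustrative (and assumed) instance of a reconstruction-free objective, the sketch below penalizes mismatch between latent distances and differences in rewards plus predicted next latents; the function name and inputs are hypothetical, not this paper's exact loss.

```python
import numpy as np

def reconstruction_free_representation_loss(z_i, z_j, r_i, r_j,
                                             next_z_i, next_z_j,
                                             discount=0.99):
    """Illustrative sketch: make distances between latent codes track
    differences in reward plus (discounted) differences between predicted
    next latents, so task-irrelevant pixel detail can be ignored."""
    latent_dist = np.linalg.norm(z_i - z_j)
    target = abs(r_i - r_j) + discount * np.linalg.norm(next_z_i - next_z_j)
    return (latent_dist - target) ** 2
```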

Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads

2 code implementations 23 Apr 2020 Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine

Our experiments demonstrate that our online adaptation approach outperforms non-adaptive methods on a series of challenging suspended payload transportation tasks.

Meta-Learning, Meta Reinforcement Learning +3

Safety Augmented Value Estimation from Demonstrations (SAVED): Safe Deep Model-Based RL for Sparse Cost Robotic Tasks

1 code implementation 31 May 2019 Brijen Thananjeyan, Ashwin Balakrishna, Ugo Rosolia, Felix Li, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine, Francesco Borrelli, Ken Goldberg

Reinforcement learning (RL) for robotics is challenging due to the difficulty in hand-engineering a dense cost function, which can lead to unintended behavior, and dynamical uncertainty, which makes exploration and constraint satisfaction challenging.

Model-based Reinforcement Learning, Reinforcement Learning +2

PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings

2 code implementations ICCV 2019 Nicholas Rhinehart, Rowan McAllister, Kris Kitani, Sergey Levine

For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information.

Autonomous Vehicles

Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty

no code implementations 27 Dec 2018 Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine

Our method estimates an uncertainty measure about the model's prediction, taking into account an explicit (generative) model of the observation distribution to handle out-of-distribution inputs.
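The snippet above combines a predictive model's uncertainty with an explicit generative model of observations. A minimal sketch of that gating idea follows, assuming hypothetical `gen_model` and `task_model` interfaces and an illustrative threshold; it is not the paper's implementation.

```python
import numpy as np

def task_aware_uncertainty(obs, task_model, gen_model, ood_threshold=50.0):
    """Sketch: if the generative model assigns the observation a poor fit,
    report inflated predictive variance so a planner acts cautiously."""
    nll = gen_model.negative_log_likelihood(obs)  # hypothetical API
    mean, var = task_model.predict(obs)           # hypothetical API
    if nll > ood_threshold:
        # Observation looks unlike the training data: widen the uncertainty.
        var = np.maximum(var, 1.0)
    return mean, var
```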

Unsupervised Exploration with Deep Model-Based Reinforcement Learning

no code implementations 27 Sep 2018 Kurtland Chua, Rowan McAllister, Roberto Calandra, Sergey Levine

We show that both challenges can be addressed by representing model-uncertainty, which can both guide exploration in the unsupervised phase and ensure that the errors in the model are not exploited by the planner in the goal-directed phase.

Model-based Reinforcement Learning, Reinforcement Learning +2
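The abstract above attributes both unsupervised exploration and safe planning to an explicit representation of model uncertainty. One common concrete form of this idea, shown here as an assumed illustration rather than the paper's exact method, is to use disagreement across an ensemble of learned dynamics models as an intrinsic exploration signal.

```python
import numpy as np

def ensemble_disagreement(models, state, action):
    """Variance of next-state predictions across an ensemble of dynamics
    models; high disagreement flags under-explored regions (illustrative,
    `predict_next_state` is a hypothetical model method)."""
    preds = np.stack([m.predict_next_state(state, action) for m in models])
    return preds.var(axis=0).mean()

def intrinsic_reward(models, state, action, scale=1.0):
    # Reward visiting state-action pairs the models disagree on.
    return scale * ensemble_disagreement(models, state, action)
```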

Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models

10 code implementations NeurIPS 2018 Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine

Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance.

Deep Reinforcement Learning, Model-based Reinforcement Learning +2
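Since this entry's title names probabilistic dynamics models, a compact sketch of the general recipe follows: learn an ensemble of stochastic dynamics models and plan by scoring sampled action sequences through them. The class and function names below are illustrative assumptions; the listed code implementations contain the actual method.

```python
import numpy as np

class GaussianDynamicsModel:
    """Toy stand-in for one member of a probabilistic ensemble: predicts a
    Gaussian over the next state (here a fixed random linear model)."""
    def __init__(self, state_dim, action_dim, rng):
        self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))
        self.log_std = -2.0

    def sample_next(self, state, action, rng):
        mean = state + self.A @ state + self.B @ action
        return mean + np.exp(self.log_std) * rng.normal(size=state.shape)

def random_shooting_plan(ensemble, state, horizon, n_candidates,
                         action_dim, reward_fn, rng):
    """Score random action sequences by rolling them out through randomly
    chosen ensemble members (a simplified trajectory-sampling planner)."""
    best_return, best_actions = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, ret = state.copy(), 0.0
        for a in actions:
            model = ensemble[rng.integers(len(ensemble))]
            s = model.sample_next(s, a, rng)
            ret += reward_fn(s, a)
        if ret > best_return:
            best_return, best_actions = ret, actions
    return best_actions[0]
```

Executing only the first action and then replanning at each step gives the model-predictive-control loop this family of methods typically uses.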

Data-Efficient Reinforcement Learning in Continuous State-Action Gaussian-POMDPs

no code implementations NeurIPS 2017 Rowan McAllister, Carl Edward Rasmussen

This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm.

Reinforcement Learning +1
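The snippet above contrasts the proposed approach with post-hoc filtering of a policy trained by unfiltered PILCO. Purely as an illustration of acting on a filtered belief rather than on a raw noisy observation (not the paper's Gaussian-process formulation), a one-step Kalman-style correction looks like this:

```python
import numpy as np

def kalman_update(belief_mean, belief_var, obs, obs_var):
    """Fuse the current belief with a noisy observation, per dimension
    (illustrative; diagonal covariances only)."""
    gain = belief_var / (belief_var + obs_var)
    new_mean = belief_mean + gain * (obs - belief_mean)
    new_var = (1.0 - gain) * belief_var
    return new_mean, new_var

def act_on_belief(policy, belief_mean, belief_var, obs, obs_var):
    # The policy sees the filtered belief mean, not the raw noisy observation.
    mean, var = kalman_update(np.asarray(belief_mean), np.asarray(belief_var),
                              np.asarray(obs), np.asarray(obs_var))
    return policy(mean), mean, var
```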

Data-Efficient Reinforcement Learning in Continuous-State POMDPs

no code implementations 8 Feb 2016 Rowan McAllister, Carl Edward Rasmussen

We present a data-efficient reinforcement learning algorithm resistant to observation noise.

Reinforcement Learning +1
