no code implementations • 31 Jan 2024 • Jiezhi Yang, Khushi Desai, Charles Packer, Harshil Bhatia, Nicholas Rhinehart, Rowan McAllister, Joseph Gonzalez
We propose CARFF, a method for predicting future 3D scenes given past observations.
no code implementations • 27 Jan 2023 • Fernando Castañeda, Haruki Nishimura, Rowan McAllister, Koushil Sreenath, Adrien Gaidon
Learning-based control approaches have shown great promise in performing complex tasks directly from high-dimensional perception data for real robotic systems.
1 code implementation • 4 Oct 2022 • Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister, Adrien Gaidon
Robust planning in interactive scenarios requires predicting the uncertain future to make risk-aware decisions.
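As a generic illustration of this idea (not the paper's actual algorithm), the sketch below scores candidate plans by the conditional value-at-risk (CVaR) of their cost over sampled futures, so the planner prefers actions whose worst-case tail is benign. The trajectory sampler and cost function are hypothetical placeholders:

```python
import numpy as np

def cvar(costs: np.ndarray, alpha: float = 0.9) -> float:
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction of costs."""
    threshold = np.quantile(costs, alpha)
    return float(costs[costs >= threshold].mean())

def score_plan(plan, sample_future, cost_fn, n_samples: int = 100, alpha: float = 0.9) -> float:
    """Risk-aware score of one candidate plan under an uncertain future.

    `sample_future` and `cost_fn` are placeholder stand-ins for a learned
    trajectory predictor and a planning cost.
    """
    costs = np.array([cost_fn(plan, sample_future()) for _ in range(n_samples)])
    return cvar(costs, alpha)

# Toy usage: pick the plan with the lowest tail risk.
rng = np.random.default_rng(0)
sample_future = lambda: rng.normal()                                # stand-in predictor
cost_fn = lambda plan, future: abs(future) * (2.0 if plan == "aggressive" else 1.0)
best = min(["cautious", "aggressive"], key=lambda p: score_plan(p, sample_future, cost_fn))
print(best)  # "cautious": lower cost in the worst-case tail
```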
no code implementations • 28 Apr 2022 • Rowan McAllister, Blake Wulfe, Jean Mercat, Logan Ellis, Sergey Levine, Adrien Gaidon
Autonomous vehicle software is typically structured as a modular pipeline of individual components (e.g., perception, prediction, and planning) to help separate concerns into interpretable sub-tasks.
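To make that pipeline structure concrete, here is a minimal sketch of such a modular decomposition; the module names, intermediate representations, and interfaces are illustrative, not the paper's code:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Scene:            # illustrative intermediate representations
    objects: list

@dataclass
class Forecast:
    trajectories: list

@dataclass
class Plan:
    waypoints: list

class Perception(Protocol):
    def __call__(self, sensor_data) -> Scene: ...

class Prediction(Protocol):
    def __call__(self, scene: Scene) -> Forecast: ...

class Planning(Protocol):
    def __call__(self, scene: Scene, forecast: Forecast) -> Plan: ...

def drive(sensor_data, perceive: Perception, predict: Prediction, plan: Planning) -> Plan:
    """Each stage is an interpretable sub-task behind a typed interface."""
    scene = perceive(sensor_data)
    forecast = predict(scene)
    return plan(scene, forecast)
```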
no code implementations • ICLR 2022 • Blake Wulfe, Ashwin Balakrishna, Logan Ellis, Jean Mercat, Rowan McAllister, Adrien Gaidon
The ability to learn reward functions plays an important role in enabling the deployment of intelligent agents in the real world.
1 code implementation • 26 Apr 2021 • Boris Ivanovic, Kuan-Hui Lee, Pavel Tokmakov, Blake Wulfe, Rowan McAllister, Adrien Gaidon, Marco Pavone
Reasoning about the future behavior of other agents is critical to safe robot navigation.
1 code implementation • 21 Apr 2021 • Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine
Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents.
no code implementations • NeurIPS 2021 • Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.
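For instance, a sparse goal-reaching reward defines the task but gives no learning signal until success, while a hand-shaped variant adds a distance term to guide exploration. A minimal illustration of that contrast (not the paper's method, which instead derives objectives from a variational formulation):

```python
import numpy as np

def sparse_reward(state: np.ndarray, goal: np.ndarray, eps: float = 0.1) -> float:
    """Defines the task: +1 only when the goal is reached."""
    return float(np.linalg.norm(state - goal) < eps)

def shaped_reward(state: np.ndarray, goal: np.ndarray, eps: float = 0.1) -> float:
    """Adds a hand-designed shaping term so progress toward the goal is rewarded."""
    return sparse_reward(state, goal, eps) - 0.1 * float(np.linalg.norm(state - goal))
```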
2 code implementations • ICML 2020 • Angelos Filos, Panagiotis Tigas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, Yarin Gal
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions.
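One common way to operationalize this, used in spirit by ensemble-based approaches like this one (the sketch below is a generic illustration, not the paper's planner), is to treat disagreement among independently trained models as an epistemic-uncertainty signal for flagging OOD inputs:

```python
import numpy as np

class Ensemble:
    """Generic ensemble wrapper; `models` are any predictors with .predict()."""
    def __init__(self, models):
        self.models = models

    def predict(self, x):
        preds = np.stack([m.predict(x) for m in self.models])
        disagreement = preds.var(axis=0).mean()   # epistemic-uncertainty proxy
        return preds.mean(axis=0), disagreement

def is_ood(disagreement: float, threshold: float) -> bool:
    """Flag inputs where members disagree more than they did on validation data."""
    return disagreement > threshold

# Toy usage: three "models" that agree near 0 and diverge far from it.
class Toy:
    def __init__(self, w): self.w = w
    def predict(self, x): return self.w * x

ens = Ensemble([Toy(0.9), Toy(1.0), Toy(1.1)])
_, d_in = ens.predict(np.array([0.1]))
_, d_out = ens.predict(np.array([10.0]))
print(d_in < d_out)  # True: disagreement grows away from the training regime
```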
2 code implementations • 18 Jun 2020 • Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying on either domain knowledge or pixel reconstruction.
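The paper's approach (Deep Bisimulation for Control) trains an encoder so that distances in latent space match a bisimulation metric: two states are close exactly when they yield similar rewards and similar transition distributions. A simplified sketch of that loss, assuming diagonal-Gaussian latent dynamics so the 2-Wasserstein term has a closed form (details follow the paper only loosely):

```python
import torch
import torch.nn.functional as F

def bisim_loss(z_i, z_j, r_i, r_j, mu_i, mu_j, sigma_i, sigma_j, gamma=0.99):
    """Match latent L1 distance to a bisimulation target:
    |r_i - r_j| + gamma * W2(P(.|z_i), P(.|z_j))."""
    z_dist = torch.abs(z_i - z_j).sum(dim=-1)            # distance between encodings
    r_dist = torch.abs(r_i - r_j)                        # reward difference
    w2 = torch.sqrt(((mu_i - mu_j) ** 2).sum(-1)         # closed-form W2 between
                    + ((sigma_i - sigma_j) ** 2).sum(-1))  # diagonal Gaussians
    target = r_dist + gamma * w2
    return F.mse_loss(z_dist, target.detach())
```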
2 code implementations • 23 Apr 2020 • Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine
Our experiments demonstrate that our online adaptation approach outperforms non-adaptive methods on a series of challenging suspended payload transportation tasks.
1 code implementation • 31 May 2019 • Brijen Thananjeyan, Ashwin Balakrishna, Ugo Rosolia, Felix Li, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine, Francesco Borrelli, Ken Goldberg
Reinforcement learning (RL) for robotics is challenging due to the difficulty in hand-engineering a dense cost function, which can lead to unintended behavior, and dynamical uncertainty, which makes exploration and constraint satisfaction challenging.
2 code implementations • ICCV 2019 • Nicholas Rhinehart, Rowan McAllister, Kris Kitani, Sergey Levine
For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information.
no code implementations • 27 Dec 2018 • Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine
Our method estimates an uncertainty measure about the model's prediction, taking into account an explicit (generative) model of the observation distribution to handle out-of-distribution inputs.
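A hedged sketch of that recipe: encode the observation with a generative model (a VAE here), sample plausible inputs from its latent posterior, and take the variance of the downstream model's predictions over those samples as the uncertainty estimate. The module names are placeholders, and the real method's details differ:

```python
import torch

def generative_uncertainty(obs, vae_encoder, vae_decoder, task_model, n_samples=10):
    """Estimate prediction uncertainty by propagating samples from a generative
    model of the observation distribution through the task model.

    `vae_encoder`, `vae_decoder`, and `task_model` are placeholder modules;
    the encoder returns (mu, log_var) of the latent posterior.
    """
    mu, log_var = vae_encoder(obs)
    std = torch.exp(0.5 * log_var)
    preds = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)   # sample the latent posterior
        x = vae_decoder(z)                     # plausible in-distribution input
        preds.append(task_model(x))
    preds = torch.stack(preds)
    return preds.mean(dim=0), preds.var(dim=0)  # prediction and its uncertainty
```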
1 code implementation • ICLR 2020 • Nicholas Rhinehart, Rowan McAllister, Sergey Levine
Yet, reward functions that evoke desirable behavior are often difficult to specify.
no code implementations • 27 Sep 2018 • Kurtland Chua, Rowan McAllister, Roberto Calandra, Sergey Levine
We show that both challenges can be addressed by representing model-uncertainty, which can both guide exploration in the unsupervised phase and ensure that the errors in the model are not exploited by the planner in the goal-directed phase.
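A minimal sketch of the exploration half of that idea, using disagreement among an ensemble of learned dynamics models as an intrinsic bonus that drives the agent toward states where the model is uncertain (a common instantiation; the paper's exact formulation may differ):

```python
import numpy as np

def exploration_bonus(state, action, dynamics_ensemble):
    """Intrinsic reward: ensemble disagreement on the predicted next state.

    `dynamics_ensemble` is a placeholder list of models mapping
    (state, action) -> predicted next state.
    """
    preds = np.stack([model(state, action) for model in dynamics_ensemble])
    return preds.var(axis=0).mean()   # high exactly where the models disagree
```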
10 code implementations • NeurIPS 2018 • Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine
Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance.
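The paper's PETS algorithm addresses this by combining an ensemble of probabilistic dynamics models with trajectory-sampling model-predictive control. A simplified sketch of the trajectory-sampling step, using random shooting in place of the paper's CEM optimizer (the dynamics models and reward function are placeholders):

```python
import numpy as np

def evaluate_sequence(s0, actions, ensemble, reward_fn, n_particles=20, rng=None):
    """Expected return of one action sequence: propagate particles, each step
    sampling the next state from a randomly chosen probabilistic ensemble member."""
    if rng is None:
        rng = np.random.default_rng()
    total = 0.0
    states = np.repeat(s0[None], n_particles, axis=0)
    for a in actions:
        next_states = np.empty_like(states)
        for i, s in enumerate(states):
            model = ensemble[rng.integers(len(ensemble))]
            mu, sigma = model(s, a)                 # probabilistic prediction
            next_states[i] = rng.normal(mu, sigma)  # sample to capture uncertainty
        total += np.mean([reward_fn(s, a) for s in states])
        states = next_states
    return total

def plan(s0, ensemble, reward_fn, horizon=10, n_candidates=100, action_dim=2):
    """Random-shooting MPC: return the first action of the best sampled sequence."""
    rng = np.random.default_rng()
    candidates = rng.uniform(-1, 1, size=(n_candidates, horizon, action_dim))
    returns = [evaluate_sequence(s0, seq, ensemble, reward_fn, rng=rng)
               for seq in candidates]
    return candidates[int(np.argmax(returns))][0]
```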
no code implementations • NeurIPS 2017 • Rowan McAllister, Carl Edward Rasmussen
This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm.
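To illustrate the contrast with naive post-hoc filtering, here is a minimal Kalman-style update for a scalar state that a policy could consume in place of raw noisy observations. This is a toy sketch under a fixed linear model; the paper instead filters within PILCO's Gaussian-process belief space:

```python
import numpy as np

def kalman_step(mu, var, y, process_var=0.01, obs_var=0.25):
    """One predict/update step for a scalar random-walk state model.
    Returns the filtered belief the policy acts on, instead of the raw noisy y."""
    var_pred = var + process_var          # predict: state belief diffuses
    k = var_pred / (var_pred + obs_var)   # Kalman gain
    mu_new = mu + k * (y - mu)            # update toward the observation
    var_new = (1 - k) * var_pred
    return mu_new, var_new

# Toy usage: filtering noisy observations of a constant latent state.
rng = np.random.default_rng(0)
mu, var, latent = 0.0, 1.0, 1.5
for _ in range(50):
    y = latent + rng.normal(scale=0.5)    # noisy observation
    mu, var = kalman_step(mu, var, y)
print(round(mu, 2))  # close to 1.5: the belief the policy should act on
```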
no code implementations • 8 Feb 2016 • Rowan McAllister, Carl Edward Rasmussen
We present a data-efficient reinforcement learning algorithm resistant to observation noise.