Search Results for author: Nicholas Rhinehart

Found 25 papers, 6 papers with code

Visual Chunking: A List Prediction Framework for Region-Based Object Detection

no code implementations 27 Oct 2014 Nicholas Rhinehart, Jiaji Zhou, Martial Hebert, J. Andrew Bagnell

We present an efficient algorithm with provable performance for building a high-quality list of detections from any candidate set of region-based proposals.

Chunking, Object Detection +2
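
The core operation the abstract describes — assembling a high-quality, low-redundancy list of detections from scored region proposals — can be pictured as a simple greedy loop. This is a hypothetical sketch of generic list prediction, not the paper's provable algorithm; `scores` and `iou` are assumed inputs.

```python
def greedy_detection_list(proposals, scores, iou, budget=10, penalty=0.5):
    """Greedily build a detection list: repeatedly pick the proposal whose
    score, discounted by overlap with already-chosen regions, is highest.
    iou[i][j] is the intersection-over-union of proposals i and j.
    Hypothetical sketch, not the paper's algorithm."""
    chosen, remaining = [], set(range(len(proposals)))
    while remaining and len(chosen) < budget:
        def value(i):
            overlap = max((iou[i][j] for j in chosen), default=0.0)
            return scores[i] * (1.0 - penalty * overlap)
        best = max(remaining, key=value)
        chosen.append(best)
        remaining.remove(best)
    return [proposals[i] for i in chosen]
```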

Learning Action Maps of Large Environments via First-Person Vision

no code implementations CVPR 2016 Nicholas Rhinehart, Kris M. Kitani

When people observe and interact with physical spaces, they are able to associate functionality with regions in the environment.

First-Person Activity Forecasting with Online Inverse Reinforcement Learning

no code implementations ICCV 2017 Nicholas Rhinehart, Kris M. Kitani

We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek.

Reinforcement Learning (RL) +1

Predictive-State Decoders: Encoding the Future into Recurrent Networks

no code implementations NeurIPS 2017 Arun Venkatraman, Nicholas Rhinehart, Wen Sun, Lerrel Pinto, Martial Hebert, Byron Boots, Kris M. Kitani, J. Andrew Bagnell

We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations.

Imitation Learning
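
A minimal sketch of the predictive-state idea, assuming a PyTorch GRU and a k-step squared-error auxiliary loss (illustrative, not the authors' implementation): an extra decoder head is trained to predict the next k observations from the RNN's hidden state, and its loss is added to the primary task loss.

```python
import torch
import torch.nn as nn

class PSDRegularizedRNN(nn.Module):
    """Illustrative sketch: an RNN whose hidden state is also trained to
    decode the next k observations (the predictive-state decoder)."""
    def __init__(self, obs_dim, hidden_dim, out_dim, k=3):
        super().__init__()
        self.k = k
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.task_head = nn.Linear(hidden_dim, out_dim)     # primary task output
        self.psd_head = nn.Linear(hidden_dim, k * obs_dim)  # decode future observations

    def forward(self, obs):                  # obs: (B, T, obs_dim)
        h, _ = self.rnn(obs)                 # hidden states: (B, T, hidden_dim)
        return self.task_head(h), self.psd_head(h)

def psd_loss(future_pred, obs, k):
    """Supervise the hidden state at time t to predict obs[t+1 : t+k+1]."""
    B, T, D = obs.shape
    targets = torch.stack([obs[:, t + 1 : t + 1 + k] for t in range(T - k)], dim=1)
    preds = future_pred[:, : T - k].reshape(B, T - k, k, D)
    return ((preds - targets) ** 2).mean()
```

The auxiliary loss would be weighted and summed with the task loss during training; the decoder head can be discarded at inference time.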

Learning Neural Parsers with Deterministic Differentiable Imitation Learning

no code implementations 20 Jun 2018 Tanmay Shankar, Nicholas Rhinehart, Katharina Muelling, Kris M. Kitani

We introduce a novel deterministic policy gradient update, DRAG (i.e., DeteRministically AGgrevate), in the form of a deterministic actor-critic variant of AggreVaTeD, to train our neural parser.

Imitation Learning

Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning

no code implementations 22 Jun 2018 Xinlei Pan, Eshed Ohn-Bar, Nicholas Rhinehart, Yan Xu, Yilin Shen, Kris M. Kitani

The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states.

Reinforcement Learning (RL)

R2P2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting

no code implementations ECCV 2018 Nicholas Rhinehart, Kris M. Kitani, Paul Vernaza

We propose a method to forecast a vehicle's ego-motion as a distribution over spatiotemporal paths, conditioned on features (e.g., from LIDAR and images) embedded in an overhead map.
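
A rough sketch of the reparameterized-pushforward idea under simplifying assumptions (2-D waypoints, a GRU recurrence, an affine per-step map; illustrative, not the paper's model): base noise z_t ~ N(0, I) is pushed through a learned, context-conditioned map to produce trajectory samples, and because each step is affine in z_t, sample likelihoods remain tractable when the per-step scale matrix is invertible.

```python
import torch
import torch.nn as nn

class PushforwardPolicy(nn.Module):
    """Illustrative sketch (not the authors' code): map base noise to
    waypoints via x_t = mu(h_t) + S(h_t) @ z_t, with h_t a recurrent
    summary of past waypoints and map/context features."""
    def __init__(self, ctx_dim, hidden_dim=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.rnn = nn.GRUCell(2 + ctx_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, 2)      # next-waypoint mean
        self.scale = nn.Linear(hidden_dim, 4)   # entries of a 2x2 scale matrix

    def forward(self, context, z):              # context: (B, ctx_dim), z: (B, horizon, 2)
        B = context.shape[0]
        x = torch.zeros(B, 2)
        h = torch.zeros(B, self.rnn.hidden_size)
        traj = []
        for t in range(self.horizon):
            h = self.rnn(torch.cat([x, context], dim=-1), h)
            S = self.scale(h).view(B, 2, 2)
            x = self.mu(h) + (S @ z[:, t].unsqueeze(-1)).squeeze(-1)
            traj.append(x)
        return torch.stack(traj, dim=1)         # (B, horizon, 2)
```

Drawing many z samples yields diverse forecasts; evaluating the density of an observed path under the pushforward exercises the "precise" half of the objective.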

Learning Gibbs-regularized GANs with variational discriminator reparameterization

no code implementations 27 Sep 2018 Nicholas Rhinehart, Anqi Liu, Kihyuk Sohn, Paul Vernaza

We propose a novel approach to regularizing generative adversarial networks (GANs) leveraging learned structured Gibbs distributions.

Trajectory Forecasting

Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information

no code implementations ICLR 2019 Arjun Sharma, Mohit Sharma, Nicholas Rhinehart, Kris M. Kitani

The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging.

Imitation Learning

Generative Hybrid Representations for Activity Forecasting with No-Regret Learning

no code implementations CVPR 2020 Jiaqi Guan, Ye Yuan, Kris M. Kitani, Nicholas Rhinehart

Automatically reasoning about future human behaviors is a difficult problem but has significant practical applications to assistive systems.

PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings

2 code implementations ICCV 2019 Nicholas Rhinehart, Rowan McAllister, Kris Kitani, Sergey Levine

For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information.

Autonomous Vehicles

Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting

no code implementations 18 Mar 2020 Xinshuo Weng, Jianren Wang, Sergey Levine, Kris Kitani, Nicholas Rhinehart

Through experiments on a robotic manipulation dataset and two driving datasets, we show that SPFNet is effective for the SPF task, that our forecast-then-detect pipeline outperforms the detect-then-forecast approaches we compared against, and that pose forecasting performance improves with the addition of unlabeled data.

Decision Making, Future Prediction +1

Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?

2 code implementations ICML 2020 Angelos Filos, Panagiotis Tigas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, Yarin Gal

Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions.

Autonomous Vehicles, Out-of-Distribution (OOD) Detection

Conservative Safety Critics for Exploration

no code implementations ICLR 2021 Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg

Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions while still enabling trial-and-error learning.

Reinforcement Learning (RL), Safe Exploration
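
One way to picture the mechanism (a hypothetical sketch, not the published algorithm): a learned safety critic scores candidate exploratory actions by estimated failure probability, and the agent executes only actions below a conservative threshold, falling back to the least risky candidate when none qualify.

```python
import torch

def filter_with_safety_critic(actions, risks, threshold=0.1):
    """Hypothetical sketch of safety-critic action filtering.
    actions: (n, act_dim) candidate exploratory actions;
    risks: (n,) critic-estimated failure probabilities in [0, 1]."""
    safe = risks <= threshold
    if safe.any():
        idx = torch.multinomial(safe.float(), 1).item()  # uniform over the safe set
    else:
        idx = risks.argmin().item()                      # least risky fallback
    return actions[idx]
```

In a full system the candidates would come from the exploration policy and the critic would be trained conservatively (biased toward overestimating risk), which is the property the title refers to.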

Parrot: Data-Driven Behavioral Priors for Reinforcement Learning

no code implementations ICLR 2021 Avi Singh, Huihan Liu, Gaoyue Zhou, Albert Yu, Nicholas Rhinehart, Sergey Levine

Reinforcement learning provides a general framework for flexible decision making and control, but requires extensive data collection for each new task that an agent needs to learn.

Decision Making, Reinforcement Learning +1

ViNG: Learning Open-World Navigation with Visual Goals

no code implementations 17 Dec 2020 Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform.

Navigate, Reinforcement Learning +1

Rapid Exploration for Open-World Navigation with Latent Goal Models

no code implementations 12 Apr 2021 Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.

Autonomous Navigation

Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models

1 code implementation 21 Apr 2021 Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine

Humans have a remarkable ability to make decisions by accurately reasoning about future events, including the future behaviors and states of mind of other agents.

Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments

no code implementations ICML Workshop URL 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.

Information is Power: Intrinsic Control via Information Capture

no code implementations NeurIPS 2021 Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D. Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine

We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
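
The objective lends itself to a compact sketch (illustrative, assuming a discrete belief over latent states produced by some latent state-space model; not the authors' code): the intrinsic reward is the negative entropy of the agent's current belief, so the agent is rewarded for capturing information and keeping its environment predictable.

```python
import torch

def belief_entropy_reward(belief_logits):
    """Illustrative sketch: reward low-entropy beliefs.
    belief_logits: (B, num_states) parameterizes the agent's belief over
    latent environment states; intrinsic reward = -H(belief)."""
    log_p = torch.log_softmax(belief_logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)   # (B,)
    return -entropy
```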

CARFF: Conditional Auto-encoded Radiance Field for 3D Scene Forecasting

no code implementations 31 Jan 2024 Jiezhi Yang, Khushi Desai, Charles Packer, Harshil Bhatia, Nicholas Rhinehart, Rowan McAllister, Joseph Gonzalez

We demonstrate the utility of our method in realistic scenarios using the CARLA driving simulator, where CARFF can be used to enable efficient trajectory and contingency planning in complex multi-agent autonomous driving scenarios involving visual occlusions.

Autonomous Driving, Neural Rendering
