Search Results for author: J. Andrew Bagnell

Found 36 papers, 9 papers with code

A Critique of Strictly Batch Imitation Learning

no code implementations5 Oct 2021 Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu

Recent work by Jarrett et al. attempts to frame the problem of offline imitation learning (IL) as one of learning a joint energy-based model, with the hope of outperforming standard behavioral cloning.

Imitation Learning

Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap

1 code implementation4 Mar 2021 Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu

We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching.

Imitation Learning
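Under the moment-matching view, the imitation gap is controlled by the divergence between the expert's and the learner's expected features (moments). A minimal numpy sketch of that quantity, using made-up feature samples rather than the paper's estimators:

```python
import numpy as np

def moment_gap(expert_feats, learner_feats):
    # Difference of empirical feature expectations (moments) along trajectories
    return float(np.linalg.norm(expert_feats.mean(axis=0) -
                                learner_feats.mean(axis=0)))

rng = np.random.default_rng(0)
expert_feats = rng.normal(loc=1.0, size=(500, 4))   # hypothetical expert features
learner_feats = rng.normal(loc=0.0, size=(500, 4))  # hypothetical learner features
print(moment_gap(expert_feats, learner_feats))  # ~2.0: a large moment mismatch
```

A matching algorithm would drive this gap toward zero by updating the learner's policy.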

Feedback in Imitation Learning: The Three Regimes of Covariate Shift

no code implementations4 Feb 2021 Jonathan Spencer, Sanjiban Choudhury, Arun Venkatraman, Brian Ziebart, J. Andrew Bagnell

The learner often comes to rely on features that are strongly predictive of decisions, but are subject to strong covariate shift.

Causal Inference Decision Making +1

CMAX++: Leveraging Experience in Planning and Execution using Inaccurate Models

1 code implementation21 Sep 2020 Anirudh Vemula, J. Andrew Bagnell, Maxim Likhachev

In this paper we propose CMAX++, an approach that leverages real-world experience to improve the quality of resulting plans over successive repetitions of a robotic task.

Robot Navigation

TRON: A Fast Solver for Trajectory Optimization with Non-Smooth Cost Functions

1 code implementation31 Mar 2020 Anirudh Vemula, J. Andrew Bagnell

TRON achieves this by exploiting the structure of the objective to adaptively smooth the cost function, resulting in a sequence of objectives that can be efficiently optimized.

Robotics Systems and Control
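The adaptive-smoothing idea can be illustrated on the simplest non-smooth cost, |x|: optimize a smooth surrogate, then repeatedly tighten it. This toy sketch is only a stand-in for TRON's actual solver:

```python
import numpy as np

def smoothed_abs_grad(x, mu):
    # Gradient of sqrt(x^2 + mu^2), a smooth surrogate that approaches |x| as mu -> 0
    return x / np.sqrt(x * x + mu * mu)

x, mu = 3.0, 1.0
for stage in range(5):              # a sequence of increasingly sharp objectives
    lr = 0.5 * mu                   # step size matched to the surrogate's curvature
    for _ in range(200):
        x -= lr * smoothed_abs_grad(x, mu)
    mu *= 0.1                       # tighten the smoothing each stage
print(abs(x))  # near 0, the minimizer of the original non-smooth cost
```

Each stage starts from the previous stage's solution, which is why the sharper objectives remain easy to optimize.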

Exploration in Action Space

1 code implementation31 Mar 2020 Anirudh Vemula, Wen Sun, J. Andrew Bagnell

Parameter space exploration methods with black-box optimization have recently been shown to outperform state-of-the-art approaches in continuous control reinforcement learning domains.

Continuous Control
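Such parameter-space exploration can be sketched as antithetic random search on a toy return function; this is a simplified stand-in (the function, step size, and perturbation scale are all made up), not the method evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(theta):
    # Stand-in for an episodic return; the true optimum is theta = [1, -1]
    return -np.sum((theta - np.array([1.0, -1.0])) ** 2)

theta = np.zeros(2)
sigma, lr, n_dirs = 0.1, 0.05, 8
for _ in range(300):
    deltas = rng.normal(size=(n_dirs, 2))
    # Antithetic perturbations in *parameter* space; no per-step gradients needed
    grad_est = sum((rollout_return(theta + sigma * d) -
                    rollout_return(theta - sigma * d)) * d for d in deltas)
    theta += lr * grad_est / (2 * sigma * n_dirs)
print(theta)  # approaches [1, -1]
```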

Planning and Execution using Inaccurate Models with Provable Guarantees

1 code implementation9 Mar 2020 Anirudh Vemula, Yash Oza, J. Andrew Bagnell, Maxim Likhachev

In this paper, we propose CMAX, an approach for interleaving planning and execution.

Provably Efficient Imitation Learning from Observation Alone

1 code implementation27 May 2019 Wen Sun, Anirudh Vemula, Byron Boots, J. Andrew Bagnell

We design a new model-free algorithm for ILFO, Forward Adversarial Imitation Learning (FAIL), which learns a sequence of time-dependent policies by minimizing an Integral Probability Metric between the observation distributions of the expert policy and the learner.

Imitation Learning OpenAI Gym

Contrasting Exploration in Parameter and Action Space: A Zeroth-Order Optimization Perspective

1 code implementation31 Jan 2019 Anirudh Vemula, Wen Sun, J. Andrew Bagnell

Black-box optimizers that explore in parameter space have often been shown to outperform more sophisticated action space exploration methods developed specifically for the reinforcement learning problem.

Continuous Control

An Algorithmic Perspective on Imitation Learning

no code implementations16 Nov 2018 Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters

This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning.

Imitation Learning Learning Theory

Truncated Horizon Policy Search: Combining Reinforcement Learning & Imitation Learning

no code implementations ICLR 2018 Wen Sun, J. Andrew Bagnell, Byron Boots

In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle.

Imitation Learning
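Reward shaping with an oracle is commonly realized as potential-based shaping, using the oracle's value estimate as the potential; the shaping terms telescope over an episode, so the optimal policy is unchanged. A sketch with made-up potentials:

```python
def shaped_reward(r, phi_s, phi_next, gamma=1.0):
    # Potential-based shaping (Ng, Harada & Russell, 1999); here phi would
    # come from the oracle's value estimate.
    return r + gamma * phi_next - phi_s

rewards = [1.0, 0.0, 2.0]
phi = [5.0, 3.0, 4.0, 0.0]  # hypothetical oracle values along a 3-step episode
shaped = [shaped_reward(rewards[t], phi[t], phi[t + 1]) for t in range(3)]
# With gamma=1 the shaping terms telescope: the shaped return differs from the
# raw return only by the boundary term phi[-1] - phi[0]
print(sum(shaped) - sum(rewards))  # -5.0 == phi[-1] - phi[0]
```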

Dual Policy Iteration

no code implementations NeurIPS 2018 Wen Sun, Geoffrey J. Gordon, Byron Boots, J. Andrew Bagnell

Recently, a novel class of Approximate Policy Iteration (API) algorithms has demonstrated impressive practical performance (e.g., ExIt from [2], AlphaGo-Zero from [27]).

Continuous Control

Log-DenseNet: How to Sparsify a DenseNet

1 code implementation ICLR 2018 Hanzhang Hu, Debadeepta Dey, Allison Del Giorno, Martial Hebert, J. Andrew Bagnell

Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency.

Semantic Segmentation
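Log-DenseNet's sparsification keeps only log-spaced skip connections; roughly, layer i draws input from layers at power-of-two distances, so each layer has O(log i) inputs instead of the i inputs of a full DenseNet. A sketch of that connection pattern (the paper's exact indexing may differ):

```python
def log_dense_inputs(i):
    """Indices of earlier layers feeding layer i: those at distance 2^k."""
    preds, k = [], 0
    while i - (1 << k) >= 0:
        preds.append(i - (1 << k))
        k += 1
    return preds

print(log_dense_inputs(10))  # [9, 8, 6, 2]
```

Recent layers are densely connected while distant ones are reached through a few long skips, keeping backpropagation paths short.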

Predictive-State Decoders: Encoding the Future into Recurrent Networks

no code implementations NeurIPS 2017 Arun Venkatraman, Nicholas Rhinehart, Wen Sun, Lerrel Pinto, Martial Hebert, Byron Boots, Kris M. Kitani, J. Andrew Bagnell

We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations.

Imitation Learning

Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing

no code implementations22 Aug 2017 Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell

Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition datasets and models than non-adaptive approaches, including weighting all losses equally.

Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction

no code implementations ICML 2017 Wen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, J. Andrew Bagnell

We demonstrate that AggreVaTeD, a policy gradient extension of the Imitation Learning (IL) approach of (Ross & Bagnell, 2014), can leverage such an oracle to achieve faster and better solutions with less training data than a less-informed Reinforcement Learning (RL) technique.

Decision Making Dependency Parsing +1

Gradient Boosting on Stochastic Data Streams

no code implementations1 Mar 2017 Hanzhang Hu, Wen Sun, Arun Venkatraman, Martial Hebert, J. Andrew Bagnell

To generalize from batch to online, we first introduce the notion of an online weak learning edge; for strongly convex and smooth loss functions, we then present Streaming Gradient Boosting (SGB), an algorithm with exponential shrinkage guarantees in the number of weak learners.
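A rough sketch of the streaming idea: each arriving example updates a chain of online weak learners, each fitting the residual left by the learners before it. The LMS weak learner and shrinkage constant here are illustrative choices, not the paper's:

```python
import numpy as np

class OnlineLinear:
    """Weak learner: a linear model updated by one LMS step per example."""
    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr
    def predict(self, x):
        return float(self.w @ x)
    def update(self, x, target):
        self.w += self.lr * (target - self.w @ x) * x

def stream_boost(learners, x, y, shrink=0.5):
    # One boosting pass on a streamed example: each weak learner fits the
    # residual (the negative functional gradient of squared loss) left so far.
    pred = 0.0
    for h in learners:
        h.update(x, y - pred)
        pred += shrink * h.predict(x)

def ensemble_predict(learners, x, shrink=0.5):
    return sum(shrink * h.predict(x) for h in learners)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
learners = [OnlineLinear(2) for _ in range(3)]
for _ in range(2000):
    x = rng.normal(size=2)
    stream_boost(learners, x, float(true_w @ x))
print(ensemble_predict(learners, np.array([1.0, 1.0])))  # approaches true_w @ [1, 1] = 1
```

Each extra weak learner shrinks the remaining error geometrically, mirroring the shrinkage guarantee.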

A Discriminative Framework for Anomaly Detection in Large Videos

no code implementations28 Sep 2016 Allison Del Giorno, J. Andrew Bagnell, Martial Hebert

We address an anomaly detection setting in which training sequences are unavailable and anomalies are scored independently of temporal ordering.

Anomaly Detection Density Estimation

Learning Transferable Policies for Monocular Reactive MAV Control

no code implementations1 Aug 2016 Shreyansh Daftry, J. Andrew Bagnell, Martial Hebert

The ability to transfer knowledge gained in previous tasks into new contexts is one of the most important mechanisms of human learning.

Introspective Perception: Learning to Predict Failures in Vision Systems

no code implementations28 Jul 2016 Shreyansh Daftry, Sam Zeng, J. Andrew Bagnell, Martial Hebert

As robots aspire for long-term autonomous operations in complex dynamic environments, the ability to reliably take mission-critical decisions in ambiguous situations becomes critical.

Learning to Filter with Predictive State Inference Machines

no code implementations30 Dec 2015 Wen Sun, Arun Venkatraman, Byron Boots, J. Andrew Bagnell

Latent state space models are a fundamental and widely used tool for modeling dynamical systems.

Predicting Multiple Structured Visual Interpretations

no code implementations ICCV 2015 Debadeepta Dey, Varun Ramakrishna, Martial Hebert, J. Andrew Bagnell

We present a simple approach for producing a small number of structured visual outputs which have high recall, for a variety of tasks including monocular pose estimation and semantic scene segmentation.

Pose Estimation Scene Segmentation

Solving Games with Functional Regret Estimation

no code implementations28 Nov 2014 Kevin Waugh, Dustin Morrill, J. Andrew Bagnell, Michael Bowling

We propose a novel online learning method for minimizing regret in large extensive-form games.
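The primitive underneath such methods is regret matching: play each action in proportion to its positive cumulative regret, so that time-averaged play approaches equilibrium. A tabular self-play sketch on rock-paper-scissors (not the paper's function-approximation variant):

```python
import numpy as np

# Rock-paper-scissors payoffs for the row player (zero-sum)
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def regret_matching(regrets):
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

T = 20000
r1 = np.array([1., 0., 0.])  # asymmetric start so play is not already uniform
r2 = np.array([0., 1., 0.])
avg1 = np.zeros(3)
for _ in range(T):
    s1, s2 = regret_matching(r1), regret_matching(r2)
    avg1 += s1
    u1 = A @ s2                 # expected payoff of each pure action for player 1
    u2 = -(A.T @ s1)            # zero-sum: player 2's payoffs are negated
    r1 += u1 - s1 @ u1          # accumulate regret against current play
    r2 += u2 - s2 @ u2
print(avg1 / T)  # average strategy approaches the uniform equilibrium
```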

A Unified View of Large-scale Zero-sum Equilibrium Computation

no code implementations18 Nov 2014 Kevin Waugh, J. Andrew Bagnell

The task of computing approximate Nash equilibria in large zero-sum extensive-form games has received a tremendous amount of attention due mainly to the Annual Computer Poker Competition.

Visual Chunking: A List Prediction Framework for Region-Based Object Detection

no code implementations27 Oct 2014 Nicholas Rhinehart, Jiaji Zhou, Martial Hebert, J. Andrew Bagnell

We present an efficient algorithm with provable performance for building a high-quality list of detections from any candidate set of region-based proposals.

Chunking Object Detection

Efficient Feature Group Sequencing for Anytime Linear Prediction

no code implementations19 Sep 2014 Hanzhang Hu, Alexander Grubb, J. Andrew Bagnell, Martial Hebert

We theoretically guarantee that our algorithms achieve near-optimal linear predictions at each budget when a feature group is chosen.

Reinforcement and Imitation Learning via Interactive No-Regret Learning

no code implementations23 Jun 2014 Stephane Ross, J. Andrew Bagnell

Recent work has demonstrated that problems, particularly imitation learning and structured prediction, where a learner's predictions influence the input distribution it is tested on can be naturally addressed by an interactive approach and analyzed using no-regret online learning.

Imitation Learning Structured Prediction

SpeedMachines: Anytime Structured Prediction

no code implementations2 Dec 2013 Alexander Grubb, Daniel Munoz, J. Andrew Bagnell, Martial Hebert

Structured prediction plays a central role in machine learning applications from computational biology to computer vision.

General Classification Scene Understanding +1

Computational Rationalization: The Inverse Equilibrium Problem

no code implementations15 Aug 2013 Kevin Waugh, Brian D. Ziebart, J. Andrew Bagnell

Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task.

Learning Policies for Contextual Submodular Prediction

no code implementations11 May 2013 Stephane Ross, Jiaji Zhou, Yisong Yue, Debadeepta Dey, J. Andrew Bagnell

Many prediction domains, such as ad placement, recommendation, trajectory prediction, and document summarization, require predicting a set or list of options.

Document Summarization News Recommendation +1

A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

5 code implementations2 Nov 2010 Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell

Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumption made in statistical learning.

Imitation Learning Structured Prediction
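The resulting algorithm, DAgger, repeatedly rolls out the current learner, queries the expert on the states the learner itself visits, and retrains on all data gathered so far. A toy 1-D sketch in which the dynamics, expert, and linear learner are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(s):
    # Hypothetical expert for a 1-D toy task: push the state back toward 0
    return float(np.clip(-s, -1, 1))

def rollout(policy, steps=30):
    s, visited = float(rng.normal()), []
    for _ in range(steps):
        visited.append(s)
        s = s + 0.5 * policy(s) + 0.1 * float(rng.normal())
    return visited

# DAgger loop: the expert labels the states the *learner* visits, so the
# training distribution tracks the learner's own induced state distribution.
data_s, data_a = [], []
w = 0.0  # learner: a linear policy a = w * s
for _ in range(10):
    policy = lambda s, w=w: float(np.clip(w * s, -1, 1))
    for s in rollout(policy):
        data_s.append(s)
        data_a.append(expert(s))
    X, y = np.array(data_s), np.array(data_a)
    w = float(X @ y / (X @ X))  # least-squares fit on the aggregated dataset
print(w)  # negative: the learner has learned to push the state back toward 0
```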
