Search Results for author: Anirudh Vemula

Found 11 papers, 10 papers with code

Social Attention: Modeling Attention in Human Crowds

2 code implementations • 12 Oct 2017 • Anirudh Vemula, Katharina Muelling, Jean Oh

In this work, we propose Social Attention, a novel trajectory prediction model that captures the relative importance of each person when navigating in the crowd, irrespective of their proximity.
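The mechanism at the heart of the model is a learned soft attention over the other agents in the crowd. Below is a minimal sketch of that idea, assuming dot-product scoring and the names shown; the paper's actual formulation is a spatio-temporal graph model, not this toy:

```python
import numpy as np

def social_attention_weights(query, neighbor_feats):
    """Soft attention over neighboring agents.

    query:          (d,)   feature vector for the agent being predicted
    neighbor_feats: (n, d) feature vectors for the other agents

    Weights come from learned features, so a distant agent can still
    receive high importance -- attention is not tied to proximity.
    """
    scores = neighbor_feats @ query / np.sqrt(query.size)
    scores -= scores.max()                 # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()
```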

Navigate · Trajectory Prediction

Path Planning in Dynamic Environments with Adaptive Dimensionality

1 code implementation • 22 May 2016 • Anirudh Vemula, Katharina Muelling, Jean Oh

In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model.
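As a rough illustration of the adaptive-dimensionality idea (all callables below are hypothetical stand-ins, not the paper's implementation): plan in a cheap low-dimensional projection, validate the result against the full-dimensional constraints, and grow the full-dimensional region only where validation fails.

```python
def plan_adaptive_dim(start, goal, plan_low, validate, refine):
    """Sketch of an adaptive-dimensionality planning loop."""
    full_dim_regions = set()         # regions forced to full dimensionality
    while True:
        path = plan_low(start, goal, full_dim_regions)
        violation = validate(path)   # None if feasible in the full space
        if violation is None:
            return refine(path)      # final full-dimensional path
        full_dim_regions.add(violation)
```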

Robotics

Provably Efficient Imitation Learning from Observation Alone

1 code implementation • 27 May 2019 • Wen Sun, Anirudh Vemula, Byron Boots, J. Andrew Bagnell

We design a new model-free algorithm for ILFO, Forward Adversarial Imitation Learning (FAIL), which learns a sequence of time-dependent policies by minimizing an Integral Probability Metric between the observation distributions of the expert policy and the learner.
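For intuition about the Integral Probability Metric being minimized, here is the simplest possible instance, over unit-norm linear functions, where the supremum has a closed form. This is a toy illustration under that assumption, not FAIL's discriminator class:

```python
import numpy as np

def linear_ipm(expert_obs, learner_obs):
    """IPM over f(o) = w.o with ||w|| <= 1:
    sup_w E_expert[f] - E_learner[f] = ||mean(expert) - mean(learner)||.
    Each argument is an (n, d) array of observations at one time step;
    the time-t policy would be adjusted to shrink such a gap."""
    gap = expert_obs.mean(axis=0) - learner_obs.mean(axis=0)
    return float(np.linalg.norm(gap))
```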

Imitation Learning · OpenAI Gym

TRON: A Fast Solver for Trajectory Optimization with Non-Smooth Cost Functions

1 code implementation • 31 Mar 2020 • Anirudh Vemula, J. Andrew Bagnell

TRON achieves this by exploiting the structure of the objective to adaptively smooth the cost function, resulting in a sequence of objectives that can be efficiently optimized.
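One standard way to realize "adaptively smooth the cost function" is a homotopy over smooth surrogates, solved coarse-to-fine with warm starts. A hedged sketch of that pattern follows; TRON's actual smoothing is tailored to the objective's structure, and the surrogate and schedule here are illustrative:

```python
import numpy as np

def smooth_abs(x, mu):
    """Smooth surrogate for the non-smooth |x|: sqrt(x^2 + mu^2) - mu.
    Differentiable everywhere, and it recovers |x| as mu -> 0."""
    return np.sqrt(x * x + mu * mu) - mu

x = np.linspace(-1.0, 1.0, 5)
for mu in (1.0, 0.1, 0.01):           # progressively sharper objectives
    print(mu, smooth_abs(x, mu))      # approaches |x| as mu shrinks
```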

Robotics · Systems and Control

Exploration in Action Space

1 code implementation • 31 Mar 2020 • Anirudh Vemula, Wen Sun, J. Andrew Bagnell

Parameter space exploration methods with black-box optimization have recently been shown to outperform state-of-the-art approaches in continuous control reinforcement learning domains.
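A minimal sketch of what parameter-space exploration with black-box optimization looks like, in the evolution-strategies style; the callable `episode_return` and the hyperparameters are assumptions for illustration:

```python
import numpy as np

def es_gradient(theta, episode_return, sigma=0.1, n_samples=16):
    """Zeroth-order policy-gradient estimate via parameter perturbation:
    each rollout uses one coherent perturbed policy, and the gradient is
    the average of return-weighted perturbations."""
    eps = np.random.randn(n_samples, theta.size)
    returns = np.array([episode_return(theta + sigma * e) for e in eps])
    baseline = returns.mean()                 # simple variance reduction
    return eps.T @ (returns - baseline) / (n_samples * sigma)
```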

Continuous Control · reinforcement-learning · +1

Planning and Execution using Inaccurate Models with Provable Guarantees

1 code implementation • 9 Mar 2020 • Anirudh Vemula, Yash Oza, J. Andrew Bagnell, Maxim Likhachev

In this paper, we propose CMAX, an approach for interleaving planning and execution.
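A hedged sketch of the interleaving loop, under assumed callables (`plan`, `execute`, `predict`) and a discrete state space: remember transitions where the model was wrong so that replanning is biased away from them. This illustrates the idea, not the paper's exact algorithm:

```python
def cmax_loop(start, goal, plan, execute, predict):
    """Interleave planning and execution with an inaccurate model."""
    inaccurate = set()       # (state, action) pairs the model got wrong
    state = start
    while state != goal:
        path = plan(state, goal, inaccurate)   # avoids flagged pairs
        action = path[0]                       # execute first step only
        observed = execute(state, action)
        if observed != predict(state, action):
            inaccurate.add((state, action))    # flag the bad transition
        state = observed
    return state
```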

Contrasting Exploration in Parameter and Action Space: A Zeroth-Order Optimization Perspective

1 code implementation • 31 Jan 2019 • Anirudh Vemula, Wen Sun, J. Andrew Bagnell

Black-box optimizers that explore in parameter space have often been shown to outperform more sophisticated action space exploration methods developed specifically for the reinforcement learning problem.
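The zeroth-order perspective can be made concrete with the generic two-point gradient estimator below; read this way, parameter-space methods apply it with `x` as the policy parameters, while action-space methods effectively apply the same template to the actions along a trajectory. A sketch with assumed hyperparameters:

```python
import numpy as np

def two_point_grad(f, x, sigma=0.05, n_samples=32):
    """Two-point zeroth-order estimate of the gradient of a black-box f."""
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return g / n_samples
```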

Continuous Control · regression · +2

CMAX++: Leveraging Experience in Planning and Execution using Inaccurate Models

1 code implementation • 21 Sep 2020 • Anirudh Vemula, J. Andrew Bagnell, Maxim Likhachev

In this paper, we propose CMAX++, an approach that leverages real-world experience to improve the quality of resulting plans over successive repetitions of a robotic task.

Friction · Robot Navigation

Learning Optimal Decision Making for an Industrial Truck Unloading Robot using Minimal Simulator Runs

no code implementations • 13 Mar 2021 • Manash Pratim Das, Anirudh Vemula, Mayank Pathak, Sandip Aine, Maxim Likhachev

In this work, we investigate how the robot, with the help of a simulator, can learn to maximize the number of boxes unloaded by each action.

Decision Making · Multi-class Classification

On the Effectiveness of Iterative Learning Control

1 code implementation • 17 Nov 2021 • Anirudh Vemula, Wen Sun, Maxim Likhachev, J. Andrew Bagnell

However, there is little prior theoretical work that explains the effectiveness of ILC even in the presence of large modeling errors, where optimal control methods using the misspecified model (MM) often perform poorly.
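For context, the textbook ILC iteration whose effectiveness is at issue corrects the whole open-loop input sequence using the previous trial's tracking error; the scalar gain below is an illustrative simplification of the (typically model-informed) learning operator:

```python
import numpy as np

def ilc_update(u, error, gain=0.5):
    """Classic ILC update across trials k: u_{k+1}(t) = u_k(t) + L * e_k(t).
    'u' and 'error' are arrays over the time steps of one trial."""
    return u + gain * error
```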

Industrial Robots

The Virtues of Laziness in Model-based RL: A Unified Objective and Algorithms

1 code implementation • 1 Mar 2023 • Anirudh Vemula, Yuda Song, Aarti Singh, J. Andrew Bagnell, Sanjiban Choudhury

We propose a novel approach to addressing two fundamental challenges in Model-based Reinforcement Learning (MBRL): the computational expense of repeatedly finding a good policy in the learned model, and the objective mismatch between model fitting and policy computation.
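To frame those two challenges, here is the standard MBRL template they live in; this sketch (with hypothetical callables) is not the paper's algorithm. `solve_model` is the computational expense paid on every iteration, and `fit_model` optimizes a prediction objective different from the control objective, which is the mismatch:

```python
def mbrl_template(collect_data, fit_model, solve_model, policy, n_iters=10):
    """Vanilla model-based RL loop."""
    for _ in range(n_iters):
        data = collect_data(policy)      # real-environment rollouts
        model = fit_model(data)          # objective 1: model fitting
        policy = solve_model(model)      # objective 2: policy search (costly)
    return policy
```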

Computational Efficiency · Model-based Reinforcement Learning
