Search Results for author: Siddhartha Srinivasa

Found 19 papers, 8 papers with code

Leveraging Experience in Lazy Search

no code implementations • 10 Oct 2021 • Mohak Bhardwaj, Sanjiban Choudhury, Byron Boots, Siddhartha Srinivasa

If new search problems are sufficiently similar to problems solved during training, the learned policy will choose a good edge evaluation ordering and solve the motion planning problem quickly.

Imitation Learning · Motion Planning
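The snippet above describes lazy search, where the order in which edges are collision-checked determines how quickly the planner converges. A minimal LazySP-style sketch of that loop (the toy graph, collision oracle, and ordering policies below are illustrative assumptions, not the authors' code):

```python
import heapq

def shortest_path(graph, src, dst, invalid):
    """Dijkstra over the graph, skipping edges already known to be invalid."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if (u, v) in invalid:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

def lazy_search(graph, src, dst, collides, order_policy):
    """LazySP-style loop: plan optimistically, then evaluate edges on the
    candidate path in the order chosen by order_policy until one fails."""
    invalid, checks = set(), 0
    while True:
        path = shortest_path(graph, src, dst, invalid)
        if path is None:
            return None, checks
        for edge in order_policy(path):
            checks += 1
            if collides(edge):
                invalid.add(edge)
                break
        else:  # every edge on the path is valid
            return path, checks
```

On a toy graph with one blocked optimistic edge, an ordering that fails fast needs fewer collision checks than naive front-to-back evaluation; that check count is exactly the effort a learned ordering policy is trained to reduce.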

Faster Policy Learning with Continuous-Time Gradients

3 code implementations • 12 Dec 2020 • Samuel Ainsworth, Kendall Lowrey, John Thickstun, Zaid Harchaoui, Siddhartha Srinivasa

We study the estimation of policy gradients for continuous-time systems with known dynamics.
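With known dynamics, a continuous-time policy gradient can be obtained by integrating a sensitivity equation alongside the state trajectory. The scalar system, linear policy, and explicit-Euler discretization below are toy assumptions for illustration, not the paper's method:

```python
def rollout_grad(theta, a=-1.0, x0=1.0, T=1.0, dt=1e-3):
    """Policy gradient for a scalar continuous-time system via forward
    sensitivity analysis, discretized with explicit Euler.

    Dynamics: dx/dt = (a + theta) * x   (linear policy u = theta * x)
    Cost:     J = integral of x^2 dt over [0, T]
    The sensitivity s = dx/dtheta obeys ds/dt = (a + theta) * s + x.
    """
    x, s, J, dJ = x0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        J += x * x * dt          # accumulate the running cost
        dJ += 2.0 * x * s * dt   # accumulate dJ/dtheta via the chain rule
        # Euler step for the state and its sensitivity (old values on the RHS)
        x, s = x + (a + theta) * x * dt, s + ((a + theta) * s + x) * dt
    return J, dJ
```

Because the sensitivity recursion is the exact derivative of the Euler iteration, the returned gradient matches a finite-difference check on the discretized cost to high precision.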

Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity

1 code implementation • 28 Sep 2020 • William Agnew, Christopher Xie, Aaron Walsman, Octavian Murad, Caelen Wang, Pedro Domingos, Siddhartha Srinivasa

By using these priors over the physical properties of objects, our system improves not only reconstruction quality under standard visual metrics, but also the performance of model-based control on a variety of robotic manipulation tasks in challenging, cluttered environments.

3D Object Reconstruction · 3D Reconstruction

Mo' States Mo' Problems: Emergency Stop Mechanisms from Observation

1 code implementation • NeurIPS 2019 • Samuel Ainsworth, Matt Barnes, Siddhartha Srinivasa

In many environments, only a relatively small subset of the complete state space is necessary in order to accomplish a given task.

reinforcement-learning

Imitation Learning as $f$-Divergence Minimization

no code implementations • 30 May 2019 • Liyiming Ke, Sanjiban Choudhury, Matt Barnes, Wen Sun, Gilwoo Lee, Siddhartha Srinivasa

We show that state-of-the-art methods such as GAIL and behavior cloning, due to their choice of loss function, often incorrectly interpolate between the modes of a multimodal expert.

Imitation Learning
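The mode-interpolation failure comes from the direction of the divergence being minimized: forward KL (the implicit behavior-cloning loss) is mode-covering, while reverse KL is mode-seeking. A numeric sketch with a bimodal "expert" and a unimodal learner (the densities, grid, and fixed learner width are all toy assumptions):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expert_pdf(x):
    # Bimodal "expert": two narrow modes at -2 and +2
    return 0.5 * normal_pdf(x, -2.0, 0.3) + 0.5 * normal_pdf(x, 2.0, 0.3)

def kl(p, q, xs, dx):
    """Numerical D(p || q) on a grid; 0 * log 0 terms are skipped."""
    total = 0.0
    for x in xs:
        px = p(x)
        if px > 0.0:
            total += px * math.log(px / max(q(x), 1e-300)) * dx
    return total

xs = [-6.0 + i * 0.01 for i in range(1201)]
dx = 0.01
sigma_q = 0.3  # unimodal learner with fixed width

def best_mean(objective):
    # Grid search over the learner's mean under the given divergence
    means = [-3.0 + i * 0.05 for i in range(121)]
    return min(means, key=objective)

# Forward KL (behavior-cloning direction): mode-covering
forward = best_mean(lambda m: kl(expert_pdf, lambda x: normal_pdf(x, m, sigma_q), xs, dx))
# Reverse KL: mode-seeking
reverse = best_mean(lambda m: kl(lambda x: normal_pdf(x, m, sigma_q), expert_pdf, xs, dx))
```

The forward-KL mean lands near 0, halfway between the modes where the expert has almost no density (the "incorrect interpolation" above), while the reverse-KL mean locks onto one mode at ±2.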

Improving Robot Success Detection using Static Object Data

1 code implementation • 2 Apr 2019 • Rosario Scalise, Jesse Thomason, Yonatan Bisk, Siddhartha Srinivasa

We collect over 13 hours of egocentric manipulation data for training a model to reason about whether a robot successfully placed unseen objects in or on one another.

Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation

1 code implementation • CVPR 2019 • Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa

We present the Frontier Aware Search with backTracking (FAST) Navigator, a general framework for action decoding, that achieves state-of-the-art results on the Room-to-Room (R2R) Vision-and-Language navigation challenge of Anderson et al.

Vision and Language Navigation · Vision-Language Navigation

The Assistive Multi-Armed Bandit

1 code implementation • 24 Jan 2019 • Lawrence Chan, Dylan Hadfield-Menell, Siddhartha Srinivasa, Anca Dragan

Learning preferences implicit in the choices humans make is a well studied problem in both economics and computer science.

Multi-Armed Bandits

Learning Configuration Space Belief Model from Collision Checks for Motion Planning

no code implementations • 22 Jan 2019 • Sumit Kumar, Shushman Choudhary, Siddhartha Srinivasa

Our aim is to reduce the expected number of collision checks by creating a belief model of the configuration space using results from collision tests.

Motion Planning
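A belief model of the configuration space built from past collision tests can be sketched as a nearest-neighbor estimate of collision probability (the k-NN form and the toy 2-D configurations are illustrative assumptions, not the paper's model):

```python
import math

def knn_collision_belief(checked, q, k=3):
    """Estimate P(collision) at configuration q as the fraction of
    colliding outcomes among the k nearest previously checked configs.

    checked: list of (configuration tuple, collided bool) results.
    """
    if not checked:
        return 0.5  # uninformed prior before any collision tests
    nearest = sorted(checked, key=lambda item: math.dist(item[0], q))[:k]
    return sum(collided for _, collided in nearest) / len(nearest)
```

A planner can consult such a belief to skip expensive collision checks where the estimate is confidently free, or to check the most informative configurations first.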

Sample-Efficient Learning of Nonprehensile Manipulation Policies via Physics-Based Informed State Distributions

no code implementations • 24 Oct 2018 • Lerrel Pinto, Aditya Mandalika, Brian Hou, Siddhartha Srinivasa

This paper proposes a sample-efficient yet simple approach to learning closed-loop policies for nonprehensile manipulation.

Balancing Shared Autonomy with Human-Robot Communication

no code implementations • 20 May 2018 • Rosario Scalise, Yonatan Bisk, Maxwell Forbes, Daqing Yi, Yejin Choi, Siddhartha Srinivasa

Robotic agents that share autonomy with a human should leverage human domain knowledge and account for the human's preferences when completing a task.

Recurrent Predictive State Policy Networks

2 code implementations • ICML 2018 • Ahmed Hefny, Zita Marinho, Wen Sun, Siddhartha Srinivasa, Geoffrey Gordon

Predictive state policy networks consist of a recursive filter, which keeps track of a belief about the state of the environment, and a reactive policy that directly maps beliefs to actions, to maximize the cumulative reward.

OpenAI Gym
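The two-part structure described above — a recursive filter that tracks a belief about the environment, plus a reactive policy mapping beliefs directly to actions — can be sketched with hand-set linear weights (learning the weights, as the paper does, is omitted; all names are illustrative):

```python
def make_rpsp(filter_w, policy_w):
    """Two-part policy: a recursive filter that updates a belief from each
    observation, and a reactive policy that maps the belief to an action."""
    def step(belief, obs):
        # Recursive filter: linear belief update driven by the observation
        new_belief = [sum(w * b for w, b in zip(row, belief)) + obs
                      for row in filter_w]
        # Reactive policy: the action is a direct linear readout of the belief
        action = sum(w * b for w, b in zip(policy_w, new_belief))
        return new_belief, action
    return step
```

The filter carries state between calls (the returned belief is fed back in), while the policy itself stays memoryless, which is the division of labor the abstract describes.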

Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

no code implementations • 12 Jan 2018 • Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, Siddhartha Srinivasa

The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term.

Decision Making
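Point (i), inferring a teammate's trust through interaction, can be sketched with a Beta-Bernoulli belief over a scalar trust level, updated by whether the human accepts the robot's recommended action (a simplifying assumption for illustration; the trust-POMDP uses a richer dynamic model):

```python
def update_trust(alpha, beta, human_accepted):
    """Beta-Bernoulli update of a scalar trust belief: count how often the
    human accepts (follows) the robot's recommended action."""
    return (alpha + 1, beta) if human_accepted else (alpha, beta + 1)

def trust_mean(alpha, beta):
    """Posterior mean of the inferred trust level."""
    return alpha / (alpha + beta)
```

Starting from a uniform Beta(1, 1) prior, repeated acceptances push the inferred trust up and a rejection pulls it back down, giving the robot a quantity to reason over when choosing actions.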

Bayesian Active Edge Evaluation on Expensive Graphs

no code implementations • 20 Nov 2017 • Sanjiban Choudhury, Siddhartha Srinivasa, Sebastian Scherer

We are interested in planning algorithms that actively infer the underlying structure of the valid configuration space during planning in order to find solutions with minimal effort.

Active Learning · Motion Planning

Near-Optimal Edge Evaluation in Explicit Generalized Binomial Graphs

1 code implementation • NeurIPS 2017 • Sanjiban Choudhury, Shervin Javdani, Siddhartha Srinivasa, Sebastian Scherer

By leveraging this property, we are able to significantly reduce computational complexity from exponential to linear in the number of edges.

Robotics
