Search Results for author: Ajay Mandlekar

Found 24 papers, 7 papers with code

Voyager: An Open-Ended Embodied Agent with Large Language Models

1 code implementation • 25 May 2023 • Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar

We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention.
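The agent described above follows a propose-write-verify loop whose successful programs accumulate in a skill library. A minimal sketch of that control flow, with entirely illustrative `llm`/`env` interfaces (not the released Voyager API):

```python
def voyager_loop(llm, env, skills, steps=3):
    """Sketch of a Voyager-style lifelong-learning loop: an LLM proposes
    a task from the current state and known skills, writes a program for
    it, refines on failure, and verified programs join the skill library.
    All method names (propose_task, write_program, refine, run) are
    hypothetical stand-ins."""
    for _ in range(steps):
        task = llm.propose_task(env.state(), list(skills))
        program = llm.write_program(task, skills)
        success, feedback = env.run(program)
        while not success:
            # iterative prompting: feed execution feedback back to the LLM
            program = llm.refine(program, feedback)
            success, feedback = env.run(program)
        skills[task] = program  # growing, reusable skill library
    return skills
```

The growing `skills` dict is what makes the loop open-ended: later task proposals condition on everything learned so far.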

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation

1 code implementation • 6 Aug 2021 • Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín

Based on the study, we derive a series of lessons including the sensitivity to different algorithmic design choices, the dependence on the quality of the demonstrations, and the variability based on the stopping criteria due to the different objectives in training and evaluation.

Imitation Learning reinforcement-learning +2

MoCoDA: Model-based Counterfactual Data Augmentation

1 code implementation • 20 Oct 2022 • Silviu Pitis, Elliot Creager, Ajay Mandlekar, Animesh Garg

To this end, we show that (1) known local structure in the environment transitions is sufficient for an exponential reduction in the sample complexity of training a dynamics model, and (2) a locally factored dynamics model provably generalizes out-of-distribution to unseen states and actions.

Counterfactual Data Augmentation +2
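The local-factorization claim above can be made concrete with a toy sketch: if disjoint slices of the state evolve independently in some region, slices from different observed transitions can be recombined into new, valid counterfactual transitions. This is a simplified illustration of the idea (actions omitted, a fixed dimension split assumed), not the paper's implementation:

```python
import numpy as np

def factored_augment(transitions, split):
    """Counterfactual augmentation under an assumed local factorization:
    state dims [0, split) and [split, end) evolve independently, so the
    front of one observed (s, s') pair can be combined with the back of
    another to synthesize a new transition the dynamics would permit."""
    aug = []
    for s1, ns1 in transitions:
        for s2, ns2 in transitions:
            s = np.concatenate([s1[:split], s2[split:]])
            ns = np.concatenate([ns1[:split], ns2[split:]])
            aug.append((s, ns))
    return aug
```

From n observed transitions this yields n² synthetic ones, which is the source of the exponential sample-complexity reduction the abstract alludes to.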

RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation

no code implementations • 7 Nov 2018 • Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei

Imitation Learning has empowered recent advances in learning robotic manipulation tasks by addressing shortcomings of Reinforcement Learning such as exploration and reward specification.

Imitation Learning

Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity

no code implementations • 11 Nov 2019 • Ajay Mandlekar, Jonathan Booher, Max Spero, Albert Tung, Anchit Gupta, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei

We evaluate the quality of our platform, the diversity of demonstrations in our dataset, and the utility of our dataset via quantitative and qualitative analysis.

Robot Manipulation

IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data

no code implementations • 13 Nov 2019 • Ajay Mandlekar, Fabio Ramos, Byron Boots, Silvio Savarese, Li Fei-Fei, Animesh Garg, Dieter Fox

For simple short-horizon manipulation tasks with modest variation in task instances, offline learning from a small set of demonstrations can produce controllers that successfully solve the task.

Robot Manipulation

Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations

no code implementations • 13 Mar 2020 • Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Silvio Savarese, Li Fei-Fei

In the second stage of GTI, we collect a small set of rollouts from the unconditioned stochastic policy of the first stage, and train a goal-directed agent to generalize to novel start and goal configurations.

Imitation Learning

Controlling Assistive Robots with Learned Latent Actions

no code implementations • 20 Sep 2019 • Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh

Our insight is that we can make assistive robots easier for humans to control by leveraging latent actions.

Robotics
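The latent-action insight above is that a low-dimensional human input (say, a 2-DoF joystick) can be decoded into a high-DoF robot action. A hedged PCA-style stand-in for the learned latent space, assuming a matrix of demonstrated actions (the paper uses learned autoencoders; this linear sketch only illustrates the interface):

```python
import numpy as np

def fit_latent_decoder(actions, latent_dim=2):
    """Fit a linear decoder from a low-dim latent z to a high-DoF robot
    action: mean of demonstrated actions plus the top principal
    directions. A simplified stand-in for the paper's learned latent
    action space."""
    mean = actions.mean(axis=0)
    _, _, vt = np.linalg.svd(actions - mean, full_matrices=False)
    basis = vt[:latent_dim]            # (latent_dim, action_dim)

    def decode(z):
        # z: (latent_dim,) human input, e.g. joystick deflection
        return mean + z @ basis
    return decode
```

Decoding the zero latent returns the mean demonstrated action; moving the joystick traces the dominant directions of demonstrated motion.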

Human-in-the-Loop Imitation Learning using Remote Teleoperation

no code implementations • 12 Dec 2020 • Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese

We develop a simple and effective algorithm to train the policy iteratively on new data collected by the system that encourages the policy to learn how to traverse bottlenecks through the interventions.

Imitation Learning Robot Manipulation
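The iterative scheme described above (train on intervention data so the policy learns to traverse bottlenecks) can be sketched as a single collection-and-retraining round. All interfaces here (`env`, `human`, `train`) are illustrative stand-ins, not the paper's system:

```python
def intervention_training_round(policy, env, human, train, horizon=200):
    """One round of human-gated data collection: run the learner, let
    the human take over near bottlenecks, keep only the intervention
    state-action pairs, then retrain the policy on them. Interfaces
    are hypothetical."""
    data = []
    obs = env.reset()
    for _ in range(horizon):
        if human.wants_control(obs):
            act = human.act(obs)
            data.append((obs, act))  # learn only from corrections
        else:
            act = policy(obs)
        obs, done = env.step(act)
        if done:
            obs = env.reset()
    return train(policy, data)
```

Keeping only intervention segments concentrates supervision exactly where the current policy fails, which is the mechanism the snippet credits for traversing bottlenecks.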

Learning Multi-Arm Manipulation Through Collaborative Teleoperation

no code implementations • 12 Dec 2020 • Albert Tung, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese

To address these challenges, we present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms and collect demonstrations for multi-arm tasks.

Imitation Learning

Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control

no code implementations • 28 Feb 2021 • Chen Wang, Rui Wang, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Danfei Xu

Key to such capability is hand-eye coordination, a cognitive ability that enables humans to adaptively direct their movements at task-relevant objects and be invariant to the objects' absolute spatial location.

Imitation Learning Zero-shot Generalization

S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning

no code implementations • 10 Mar 2021 • Samarth Sinha, Ajay Mandlekar, Animesh Garg

Offline reinforcement learning proposes to learn policies from large collected datasets without interacting with the physical environment.

Autonomous Driving D4RL +6
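Per its title, the paper's self-supervision amounts to simple state-space augmentations applied to the offline dataset. A hedged numpy sketch of one such scheme, zero-mean Gaussian noise on observations, assuming a dict-of-arrays batch layout and a guessed noise scale (neither is from the paper):

```python
import numpy as np

def augment_states(batch, scale=1e-3, rng=None):
    """Perturb only the observations in an offline RL batch with
    zero-mean Gaussian noise, leaving actions/rewards untouched.
    The batch layout and noise scale are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    noisy = dict(batch)
    noisy["obs"] = batch["obs"] + rng.normal(0.0, scale, batch["obs"].shape)
    return noisy
```

Since the agent never interacts with the environment, such cheap augmentations are one of the few ways to enlarge the effective support of a fixed offline dataset.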

Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation

no code implementations • 9 Dec 2021 • Josiah Wong, Albert Tung, Andrey Kurenkov, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Roberto Martín-Martín

Doing this is challenging for two reasons: on the data side, current interfaces make collecting high-quality human demonstrations difficult, and on the learning side, policies trained on limited data can suffer from covariate shift when deployed.

Imitation Learning Navigate

Learning and Retrieval from Prior Data for Skill-based Imitation Learning

no code implementations • 20 Oct 2022 • Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu

Imitation learning offers a promising path for robots to learn general-purpose behaviors, but traditionally has exhibited limited scalability due to high data supervision requirements and brittle generalization.

Data Augmentation Imitation Learning +2

Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks

no code implementations • 11 Nov 2022 • Kuan Fang, Toki Migimatsu, Ajay Mandlekar, Li Fei-Fei, Jeannette Bohg

ATR selects suitable tasks, which consist of an initial environment state and manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks.

Imitating Task and Motion Planning with Visuomotor Transformers

no code implementations • 25 May 2023 • Murtaza Dalal, Ajay Mandlekar, Caelan Garrett, Ankur Handa, Ruslan Salakhutdinov, Dieter Fox

In this work, we show that the combination of large-scale datasets generated by TAMP supervisors and flexible Transformer models to fit them is a powerful paradigm for robot manipulation.

Imitation Learning Motion Planning +2

Human-in-the-Loop Task and Motion Planning for Imitation Learning

no code implementations • 24 Oct 2023 • Ajay Mandlekar, Caelan Garrett, Danfei Xu, Dieter Fox

Finally, we collected 2.1K demos with HITL-TAMP across 12 contact-rich, long-horizon tasks and show that the system often produces near-perfect agents.

Imitation Learning Motion Planning +1

MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations

no code implementations • 26 Oct 2023 • Ajay Mandlekar, Soroush Nasiriany, Bowen Wen, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, Dieter Fox

Imitation learning from a large set of human demonstrations has proved to be an effective paradigm for building capable robot agents.

Imitation Learning
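The core geometric idea behind this style of demonstration-based data generation is retargeting: express each end-effector pose of a source demo relative to the object it manipulates, then re-anchor that relative motion to the object's pose in a new scene. A simplified homogeneous-transform sketch of that step (not the released MimicGen system, which also handles segmentation, interpolation, and execution checks):

```python
import numpy as np

def retarget_demo(eef_poses, src_obj, tgt_obj):
    """Re-anchor a demonstrated end-effector trajectory to a new object
    pose. All poses are 4x4 homogeneous transforms in the world frame;
    only this single retargeting step is sketched."""
    # each pose expressed in the source object's frame
    rel = [np.linalg.inv(src_obj) @ T for T in eef_poses]
    # the same relative motion, anchored at the target object's pose
    return [tgt_obj @ T for T in rel]
```

When `src_obj == tgt_obj` this returns the demo unchanged; otherwise the whole trajectory rigidly follows the object, which is how one human demo can seed many synthetic ones.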

NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors

no code implementations • 2 Nov 2023 • Shuo Cheng, Caelan Garrett, Ajay Mandlekar, Danfei Xu

Developing intelligent robots for complex manipulation tasks in household and factory settings remains challenging due to long-horizon tasks, contact-rich manipulation, and the need to generalize across a wide variety of object shapes and scene layouts.

Motion Planning Object +1
