no code implementations • 20 Nov 2024 • Kenneth Shaw, Yulong Li, Jiahui Yang, Mohan Kumar Srirama, Ray Liu, Haoyu Xiong, Russell Mendonca, Deepak Pathak
To address this, we introduce Bidex, an extremely dexterous, low-cost, low-latency, and portable bimanual teleoperation system that relies on motion-capture gloves and teacher arms.
no code implementations • 30 Sep 2024 • Russell Mendonca, Emmanuel Panov, Bernadette Bucher, Jiuguang Wang, Deepak Pathak
We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision.
no code implementations • 9 Sep 2024 • Murtaza Dalal, Jiahui Yang, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak
We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments with randomized poses, scenes and obstacles in the real world, demonstrating improvements of 23%, 17% and 79% in motion planning success rate over state-of-the-art sampling-, optimization- and learning-based planning methods.
1 code implementation • 11 Jul 2024 • Mihir Prabhudesai, Russell Mendonca, Zheyang Qin, Katerina Fragkiadaki, Deepak Pathak
We show that backpropagating gradients from these reward models to a video diffusion model can allow for compute- and sample-efficient alignment of the video diffusion model.
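A minimal sketch of this reward-gradient idea, with a one-parameter "generator" standing in for the video diffusion model and a hand-derived gradient in place of autodiff; all names, values, and the reward function are illustrative, not taken from the paper:

```python
# Toy sketch of reward-gradient alignment: a differentiable reward's
# gradient is backpropagated into the generator's parameter, which is
# updated by gradient ascent. Everything here is a made-up stand-in.

def generate(theta):
    # Stand-in for sampling from the diffusion model.
    return theta

def reward(x, target=3.0):
    # Differentiable reward model: higher when the sample is near `target`.
    return -(x - target) ** 2

def reward_grad(x, target=3.0):
    # d(reward)/dx, derived by hand for this toy reward.
    return -2.0 * (x - target)

def align(theta=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        x = generate(theta)
        # Chain rule: d(reward)/d(theta) = d(reward)/dx * dx/d(theta) = grad * 1
        theta += lr * reward_grad(x)
    return theta

print(round(align(), 3))  # converges toward the reward maximum at 3.0
```

In the real method the generator is a deep diffusion model and the gradient flows through the sampling chain via automatic differentiation, but the update rule has the same shape.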
no code implementations • 25 Jan 2024 • Haoyu Xiong, Russell Mendonca, Kenneth Shaw, Deepak Pathak
We also develop a low-cost mobile manipulation hardware platform capable of safe and autonomous online adaptation in unstructured environments, with a cost of around 20,000 USD.
no code implementations • 5 Sep 2023 • Kevin Gmelin, Shikhar Bahl, Russell Mendonca, Deepak Pathak
Agents that are aware of the separation between themselves and their environments can leverage this understanding to form effective representations of visual input.
no code implementations • 21 Aug 2023 • Russell Mendonca, Shikhar Bahl, Deepak Pathak
We propose an approach for robots to efficiently learn manipulation skills using only a handful of real-world interaction trajectories from many different settings.
no code implementations • CVPR 2023 • Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak
Utilizing internet videos of human behavior, we train a visual affordance model that estimates where and how in the scene a human is likely to interact.
no code implementations • 13 Feb 2023 • Russell Mendonca, Shikhar Bahl, Deepak Pathak
Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the data collected, with minimal human supervision.
2 code implementations • NeurIPS 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak
How can artificial agents learn to solve many diverse tasks in complex visual environments in the absence of any supervision?
no code implementations • ICML Workshop URL 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak
How can an artificial agent learn to solve a wide range of tasks in a complex visual environment in the absence of external supervision?
no code implementations • 12 Jun 2020 • Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine
Our method is based on a simple insight: dynamics models can be adapted efficiently and consistently with off-policy data, more easily than policies and value functions.
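The insight can be illustrated with a toy regression: a dynamics model is just a supervised fit to observed transitions, so it can be trained on data from any policy. The dynamics, coefficients, and function names below are all hypothetical, pure-Python stand-ins:

```python
import random

# Toy illustration: fit a linear dynamics model s' = a*s + b*u from
# transitions collected by an arbitrary (off-)policy. The true
# coefficients (0.9, 0.5) are unknown to the learner.

def true_dynamics(s, u):
    return 0.9 * s + 0.5 * u

def collect_transitions(n=200, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        s = rng.uniform(-1, 1)
        u = rng.uniform(-1, 1)  # actions from an arbitrary policy
        data.append((s, u, true_dynamics(s, u)))
    return data

def fit_linear_model(data):
    # Least-squares fit of s' ~ a*s + b*u via the 2x2 normal equations.
    sss = sum(s * s for s, u, sp in data)
    suu = sum(u * u for s, u, sp in data)
    ssu = sum(s * u for s, u, sp in data)
    ssp = sum(s * sp for s, u, sp in data)
    sup = sum(u * sp for s, u, sp in data)
    det = sss * suu - ssu * ssu
    a = (ssp * suu - sup * ssu) / det
    b = (sss * sup - ssu * ssp) / det
    return a, b

a, b = fit_linear_model(collect_transitions())
print(round(a, 3), round(b, 3))  # recovers 0.9 and 0.5
```

Because the regression targets depend only on the transition data, not on which policy produced it, the fit is consistent under off-policy data, whereas policy and value-function updates are sensitive to the data-collecting distribution.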
no code implementations • 25 Sep 2019 • Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine
Reinforcement learning algorithms can acquire policies for complex tasks automatically; however, the number of samples required to learn a diverse set of skills can be prohibitively large.
no code implementations • ICLR 2019 • Rosen Kralev, Russell Mendonca, Alvin Zhang, Tianhe Yu, Abhishek Gupta, Pieter Abbeel, Sergey Levine, Chelsea Finn
Meta-reinforcement learning aims to learn fast reinforcement learning (RL) procedures that can be applied to new tasks or environments.
no code implementations • NeurIPS 2019 • Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn
Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch.
2 code implementations • NeurIPS 2018 • Abhishek Gupta, Russell Mendonca, Yuxuan Liu, Pieter Abbeel, Sergey Levine
Exploration is a fundamental challenge in reinforcement learning (RL).