Search Results for author: Russell Mendonca

Found 16 papers, 3 papers with code

Bimanual Dexterity for Complex Tasks

no code implementations20 Nov 2024 Kenneth Shaw, Yulong Li, Jiahui Yang, Mohan Kumar Srirama, Ray Liu, Haoyu Xiong, Russell Mendonca, Deepak Pathak

To address this, we introduce Bidex, an extremely dexterous, low-cost, low-latency and portable bimanual dexterous teleoperation system which relies on motion capture gloves and teacher arms.


Continuously Improving Mobile Manipulation with Autonomous Real-World RL

no code implementations30 Sep 2024 Russell Mendonca, Emmanuel Panov, Bernadette Bucher, Jiuguang Wang, Deepak Pathak

We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision.

Neural MP: A Generalist Neural Motion Planner

no code implementations9 Sep 2024 Murtaza Dalal, Jiahui Yang, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak

We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments with randomized poses, scenes, and obstacles in the real world, demonstrating improvements of 23%, 17%, and 79% in motion planning success rate over state-of-the-art sampling-, optimization-, and learning-based planning methods.

Motion Planning

Video Diffusion Alignment via Reward Gradients

1 code implementation11 Jul 2024 Mihir Prabhudesai, Russell Mendonca, Zheyang Qin, Katerina Fragkiadaki, Deepak Pathak

We show that backpropagating gradients from these reward models to a video diffusion model can allow for compute and sample efficient alignment of the video diffusion model.
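The core idea, stripped to its essentials, is gradient ascent on a differentiable reward. The toy sketch below is purely illustrative (not the paper's code): `x` stands in for a generated video and the quadratic `reward` stands in for a learned reward model; in the paper, the gradient is further backpropagated through the diffusion sampler into the model's weights.

```python
import numpy as np

# Illustrative sketch only: ascend the gradient of a differentiable
# reward to align a "sample" with what the reward model prefers.
target = np.array([1.0, 2.0, 3.0])   # hypothetical reward-model optimum

def reward(x):
    return -np.sum((x - target) ** 2)

def reward_grad(x):                   # analytic d(reward)/d(x)
    return -2.0 * (x - target)

x = np.zeros(3)                       # initial "generated sample"
before = reward(x)
for _ in range(100):
    x += 0.1 * reward_grad(x)         # ascend the reward gradient
after = reward(x)
print(before, after)                  # reward rises from -14 toward 0
```

The same update applies when `x` is produced by a generator: the chain rule carries `reward_grad` back into the generator's parameters, which is what makes the alignment compute- and sample-efficient relative to reward-only (black-box) methods.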

Adaptive Mobile Manipulation for Articulated Objects In the Open World

no code implementations25 Jan 2024 Haoyu Xiong, Russell Mendonca, Kenneth Shaw, Deepak Pathak

We also develop a low-cost mobile manipulation hardware platform capable of safe and autonomous online adaptation in unstructured environments, at a cost of around 20,000 USD.

Efficient RL via Disentangled Environment and Agent Representations

no code implementations5 Sep 2023 Kevin Gmelin, Shikhar Bahl, Russell Mendonca, Deepak Pathak

Agents that are aware of the separation between themselves and their environments can leverage this understanding to form effective representations of visual input.

Structured World Models from Human Videos

no code implementations21 Aug 2023 Russell Mendonca, Shikhar Bahl, Deepak Pathak

We propose an approach for robots to efficiently learn manipulation skills using only a handful of real-world interaction trajectories from many different settings.

Affordances from Human Videos as a Versatile Representation for Robotics

no code implementations CVPR 2023 Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak

Utilizing internet videos of human behavior, we train a visual affordance model that estimates where and how in the scene a human is likely to interact.

Imitation Learning

ALAN: Autonomously Exploring Robotic Agents in the Real World

no code implementations13 Feb 2023 Russell Mendonca, Shikhar Bahl, Deepak Pathak

Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the data collected, with minimal human supervision.

Discovering and Achieving Goals via World Models

2 code implementations NeurIPS 2021 Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak

How can artificial agents learn to solve many diverse tasks in complex visual environments in the absence of any supervision?

Discovering and Achieving Goals with World Models

no code implementations ICML Workshop URL 2021 Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak

How can an artificial agent learn to solve a wide range of tasks in a complex visual environment in the absence of external supervision?

Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling

no code implementations12 Jun 2020 Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine

Our method is based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, more easily than policies and value functions.
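The insight can be made concrete with a minimal system-identification example (names and setup are illustrative, not the paper's code): a linear dynamics model s' ≈ A·s + B·a can be fit by least squares from any logged (s, a, s') transitions, i.e. off-policy data, without the distribution-shift corrections a policy or value function would need.

```python
import numpy as np

# Minimal sketch: identify dynamics from off-policy transitions.
rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])   # unknown true dynamics
B_true = np.array([[0.5], [1.0]])

# "Replay buffer": transitions collected by a random behavior policy.
S = rng.normal(size=(200, 2))                 # states
Acts = rng.normal(size=(200, 1))              # actions
S_next = S @ A_true.T + Acts @ B_true.T       # next states

# Identify [A B] jointly via least squares on stacked (state, action) inputs.
X = np.hstack([S, Acts])                      # shape (200, 3)
theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T

print(np.allclose(A_hat, A_true))             # True: dynamics recovered
```

Because the regression targets are observed next states rather than returns, the fit is consistent regardless of which policy generated the data; this is the property the method exploits for adaptation under distributional shift.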

Meta Reinforcement Learning, Reinforcement Learning +2

Consistent Meta-Reinforcement Learning via Model Identification and Experience Relabeling

no code implementations25 Sep 2019 Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine

Reinforcement learning algorithms can acquire policies for complex tasks automatically; however, the number of samples required to learn a diverse set of skills can be prohibitively large.

Meta Reinforcement Learning, Reinforcement Learning +2

Guided Meta-Policy Search

no code implementations NeurIPS 2019 Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn

Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch.

Continuous Control +6
