Search Results for author: Murtaza Dalal

Found 14 papers, 4 papers with code

Unlocking Generalization for Robotics via Modularity and Scale

no code implementations · 10 Mar 2025 · Murtaza Dalal

Therefore, this thesis seeks to tackle the task of building generalist robot agents by integrating these components into one: combining modularity with large-scale learning for general-purpose robot control.

Scene Generation

Local Policies Enable Zero-shot Long-horizon Manipulation

no code implementations · 29 Oct 2024 · Murtaza Dalal, Min Liu, Walter Talbott, Chen Chen, Deepak Pathak, Jian Zhang, Ruslan Salakhutdinov

We transfer our local policies from simulation to reality and observe they can solve unseen long-horizon manipulation tasks with up to 8 stages with significant pose, object and scene configuration variation.

Motion Planning

Neural MP: A Generalist Neural Motion Planner

no code implementations · 9 Sep 2024 · Murtaza Dalal, Jiahui Yang, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak

We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments with randomized poses, scenes and obstacles, in the real world, demonstrating an improvement of 23%, 17% and 79% in motion planning success rate over state-of-the-art sampling-, optimization- and learning-based planning methods.

Motion Planning

Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks

no code implementations · 2 May 2024 · Murtaza Dalal, Tarun Chiruvolu, Devendra Chaplot, Ruslan Salakhutdinov

Large Language Models (LLMs) have been shown to be capable of performing high-level planning for long-horizon robotics tasks, yet existing methods require access to a pre-defined skill library (e.g. picking, placing, pulling, pushing, navigating).

Language Modeling · Language Modelling +2

Imitating Task and Motion Planning with Visuomotor Transformers

no code implementations · 25 May 2023 · Murtaza Dalal, Ajay Mandlekar, Caelan Garrett, Ankur Handa, Ruslan Salakhutdinov, Dieter Fox

In this work, we show that the combination of large-scale datasets generated by TAMP supervisors and flexible Transformer models to fit them is a powerful paradigm for robot manipulation.

Imitation Learning · Motion Planning +2

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets

6 code implementations · 16 Jun 2020 · Ashvin Nair, Abhishek Gupta, Murtaza Dalal, Sergey Levine

If we can instead allow RL algorithms to effectively use previously collected data to aid the online learning process, such applications could be made substantially more practical: the prior data would provide a starting point that mitigates challenges due to exploration and sample complexity, while the online training enables the agent to perfect the desired skill.

reinforcement-learning · Reinforcement Learning +1
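The offline-to-online recipe described above trains the actor by advantage-weighted behavioral cloning on both prior and online data. A minimal sketch of that weighting, assuming discrete per-sample advantages (the function name and the temperature value `lam` are illustrative, not taken from the paper's code):

```python
import numpy as np

def awac_policy_weights(advantages, lam=1.0):
    """Advantage-weighted regression weights: w_i ∝ exp(A_i / lam).

    AWAC-style actor updates fit the policy to actions in the replay
    buffer, upweighting actions whose estimated advantage is high, so
    the offline data bootstraps learning without pure imitation.
    """
    w = np.exp(advantages / lam)
    return w / w.sum()  # normalize so the weights form a distribution

# Three hypothetical transitions with estimated advantages -1, 0, 2.
adv = np.array([-1.0, 0.0, 2.0])
w = awac_policy_weights(adv)
# Higher-advantage actions receive exponentially larger cloning weight.
```

The temperature `lam` trades off between pure behavioral cloning (large `lam`, near-uniform weights) and greedy advantage maximization (small `lam`).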

Scalable Multi-Task Imitation Learning with Autonomous Improvement

no code implementations · 25 Feb 2020 · Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, Chelsea Finn

In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation.

Imitation Learning · reinforcement-learning +2

Autoregressive Models: What Are They Good For?

no code implementations · 17 Oct 2019 · Murtaza Dalal, Alexander C. Li, Rohan Taori

Autoregressive (AR) models have become a popular tool for unsupervised learning, achieving state-of-the-art log-likelihood estimates.

Translation

Visual Reinforcement Learning with Imagined Goals

2 code implementations · NeurIPS 2018 · Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires.

reinforcement-learning · Reinforcement Learning +2
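The "imagined goals" idea can be illustrated by its reward: goals are sampled from a learned latent space (e.g. a VAE over images), and the reward is the negative distance between the latent encodings of the current observation and the goal. A minimal sketch, where the function name and the Euclidean metric are assumptions for illustration:

```python
import numpy as np

def latent_goal_reward(z_obs, z_goal):
    """Self-supervised reward in latent space: negative distance
    between the encoded observation and an imagined goal latent.
    Dense and computable without any hand-designed reward function.
    """
    return -np.linalg.norm(z_obs - z_goal)

# Hypothetical 2-D latents; reward is 0 only when the goal is reached.
r = latent_goal_reward(np.array([0.2, -0.1]), np.array([0.0, 0.0]))
```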

Composable Deep Reinforcement Learning for Robotic Manipulation

1 code implementation · 19 Mar 2018 · Tuomas Haarnoja, Vitchyr Pong, Aurick Zhou, Murtaza Dalal, Pieter Abbeel, Sergey Levine

Second, we show that policies learned with soft Q-learning can be composed to create new policies, and that the optimality of the resulting policy can be bounded in terms of the divergence between the composed policies.

Deep Reinforcement Learning · Q-Learning +2
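One way to picture the composition result above: each skill's soft Q-function induces a maximum-entropy policy pi(a) ∝ exp(Q(a)/alpha), and skills are composed by averaging their Q-functions. A toy discrete-action sketch of that mechanism (the paper itself works with continuous actions via sampling networks; all names and values here are illustrative):

```python
import numpy as np

def soft_policy(q_values, alpha=1.0):
    """Max-entropy policy induced by a soft Q-function over a
    discrete action set: pi(a) ∝ exp(Q(a) / alpha)."""
    logits = q_values / alpha
    logits = logits - logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def composed_policy(q_list, alpha=1.0):
    """Compose skills by averaging their soft Q-functions, then
    taking the policy induced by the averaged Q-function."""
    q_c = np.mean(q_list, axis=0)
    return soft_policy(q_c, alpha)

# Two hypothetical skills with opposing per-action preferences:
q_reach = np.array([1.0, 0.0, -1.0])
q_avoid = np.array([-1.0, 0.0, 1.0])
pi = composed_policy([q_reach, q_avoid])
# With exactly opposing Q-values the composed policy is uniform.
```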

Temporal Difference Models: Model-Free Deep RL for Model-Based Control

no code implementations · ICLR 2018 · Vitchyr Pong, Shixiang Gu, Murtaza Dalal, Sergey Levine

TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods.

continuous-control · Continuous Control +4
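A TDM is a goal- and horizon-conditioned Q-function Q(s, a, g, tau): with tau steps remaining it bootstraps on tau - 1, and at the horizon the target is the negative distance to the goal, which is what lets it exploit full state-transition information. A hedged sketch of that TD target (the function signature and the Euclidean distance are illustrative assumptions):

```python
import numpy as np

def tdm_target(next_state, goal, tau, q_next_max, gamma=1.0):
    """TD target for a temporal difference model Q(s, a, g, tau).

    tau == 0: the episode of "tau steps to reach g" ends, so the
              target is the negative distance from s' to the goal.
    tau  > 0: bootstrap on the best action value with tau - 1 steps
              remaining, i.e. max_a' Q(s', a', g, tau - 1).
    """
    if tau == 0:
        return -np.linalg.norm(next_state - goal)
    return gamma * q_next_max

# At the horizon, reaching the goal exactly yields a target of 0.
t0 = tdm_target(np.array([1.0, 0.0]), np.array([1.0, 0.0]), 0, 5.0)
```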
