Search Results for author: Mustafa Mukadam

Found 15 papers, 9 papers with code

A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation

1 code implementation • 9 Nov 2021 • Bernardo Aceituno, Alberto Rodriguez, Shubham Tulsiani, Abhinav Gupta, Mustafa Mukadam

Specifying tasks with videos is a powerful technique towards acquiring novel and general robot skills.

No RL, No Simulation: Learning to Navigate without Navigating

1 code implementation • NeurIPS 2021 • Meera Hahn, Devendra Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, Abhinav Gupta

Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards.

Reinforcement Learning

Learning Complex Geometric Structures from Data with Deep Riemannian Manifolds

no code implementations • 29 Sep 2021 • Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos

We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.

Where2Act: From Pixels to Actions for Articulated 3D Objects

1 code implementation • ICCV 2021 • Kaichun Mo, Leonidas Guibas, Mustafa Mukadam, Abhinav Gupta, Shubham Tulsiani

One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment.

Learning Tactile Models for Factor Graph-based Estimation

no code implementations • 7 Dec 2020 • Paloma Sodhi, Michael Kaess, Mustafa Mukadam, Stuart Anderson

In order to incorporate tactile measurements in the graph, we need local observation models that can map high-dimensional tactile images onto a low-dimensional state space.

Object Tracking

Neural Dynamic Policies for End-to-End Sensorimotor Learning

no code implementations • NeurIPS 2020 • Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak

We show that NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks for both imitation and reinforcement learning setups.

Imitation Learning • Reinforcement Learning

Batteries, camera, action! Learning a semantic control space for expressive robot cinematography

no code implementations • 19 Nov 2020 • Rogerio Bonatti, Arthur Bucker, Sebastian Scherer, Mustafa Mukadam, Jessica Hodgins

First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip.

Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping

no code implementations • 7 Oct 2019 • Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan Ratliff

RMPfusion supplements RMPflow with weight functions that can hierarchically reshape the Lyapunov functions of the subtask RMPs according to the current configuration of the robot and environment.

Imitation Learning

Multi-Objective Policy Generation for Multi-Robot Systems Using Riemannian Motion Policies

1 code implementation • 14 Feb 2019 • Anqi Li, Mustafa Mukadam, Magnus Egerstedt, Byron Boots

We propose a collection of RMPs for simple multi-robot tasks that can be used for building controllers for more complicated tasks.

Robotics

RMPflow: A Computational Graph for Automatic Motion Policy Generation

1 code implementation • 16 Nov 2018 • Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan Ratliff

We develop a novel policy synthesis algorithm, RMPflow, based on geometrically consistent transformations of Riemannian Motion Policies (RMPs).

Robotics • Systems and Control

Continuous-Time Gaussian Process Motion Planning via Probabilistic Inference

1 code implementation • 24 Jul 2017 • Mustafa Mukadam, Jing Dong, Xinyan Yan, Frank Dellaert, Byron Boots

We benchmark our algorithms against several sampling-based and trajectory optimization-based motion planning algorithms on planning problems in multiple environments.

Robotics
