Search Results for author: Mustafa Mukadam

Found 23 papers, 14 papers with code

Continuous-Time Gaussian Process Motion Planning via Probabilistic Inference

1 code implementation · 24 Jul 2017 · Mustafa Mukadam, Jing Dong, Xinyan Yan, Frank Dellaert, Byron Boots

We benchmark our algorithms against several sampling-based and trajectory optimization-based motion planning algorithms on planning problems in multiple environments.

Robotics

RMPflow: A Computational Graph for Automatic Motion Policy Generation

1 code implementation · 16 Nov 2018 · Ching-An Cheng, Mustafa Mukadam, Jan Issac, Stan Birchfield, Dieter Fox, Byron Boots, Nathan Ratliff

We develop a novel policy synthesis algorithm, RMPflow, based on geometrically consistent transformations of Riemannian Motion Policies (RMPs).

Robotics · Systems and Control
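The RMPflow excerpt above refers to the graph's two core operators: child motion policies, each given as a desired acceleration with an associated Riemannian metric, are pulled back through the Jacobians of their task maps and summed, and the result is resolved into a single acceleration. A minimal numpy sketch of these two operators, with function names of my own choosing and the curvature terms involving the time derivative of the Jacobians omitted:

```python
import numpy as np

def pullback(child_rmps, jacobians):
    """Combine child RMPs, given as (desired acceleration a_i, metric M_i) pairs in
    their own task spaces, into the parent space through the task-map Jacobians J_i.
    Simplified: the J_dot @ q_dot curvature correction is omitted."""
    f = sum(J.T @ M @ a for (a, M), J in zip(child_rmps, jacobians))
    M = sum(J.T @ M_i @ J for (_, M_i), J in zip(child_rmps, jacobians))
    return f, M

def resolve(f, M):
    """Turn the combined force-like term and metric back into one acceleration."""
    return np.linalg.pinv(M) @ f
```

Stacking these operators along a tree of task maps is what gives RMPflow its computational-graph structure.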

Multi-Objective Policy Generation for Multi-Robot Systems Using Riemannian Motion Policies

1 code implementation · 14 Feb 2019 · Anqi Li, Mustafa Mukadam, Magnus Egerstedt, Byron Boots

We propose a collection of RMPs for simple multi-robot tasks that can be used for building controllers for more complicated tasks.

Robotics

Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping

no code implementations · 7 Oct 2019 · Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan Ratliff

RMPfusion supplements RMPflow with weight functions that can hierarchically reshape the Lyapunov functions of the subtask RMPs according to the current configuration of the robot and environment.

Imitation Learning
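Read together with the pullback sketch above, the excerpt points at RMPfusion's main change: each child RMP's contribution is scaled by a configuration-dependent weight before combination, and those weight functions can be learned. The following is a hypothetical sketch of the weighted combination only; any extra terms needed for the Lyapunov analysis the abstract alludes to are not shown:

```python
def weighted_pullback(child_rmps, jacobians, weights):
    """Like pullback above, but each child RMP is scaled by a weight w_i that may
    depend on the robot/environment configuration (e.g. the output of a small
    neural network).  Illustrative only; stability-related terms are omitted."""
    f = sum(w * (J.T @ M @ a) for (a, M), J, w in zip(child_rmps, jacobians, weights))
    M = sum(w * (J.T @ M_i @ J) for (_, M_i), J, w in zip(child_rmps, jacobians, weights))
    return f, M
```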

Batteries, camera, action! Learning a semantic control space for expressive robot cinematography

no code implementations · 19 Nov 2020 · Rogerio Bonatti, Arthur Bucker, Sebastian Scherer, Mustafa Mukadam, Jessica Hodgins

First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip.

Neural Dynamic Policies for End-to-End Sensorimotor Learning

no code implementations · NeurIPS 2020 · Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak

We show that NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks for both imitation and reinforcement learning setups.

Imitation Learning · reinforcement-learning · +1

Learning Tactile Models for Factor Graph-based Estimation

no code implementations · 7 Dec 2020 · Paloma Sodhi, Michael Kaess, Mustafa Mukadam, Stuart Anderson

In order to incorporate tactile measurements in the graph, we need local observation models that can map high-dimensional tactile images onto a low-dimensional state space.

Object · Object Tracking
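The sentence above names the paper's key ingredient: a learned observation model that compresses a high-dimensional tactile image into a low-dimensional measurement that can enter a factor graph. A minimal PyTorch sketch of such an encoder follows; the architecture, dimensions, and the choice of a 3-DoF output are illustrative assumptions rather than the paper's model:

```python
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Hypothetical observation model: maps a tactile image to a low-dimensional
    feature (e.g. a planar relative pose).  The discrepancy between this output and
    the value predicted from the current state estimate can act as a factor residual."""
    def __init__(self, out_dim=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, out_dim)

    def forward(self, tactile_image):  # (batch, 3, H, W)
        return self.head(self.conv(tactile_image))
```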

Where2Act: From Pixels to Actions for Articulated 3D Objects

1 code implementation · ICCV 2021 · Kaichun Mo, Leonidas Guibas, Mustafa Mukadam, Abhinav Gupta, Shubham Tulsiani

One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment.

Learning Complex Geometric Structures from Data with Deep Riemannian Manifolds

no code implementations · 29 Sep 2021 · Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos

We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.
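As one concrete reading of "neural network parameterized Riemannian manifolds", the sketch below parameterizes a metric tensor field on R^d with an MLP whose output is reshaped into a triangular factor, making the metric positive definite by construction. This is a generic construction used here only for illustration; it is not necessarily the one developed in the paper:

```python
import torch
import torch.nn as nn

class NeuralMetric(nn.Module):
    """Metric field G(x) = L(x) L(x)^T + eps*I, with L(x) the lower-triangular
    reshaping of an MLP output, so G(x) is symmetric positive definite everywhere."""
    def __init__(self, dim, hidden=64, eps=1e-4):
        super().__init__()
        self.dim, self.eps = dim, eps
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim * dim))

    def forward(self, x):  # x: (batch, dim)
        L = self.net(x).view(-1, self.dim, self.dim).tril()
        G = L @ L.transpose(-1, -2)
        return G + self.eps * torch.eye(self.dim, device=x.device)
```

Lengths of curves under such a metric are obtained by integrating sqrt(v^T G(x) v) along the curve, which is what lets geometric structure be fit to data with ordinary gradient-based training.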

No RL, No Simulation: Learning to Navigate without Navigating

1 code implementation · NeurIPS 2021 · Meera Hahn, Devendra Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, Abhinav Gupta

Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards.

Navigate Reinforcement Learning (RL)

Theseus: A Library for Differentiable Nonlinear Optimization

1 code implementation · 19 Jul 2022 · Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam

We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision.
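Because Theseus is a library, a tiny end-to-end example helps make "differentiable nonlinear least squares on PyTorch" concrete. The sketch below follows my reading of the library's documented API (th.Variable, th.Vector, th.AutoDiffCostFunction, th.Objective, th.GaussNewton, th.TheseusLayer); treat the exact names and signatures as assumptions to verify against the current docs:

```python
import torch
import theseus as th  # assumed package: theseus-ai

# Toy differentiable least-squares fit: find a such that y ≈ a * x.
x_data = th.Variable(torch.linspace(0, 1, 20).view(1, -1), name="x_data")
y_data = th.Variable(2.0 * x_data.tensor, name="y_data")
a = th.Vector(1, name="a")  # the optimization variable

def residual_fn(optim_vars, aux_vars):
    (a_var,), (x, y) = optim_vars, aux_vars
    return y.tensor - a_var.tensor * x.tensor  # error of dimension 20

objective = th.Objective()
objective.add(th.AutoDiffCostFunction([a], residual_fn, 20, aux_vars=[x_data, y_data]))
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

solution, info = layer.forward({"a": torch.zeros(1, 1)})
print(solution["a"])  # expected to be close to 2.0
```

Because the solve happens inside a TheseusLayer, upstream PyTorch parameters that feed the objective (learned features, cost weights, and so on) receive gradients through the optimizer, which is the end-to-end structured learning the abstract refers to.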

Neural Contact Fields: Tracking Extrinsic Contact with Tactile Sensing

1 code implementation · 17 Oct 2022 · Carolina Higuera, Siyuan Dong, Byron Boots, Mustafa Mukadam

In experiments, we find that Neural Contact Fields are able to localize multiple contact patches without making any assumptions about the geometry of the contact, and capture contact/no-contact transitions for known categories of objects with unseen shapes in unseen environment configurations.

USA-Net: Unified Semantic and Affordance Representations for Robot Memory

no code implementations · 24 Apr 2023 · Benjamin Bolte, Austin Wang, Jimmy Yang, Mustafa Mukadam, Mrinal Kalakrishnan, Chris Paxton

In order for robots to follow open-ended instructions like "go open the brown cabinet over the sink", they require an understanding of both the scene geometry and the semantics of their environment.

Navigate

Decentralization and Acceleration Enables Large-Scale Bundle Adjustment

1 code implementation · 11 May 2023 · Taosha Fan, Joseph Ortiz, Ming Hsiao, Maurizio Monge, Jing Dong, Todd Murphey, Mustafa Mukadam

In this paper, we present a fully decentralized method that alleviates computation and communication bottlenecks to solve arbitrarily large bundle adjustment problems.

TaskMet: Task-Driven Metric Learning for Model Learning

no code implementations · NeurIPS 2023 · Dishank Bansal, Ricky T. Q. Chen, Mustafa Mukadam, Brandon Amos

We propose to take the task loss signal one level deeper than the parameters of the model and use it to learn the parameters of the loss function the model is trained on, which can be done by learning a metric in the prediction space.

Metric Learning · Portfolio Optimization
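The excerpt describes a bilevel setup: the prediction model is trained under a loss defined by a learned metric in prediction space, and that metric is itself updated from the downstream task loss. Below is a toy sketch of the idea with a single unrolled inner gradient step per outer update; it is my own simplification for illustration, not the paper's algorithm:

```python
import torch

torch.manual_seed(0)

# Toy setup: one shared weight vector must predict two targets that disagree,
# while the downstream task only cares about the first target dimension.
X = torch.randn(256, 3)
Y = X @ torch.randn(3, 2)
task_weight = torch.tensor([1.0, 0.0])

w = torch.zeros(3)                        # prediction model: Y_hat[:, j] = X @ w
L_phi = torch.nn.Parameter(torch.eye(2))  # metric factor; A_phi built from L L^T
outer_opt = torch.optim.Adam([L_phi], lr=1e-2)

def predict(w):
    return (X @ w).unsqueeze(1).expand(-1, 2)

for step in range(500):
    A = L_phi @ L_phi.T
    A = A / A.trace() + 1e-3 * torch.eye(2)   # normalize so the metric cannot vanish

    # Inner step: one (unrolled) gradient step on the metric-weighted loss ||e||_A^2.
    w = w.detach().requires_grad_(True)
    e = predict(w) - Y
    inner_loss = torch.einsum("bi,ij,bj->b", e, A, e).mean()
    (grad_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_new = w - 0.05 * grad_w

    # Outer step: the downstream task loss, differentiated through the inner update,
    # adapts the metric so model training emphasizes what the task needs.
    task_loss = (((predict(w_new) - Y) * task_weight) ** 2).mean()
    outer_opt.zero_grad()
    task_loss.backward()
    outer_opt.step()
    w = w_new

print((predict(w).detach() - Y).pow(2).mean(0))  # error expected to concentrate in dim 1
```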

A Touch, Vision, and Language Dataset for Multimodal Alignment

1 code implementation · 20 Feb 2024 · Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg

This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions.

Language Modelling · Text Generation
