Search Results for author: Zhao Mandi

Found 6 papers, 2 papers with code

MD-Splatting: Learning Metric Deformation from 4D Gaussians in Highly Deformable Scenes

no code implementations • 30 Nov 2023 • Bardienus P. Duisterhof, Zhao Mandi, Yunchao Yao, Jia-Wei Liu, Mike Zheng Shou, Shuran Song, Jeffrey Ichnowski

MD-Splatting builds on recent advances in Gaussian splatting, a method that learns the properties of a large number of Gaussians for fast, state-of-the-art novel view synthesis.

Novel View Synthesis
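
As a rough illustration of the Gaussian-splatting representation the summary above refers to, the sketch below collects per-Gaussian properties into learnable parameters and optimizes them by gradient descent. The renderer is a toy stand-in for a real differentiable rasterizer, and none of the names come from the MD-Splatting code.

```python
# Minimal, hypothetical sketch of the Gaussian-splatting idea: a scene is a
# cloud of Gaussians whose per-Gaussian properties are optimized by gradient
# descent against rendered views. toy_render() is a placeholder, not a rasterizer.
import torch

class GaussianCloud(torch.nn.Module):
    def __init__(self, num_gaussians: int = 10_000):
        super().__init__()
        # Per-Gaussian learnable properties: 3D position, log-scale,
        # rotation (quaternion), opacity logit, and RGB color.
        self.positions = torch.nn.Parameter(torch.randn(num_gaussians, 3))
        self.log_scales = torch.nn.Parameter(torch.zeros(num_gaussians, 3))
        self.rotations = torch.nn.Parameter(torch.randn(num_gaussians, 4))
        self.opacity_logits = torch.nn.Parameter(torch.zeros(num_gaussians))
        self.colors = torch.nn.Parameter(torch.rand(num_gaussians, 3))

    def toy_render(self) -> torch.Tensor:
        # Stand-in for a differentiable rasterizer: opacity-weighted mean color.
        weights = torch.sigmoid(self.opacity_logits).unsqueeze(-1)
        return (weights * self.colors).sum(dim=0) / weights.sum()

cloud = GaussianCloud()
optimizer = torch.optim.Adam(cloud.parameters(), lr=1e-2)
target = torch.tensor([0.2, 0.5, 0.8])  # placeholder "ground-truth" pixel color
for _ in range(100):
    loss = torch.nn.functional.mse_loss(cloud.toy_render(), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```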

RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

1 code implementation • 10 Jul 2023 • Zhao Mandi, Shreeya Jain, Shuran Song

We propose a novel approach to multi-robot collaboration that harnesses the power of pre-trained large language models (LLMs) for both high-level communication and low-level path planning.

Trajectory Planning
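
The following is a hypothetical sketch of the kind of loop the RoCo summary describes: an LLM mediates the robots' dialogue and proposes a subgoal, and a low-level planner checks feasibility. The `llm` and `plan_path` functions are stubs with made-up names, not the paper's interface.

```python
# Hypothetical sketch of LLM-mediated multi-robot collaboration: pre-trained
# LLMs handle inter-robot dialogue and propose waypoints, and a low-level
# planner validates them. All names are illustrative.
from typing import List, Optional, Tuple

def llm(prompt: str) -> str:
    # Stub: a real system would query a pre-trained LLM API here.
    return "PROPOSAL: move gripper to (0.3, 0.1, 0.2)"

def plan_path(waypoint: Tuple[float, float, float]) -> Optional[List[tuple]]:
    # Stub low-level planner: a real system would run e.g. sampling-based
    # motion planning with collision checking and return None on failure.
    return [(0.0, 0.0, 0.0), waypoint]

def dialectic_round(robot_names: List[str], task: str, max_turns: int = 4):
    transcript = f"Task: {task}\n"
    for turn in range(max_turns):
        speaker = robot_names[turn % len(robot_names)]
        reply = llm(f"You are {speaker}.\n{transcript}\nRespond or propose a plan.")
        transcript += f"{speaker}: {reply}\n"
        if "PROPOSAL" in reply:
            waypoint = (0.3, 0.1, 0.2)  # parsed from the reply in a real system
            path = plan_path(waypoint)
            if path is not None:
                return path  # feasible plan found; otherwise the dialogue continues
    return None

print(dialectic_round(["Alice", "Bob"], "hand over the cube"))
```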

CACTI: A Framework for Scalable Multi-Task Multi-Scene Visual Imitation Learning

no code implementations • 12 Dec 2022 • Zhao Mandi, Homanga Bharadhwaj, Vincent Moens, Shuran Song, Aravind Rajeswaran, Vikash Kumar

On a real robot setup, CACTI enables efficient training of a single policy that can perform 10 manipulation tasks involving kitchen objects, and is robust to varying layouts of distractors.

Data Augmentation, Image Generation +3
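
A minimal sketch, under assumed shapes and names, of the ingredient the CACTI summary highlights: a single task-conditioned visual policy trained by behavior cloning on augmented observations from many tasks. This is illustrative only, not the released framework.

```python
# Illustrative multi-task visual imitation step: one policy conditioned on a
# task ID, trained with a behavior-cloning loss on (augmented) image batches.
# Shapes, names, and the "augmentation" (random noise) are placeholders.
import torch

NUM_TASKS, ACTION_DIM = 10, 7

class TaskConditionedPolicy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(  # stand-in visual encoder
            torch.nn.Conv2d(3, 16, 5, stride=4), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
        self.task_embed = torch.nn.Embedding(NUM_TASKS, 16)
        self.head = torch.nn.Linear(16 + 16, ACTION_DIM)

    def forward(self, image, task_id):
        features = torch.cat([self.encoder(image), self.task_embed(task_id)], dim=-1)
        return self.head(features)

policy = TaskConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

# One behavior-cloning step on a fake "augmented" batch.
images = torch.rand(8, 3, 64, 64)
task_ids = torch.randint(0, NUM_TASKS, (8,))
expert_actions = torch.randn(8, ACTION_DIM)
loss = torch.nn.functional.mse_loss(policy(images, task_ids), expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```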

On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning

no code implementations • 7 Jun 2022 • Zhao Mandi, Pieter Abbeel, Stephen James

From these findings, we advocate for evaluating future meta-RL methods on more challenging tasks and including multi-task pretraining with fine-tuning as a simple, yet strong baseline.

Meta-Learning, Meta Reinforcement Learning +4
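
For concreteness, here is a hedged sketch of the baseline the authors advocate: multi-task pretraining followed by plain fine-tuning on a held-out task, with no meta-objective. The loss function is a placeholder for a real RL objective; nothing here is taken from the paper's code.

```python
# Sketch of "multi-task pretraining + fine-tuning" as a baseline for meta-RL.
# placeholder_rl_loss stands in for an actual RL loss computed on rollouts.
import torch

policy = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

def placeholder_rl_loss(policy, task_seed: int) -> torch.Tensor:
    # Stand-in for a per-task RL objective (e.g. policy-gradient loss).
    torch.manual_seed(task_seed)
    obs = torch.randn(32, 10)
    return policy(obs).pow(2).mean()

# Phase 1: multi-task pretraining across the training tasks.
pretrain_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
for step in range(100):
    loss = sum(placeholder_rl_loss(policy, task) for task in range(5))
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()

# Phase 2: fine-tune every parameter on the new test task.
finetune_opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for step in range(50):
    loss = placeholder_rl_loss(policy, task_seed=999)
    finetune_opt.zero_grad()
    loss.backward()
    finetune_opt.step()
```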

Towards More Generalizable One-shot Visual Imitation Learning

no code implementations • 26 Oct 2021 • Zhao Mandi, Fangchen Liu, Kimin Lee, Pieter Abbeel

We then study the multi-task setting, where multi-task training is followed by (i) one-shot imitation on variations within the training tasks, (ii) one-shot imitation on new tasks, and (iii) fine-tuning on new tasks.

Contrastive Learning, Imitation Learning +2
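
A bare skeleton, with placeholder functions and made-up names, of the three evaluation settings listed in the summary above, applied after multi-task training.

```python
# Skeleton of the evaluation protocol: (i) one-shot imitation on variations of
# the training tasks, (ii) one-shot imitation on new tasks, (iii) fine-tuning
# on new tasks. All functions are placeholders, not the paper's code.
def one_shot_success(policy, demo, env) -> float:
    # Condition the trained policy on a single demonstration, roll it out, score it.
    return 0.0  # placeholder

def fine_tune_then_eval(policy, demos, env) -> float:
    # Update the multi-task policy on a few new-task demos before evaluating.
    return 0.0  # placeholder

def evaluate(policy, train_variations, new_tasks) -> dict:
    return {
        "(i) one-shot, training-task variations":
            [one_shot_success(policy, d, e) for d, e in train_variations],
        "(ii) one-shot, new tasks":
            [one_shot_success(policy, d, e) for d, e in new_tasks],
        "(iii) fine-tuned, new tasks":
            [fine_tune_then_eval(policy, [d], e) for d, e in new_tasks],
    }

print(evaluate(policy=None, train_variations=[], new_tasks=[]))
```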

DCUR: Data Curriculum for Teaching via Samples with Reinforcement Learning

1 code implementation • 15 Sep 2021 • Daniel Seita, Abhinav Gopal, Zhao Mandi, John Canny

Then, students learn either by running offline RL or by using teacher data in combination with a small amount of self-generated data.

Offline RL, reinforcement-learning +1
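
The sketch below illustrates, with made-up names and ratios, the student-side data mixing the DCUR summary describes: batches drawn from a curriculum over the teacher's logged data, optionally mixed with a small amount of the student's own self-generated data.

```python
# Hypothetical data-curriculum batch sampler for the student. The curriculum
# schedule (cutoff) and the self-generated fraction are illustrative values,
# not taken from the paper.
import random

teacher_dataset = [("teacher_transition", i) for i in range(10_000)]  # logged teacher samples
student_buffer = [("student_transition", i) for i in range(500)]      # small self-generated set

def sample_mixed_batch(step: int, batch_size: int = 64, self_generated_frac: float = 0.1):
    # Data curriculum: only expose teacher samples up to a step-dependent cutoff.
    cutoff = min(len(teacher_dataset), 1000 + 10 * step)
    n_student = int(batch_size * self_generated_frac)
    batch = random.sample(teacher_dataset[:cutoff], batch_size - n_student)
    batch += random.sample(student_buffer, n_student)
    return batch

# The mixed batch would feed an offline RL (or off-policy RL) update for the student.
batch = sample_mixed_batch(step=200)
```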
