Search Results for author: Yunzhu Li

Found 36 papers, 12 papers with code

RoboPack: Learning Tactile-Informed Dynamics Models for Dense Packing

no code implementations • 1 Jul 2024 • Bo Ai, Stephen Tian, Haochen Shi, YiXuan Wang, Cheston Tan, Yunzhu Li, Jiajun Wu

Tactile feedback is critical for understanding the dynamics of both rigid and deformable objects in many manipulation tasks, such as non-prehensile manipulation and dense packing.

Graph Neural Network, Model Predictive Control

BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation

no code implementations • CVPR 2024 • Yunhao Ge, Yihe Tang, Jiashu Xu, Cem Gokmen, Chengshu Li, Wensi Ai, Benjamin Jose Martinez, Arman Aydin, Mona Anvari, Ayush K Chakravarthy, Hong-Xing Yu, Josiah Wong, Sanjana Srivastava, Sharon Lee, Shengxin Zha, Laurent Itti, Yunzhu Li, Roberto Martín-Martín, Miao Liu, Pengchuan Zhang, Ruohan Zhang, Li Fei-Fei, Jiajun Wu

We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and assets to generate fully customized synthetic data for systematic evaluation of computer vision models, based on the newly developed embodied AI benchmark, BEHAVIOR-1K.

Scene Understanding

Reconstruction and Simulation of Elastic Objects with Spring-Mass 3D Gaussians

no code implementations • 14 Mar 2024 • Licheng Zhong, Hong-Xing Yu, Jiajun Wu, Yunzhu Li

In particular, we develop and integrate a 3D Spring-Mass model into 3D Gaussian kernels, enabling the reconstruction of the visual appearance, shape, and physical dynamics of the object.

Future Prediction, Object

Executable Code Actions Elicit Better LLM Agents

2 code implementations • 1 Feb 2024 • Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji

LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by a constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., the inability to compose multiple tools).

Language Modelling, Large Language Model

Model-Based Control with Sparse Neural Dynamics

no code implementations • NeurIPS 2023 • Ziang Liu, Genggeng Zhou, Jeff He, Tobia Marcucci, Li Fei-Fei, Jiajun Wu, Yunzhu Li

In this paper, we propose a new framework for integrated model learning and predictive control that is amenable to efficient optimization algorithms.

D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation

no code implementations • 28 Sep 2023 • YiXuan Wang, Zhuoran Li, Mingtong Zhang, Katherine Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li

These fields capture the dynamics of the underlying 3D environment and encode both semantic features and instance masks.

VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

1 code implementation • 12 Jul 2023 • Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Li Fei-Fei

The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations.

Language Modelling, Robot Manipulation

Dynamic-Resolution Model Learning for Object Pile Manipulation

no code implementations • 29 Jun 2023 • YiXuan Wang, Yunzhu Li, Katherine Driggs-Campbell, Li Fei-Fei, Jiajun Wu

Prior works typically assume representation at a fixed dimension or resolution, which may be inefficient for simple tasks and ineffective for more complicated tasks.

Model Predictive Control, Object

The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects

no code implementations • CVPR 2023 • Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu

We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation with sight, sound, and touch.

Benchmarking, Object +1

Partial-View Object View Synthesis via Filtered Inversion

no code implementations • 3 Apr 2023 • Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.

Object

Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation

no code implementations • 27 Oct 2022 • Xingyu Lin, Carl Qi, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held

Effective planning of long-horizon deformable object manipulation requires suitable abstractions at both the spatial and temporal levels.

Deformable Object Manipulation

Does Learning from Decentralized Non-IID Unlabeled Data Benefit from Self Supervision?

1 code implementation • 20 Oct 2022 • Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, Russ Tedrake

Decentralized learning has been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems.

Contrastive Learning, Representation Learning +1

Reinforcement Learning with Neural Radiance Fields

no code implementations • 3 Jun 2022 • Danny Driess, Ingmar Schubert, Pete Florence, Yunzhu Li, Marc Toussaint

This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information.

Decoder, Reinforcement Learning +1

RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks

no code implementations • 5 May 2022 • Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu

Our learned model-based planning framework is comparable to and sometimes better than human subjects on the tested tasks.

Model Predictive Control

ComPhy: Compositional Physical Reasoning of Objects and Events from Videos

no code implementations • ICLR 2022 • Zhenfang Chen, Kexin Yi, Yunzhu Li, Mingyu Ding, Antonio Torralba, Joshua B. Tenenbaum, Chuang Gan

In this paper, we take an initial step to highlight the importance of inferring the hidden physical properties not directly observable from visual appearances, by introducing the Compositional Physical Reasoning (ComPhy) dataset.

Learning Multi-Object Dynamics with Compositional Neural Radiance Fields

no code implementations • 24 Feb 2022 • Danny Driess, Zhiao Huang, Yunzhu Li, Russ Tedrake, Marc Toussaint

We present a method to learn compositional multi-object dynamics models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.

Decoder, Graph Neural Network +1

Dynamic Modeling of Hand-Object Interactions via Tactile Sensing

no code implementations • 9 Sep 2021 • Qiang Zhang, Yunzhu Li, Yiyue Luo, Wan Shou, Michael Foshey, Junchi Yan, Joshua B. Tenenbaum, Wojciech Matusik, Antonio Torralba

This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing, which opens the door for future applications in activity learning, human-computer interaction, and imitation learning for robotics.

Contrastive Learning, Imitation Learning +1

Intelligent Carpet: Inferring 3D Human Pose From Tactile Signals

no code implementations • CVPR 2021 • Yiyue Luo, Yunzhu Li, Michael Foshey, Wan Shou, Pratyusha Sharma, Tomas Palacios, Antonio Torralba, Wojciech Matusik

In this work, leveraging such tactile interactions, we propose a 3D human pose estimation approach using the pressure maps recorded by a tactile carpet as input.

3D Human Pose Estimation, Multi-Person Pose Estimation

Causal Discovery in Physical Systems from Videos

1 code implementation • NeurIPS 2020 • Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, Animesh Garg

We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions.

Causal Discovery, Counterfactual

Learning Physical Graph Representations from Visual Scenes

1 code implementation • NeurIPS 2020 • Daniel M. Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li Fei-Fei, Jiajun Wu, Joshua B. Tenenbaum, Daniel L. K. Yamins

To overcome these limitations, we introduce the idea of Physical Scene Graphs (PSGs), which represent scenes as hierarchical graphs, with nodes in the hierarchy corresponding intuitively to object parts at different scales, and edges to physical connections between parts.

Object, Object Categorization +1

Visual Grounding of Learned Physical Models

1 code implementation • ICML 2020 • Yunzhu Li, Toru Lin, Kexin Yi, Daniel M. Bear, Daniel L. K. Yamins, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba

The abilities to perform physical reasoning and to adapt to new environments, while intrinsic to humans, remain challenging to state-of-the-art computational models.

Visual Grounding

Learning Compositional Koopman Operators for Model-Based Control

no code implementations • ICLR 2020 • Yunzhu Li, Hao He, Jiajun Wu, Dina Katabi, Antonio Torralba

Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis.

CLEVRER: CoLlision Events for Video REpresentation and Reasoning

3 code implementations • ICLR 2020 • Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum

While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations.

Counterfactual, Descriptive +1

Connecting Touch and Vision via Cross-Modal Prediction

1 code implementation • CVPR 2019 • Yunzhu Li, Jun-Yan Zhu, Russ Tedrake, Antonio Torralba

To connect vision and touch, we introduce new tasks of synthesizing plausible tactile signals from visual inputs as well as imagining how we interact with objects given tactile data as input.

Learning the signatures of the human grasp using a scalable tactile glove

no code implementations • journal 2019 • Subramanian Sundaram, Petr Kellnhofer, Yunzhu Li, Jun-Yan Zhu, Antonio Torralba, Wojciech Matusik

Using a low-cost (about US$10) scalable tactile glove sensor array, we record a large-scale tactile dataset with 135,000 frames, each covering the full hand, while interacting with 26 different objects.

Propagation Networks for Model-Based Control Under Partial Observation

1 code implementation • 28 Sep 2018 • Yunzhu Li, Jiajun Wu, Jun-Yan Zhu, Joshua B. Tenenbaum, Antonio Torralba, Russ Tedrake

There has been an increasing interest in learning dynamics simulators for model-based control.

InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

4 code implementations • NeurIPS 2017 • Yunzhu Li, Jiaming Song, Stefano Ermon

The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal.

Imitation Learning

Skin Cancer Detection and Tracking using Data Synthesis and Deep Learning

no code implementations • 4 Dec 2016 • Yunzhu Li, Andre Esteva, Brett Kuprel, Rob Novoa, Justin Ko, Sebastian Thrun

Dense object detection and temporal tracking are needed across application domains ranging from people-tracking to analysis of satellite imagery over time.

Dense Object Detection, Object Detection

Face Detection with End-to-End Integration of a ConvNet and a 3D Model

5 code implementations • 2 Jun 2016 • Yunzhu Li, Benyuan Sun, Tianfu Wu, Yizhou Wang

The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., Faster R-CNN) for face detection: (i) one is to eliminate the heuristic design of predefined anchor boxes in the region proposal network (RPN) by exploiting a 3D mean face model.

Face Detection, Face Model +3
