Search Results for author: Jiajun Wu

Found 199 papers, 58 papers with code

Efficient Imitation Learning with Conservative World Models

no code implementations 21 May 2024 Victor Kolev, Rafael Rafailov, Kyle Hatch, Jiajun Wu, Chelsea Finn

One approach to this issue is to learn a world model of the environment, and use synthetic data for policy training.

TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction

no code implementations 16 May 2024 Yunfan Jiang, Chen Wang, Ruohan Zhang, Jiajun Wu, Li Fei-Fei

Through synergistic integration of policies learned in simulation and from humans, TRANSIC is effective as a holistic approach to addressing various, often coexisting sim-to-real gaps.

BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation

no code implementations 15 May 2024 Yunhao Ge, Yihe Tang, Jiashu Xu, Cem Gokmen, Chengshu Li, Wensi Ai, Benjamin Jose Martinez, Arman Aydin, Mona Anvari, Ayush K Chakravarthy, Hong-Xing Yu, Josiah Wong, Sanjana Srivastava, Sharon Lee, Shengxin Zha, Laurent Itti, Yunzhu Li, Roberto Martín-Martín, Miao Liu, Pengchuan Zhang, Ruohan Zhang, Li Fei-Fei, Jiajun Wu

We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and assets to generate fully customized synthetic data for systematic evaluation of computer vision models, based on the newly developed embodied AI benchmark, BEHAVIOR-1K.

Scene Understanding

Composable Part-Based Manipulation

no code implementations 9 May 2024 Weiyu Liu, Jiayuan Mao, Joy Hsu, Tucker Hermans, Animesh Garg, Jiajun Wu

In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills.


Learning Planning Abstractions from Language

no code implementations 6 May 2024 Weiyu Liu, Geng Chen, Joy Hsu, Jiayuan Mao, Jiajun Wu

This paper presents a framework for learning state and action abstractions in sequential decision-making domains.

Decision Making

Naturally Supervised 3D Visual Grounding with Language-Regularized Concept Learners

no code implementations 30 Apr 2024 Chun Feng, Joy Hsu, Weiyu Liu, Jiajun Wu

We propose the Language-Regularized Concept Learner (LARC), which uses constraints from language as regularization to significantly improve the accuracy of neuro-symbolic concept learners in the naturally supervised setting.

Visual Grounding, Visual Reasoning

PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation

no code implementations 19 Apr 2024 Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y. Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, William T. Freeman

Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness.

motion prediction, Object +1

Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning

1 code implementation 16 Apr 2024 Kyle Hsu, Jubayer Ibn Hamid, Kaylee Burns, Chelsea Finn, Jiajun Wu

Inductive biases are crucial in disentangled representation learning for narrowing down an underspecified solution set.

Data Compression, Decoder +2

Text-Based Reasoning About Vector Graphics

no code implementations 9 Apr 2024 Zhenhailong Wang, Joy Hsu, Xingyao Wang, Kuan-Hao Huang, Manling Li, Jiajun Wu, Heng Ji

By casting an image to a text-based representation, we can leverage the power of language models to learn alignment from SVG to visual primitives and generalize to unseen question-answering tasks.

Descriptive, Language Modelling +2
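The approach in this entry rests on flattening vector graphics into text that a language model can read. As a rough illustration only (the SVG sample and the output format here are invented, not the paper's representation), SVG primitives can be serialized into one text fact per shape:

```python
import xml.etree.ElementTree as ET

# A hypothetical two-primitive SVG, purely for illustration.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="10" r="4"/>
  <rect x="0" y="0" width="20" height="5"/>
</svg>"""

def svg_to_text(svg: str) -> str:
    """Flatten SVG primitives into plain-text facts, one shape per line."""
    ns = "{http://www.w3.org/2000/svg}"
    lines = []
    for el in ET.fromstring(svg):
        tag = el.tag.removeprefix(ns)  # strip the XML namespace prefix
        attrs = " ".join(f"{k}={v}" for k, v in el.attrib.items())
        lines.append(f"{tag}: {attrs}")
    return "\n".join(lines)
```

A question-answering model would then consume these text lines instead of pixels.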

3D Congealing: 3D-Aware Image Alignment in the Wild

no code implementations 2 Apr 2024 Yunzhi Zhang, Zizhang Li, Amit Raj, Andreas Engelhardt, Yuanzhen Li, Tingbo Hou, Jiajun Wu, Varun Jampani

The framework optimizes for the canonical representation together with the pose for each input image, and a per-image coordinate map that warps 2D pixel coordinates to the 3D canonical frame to account for the shape matching.

Pose Estimation

Reconstruction and Simulation of Elastic Objects with Spring-Mass 3D Gaussians

no code implementations 14 Mar 2024 Licheng Zhong, Hong-Xing Yu, Jiajun Wu, Yunzhu Li

In particular, we develop and integrate a 3D Spring-Mass model into 3D Gaussian kernels, enabling the reconstruction of the visual appearance, shape, and physical dynamics of the object.

Future prediction, Object
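A spring-mass system of the kind this entry couples with Gaussian kernels can be sketched in a few lines. This toy 2D chain with explicit Euler integration (hypothetical stiffness and damping values; not the paper's 3D, Gaussian-coupled model) shows the core update:

```python
import numpy as np

def step_spring_chain(x, v, rest, k=50.0, damping=0.98, dt=1e-3, g=-9.8):
    """One explicit-Euler step for a 2D chain of point masses joined by springs.

    x, v: (n, 2) positions and velocities; rest: spring rest length.
    The first mass is pinned in place.
    """
    f = np.zeros_like(x)
    f[:, 1] += g                              # gravity acts on the y component
    d = x[1:] - x[:-1]                        # spring vectors between neighbors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    force = k * (length - rest) * d / length  # Hooke's law along each spring
    f[:-1] += force                           # pull each endpoint toward the other
    f[1:] -= force
    v = damping * (v + dt * f)
    v[0] = 0.0                                # pin the first mass
    return x + dt * v, v
```

Starting from a horizontal two-mass chain, the free mass sags under gravity while the pinned mass stays put.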

Unsupervised Discovery of Object-Centric Neural Fields

no code implementations 12 Feb 2024 Rundong Luo, Hong-Xing Yu, Jiajun Wu

Extensive experiments show that uOCF enables unsupervised discovery of visually rich objects from a single real image, allowing applications such as 3D object segmentation and scene manipulation.

Object, Object Discovery +3

Learning the 3D Fauna of the Web

no code implementations 4 Jan 2024 Zizhang Li, Dor Litvak, Ruining Li, Yunzhi Zhang, Tomas Jakab, Christian Rupprecht, Shangzhe Wu, Andrea Vedaldi, Jiajun Wu

We show that prior category-specific attempts fail to generalize to rare species with limited training images.

Fluid Simulation on Neural Flow Maps

no code implementations 22 Dec 2023 Yitong Deng, Hong-Xing Yu, Diyang Zhang, Jiajun Wu, Bo Zhu

We introduce Neural Flow Maps, a novel simulation method bridging the emerging paradigm of implicit neural representations with fluid simulation based on the theory of flow maps, to achieve state-of-the-art simulation of inviscid fluid phenomena.

Ponymation: Learning 3D Animal Motions from Unlabeled Online Videos

no code implementations 21 Dec 2023 Keqiang Sun, Dor Litvak, Yunzhi Zhang, Hongsheng Li, Jiajun Wu, Shangzhe Wu

We introduce Ponymation, a new method for learning a generative model of articulated 3D animal motions from raw, unlabeled online videos.

Motion Synthesis

Model-Based Control with Sparse Neural Dynamics

no code implementations NeurIPS 2023 Ziang Liu, Genggeng Zhou, Jeff He, Tobia Marcucci, Li Fei-Fei, Jiajun Wu, Yunzhu Li

In this paper, we propose a new framework for integrated model learning and predictive control that is amenable to efficient optimization algorithms.

SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing

1 code implementation 20 Dec 2023 Zhecheng Wang, Rajanie Prabha, Tianyuan Huang, Jiajun Wu, Ram Rajagopal

Remote sensing imagery, despite its broad applications in helping achieve Sustainable Development Goals and tackle climate change, has not yet benefited from the recent advancements of versatile, task-agnostic vision language models (VLMs).

Attribute, Cross-Modal Retrieval +3

Inferring Hybrid Neural Fluid Fields from Videos

no code implementations NeurIPS 2023 Hong-Xing Yu, Yang Zheng, Yuan Gao, Yitong Deng, Bo Zhu, Jiajun Wu

Specifically, to deal with visual ambiguities of fluid velocity, we introduce a set of physics-based losses that enforce inferring a physically plausible velocity field, which is divergence-free and drives the transport of density.

Dynamic Reconstruction, Future prediction
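A divergence-free penalty of the sort described here is, in a grid-sampled toy setting (the paper's losses act on a neural velocity field, not a fixed grid), just the mean squared finite-difference divergence:

```python
import numpy as np

def divergence_loss(u, v, h=1.0):
    """Mean squared divergence of a 2D velocity field sampled on a grid.

    u, v: (H, W) x- and y-velocity components; central differences in the
    interior. Zero when the sampled field is (discretely) divergence-free.
    """
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * h)  # d u / d x (columns)
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * h)  # d v / d y (rows)
    return float(np.mean((du_dx + dv_dy) ** 2))
```

A rotational field such as u = -y, v = x incurs zero loss, while an expanding field u = x, v = y (divergence 2 everywhere) is penalized.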

Language-Informed Visual Concept Learning

no code implementations 6 Dec 2023 Sharon Lee, Yunzhi Zhang, Shangzhe Wu, Jiajun Wu

To encourage better disentanglement of different concept encoders, we anchor the concept embeddings to a set of text embeddings obtained from a pre-trained Visual Question Answering (VQA) model.

Disentanglement, Novel Concepts +2

Controllable Human-Object Interaction Synthesis

no code implementations 6 Dec 2023 Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, C. Karen Liu

Naively applying a diffusion model fails to predict object motion aligned with the input waypoints and cannot ensure the realism of interactions that require precise hand-object contact and appropriate contact grounded by the floor.

Human-Object Interaction Detection, Object

SoundCam: A Dataset for Finding Humans Using Room Acoustics

no code implementations NeurIPS 2023 Mason Wang, Samuel Clarke, Jui-Hsien Wang, Ruohan Gao, Jiajun Wu

A room's acoustic properties are a product of the room's geometry, the objects within the room, and their specific positions.

NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities

no code implementations 2 Nov 2023 Ruohan Zhang, Sharon Lee, Minjune Hwang, Ayano Hiranaka, Chen Wang, Wensi Ai, Jin Jie Ryan Tan, Shreya Gupta, Yilun Hao, Gabrael Levine, Ruohan Gao, Anthony Norcia, Li Fei-Fei, Jiajun Wu

We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals.


Learning to Design and Use Tools for Robotic Manipulation

no code implementations 1 Nov 2023 Ziang Liu, Stephen Tian, Michelle Guo, C. Karen Liu, Jiajun Wu

A designer policy is conditioned on task information and outputs a tool design that helps solve the task.

ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Image

no code implementations 27 Oct 2023 Kyle Sargent, Zizhang Li, Tanmay Shah, Charles Herrmann, Hong-Xing Yu, Yunzhi Zhang, Eric Ryan Chan, Dmitry Lagun, Li Fei-Fei, Deqing Sun, Jiajun Wu

Further, we observe that Score Distillation Sampling (SDS) tends to truncate the distribution of complex backgrounds during distillation of 360-degree scenes, and propose "SDS anchoring" to improve the diversity of synthesized novel views.

Novel View Synthesis

What's Left? Concept Grounding with Logic-Enhanced Foundation Models

1 code implementation 24 Oct 2023 Joy Hsu, Jiayuan Mao, Joshua B. Tenenbaum, Jiajun Wu

We propose the Logic-Enhanced Foundation Model (LEFT), a unified framework that learns to ground and reason with concepts across domains with a differentiable, domain-independent, first-order logic-based program executor.

Visual Reasoning

CLEVRER-Humans: Describing Physical and Causal Events the Human Way

no code implementations 5 Oct 2023 Jiayuan Mao, Xuelin Yang, Xikun Zhang, Noah D. Goodman, Jiajun Wu

First, there is a lack of diversity in both event types and natural language descriptions; second, causal relationships based on manually-defined heuristics are different from human judgments.

Causal Judgment, Data Augmentation +1

Mini-BEHAVIOR: A Procedurally Generated Benchmark for Long-horizon Decision-Making in Embodied AI

1 code implementation 3 Oct 2023 Emily Jin, Jiaheng Hu, Zhuoyi Huang, Ruohan Zhang, Jiajun Wu, Li Fei-Fei, Roberto Martín-Martín

We present Mini-BEHAVIOR, a novel benchmark for embodied AI that challenges agents to use reasoning and decision-making skills to solve complex activities that resemble everyday human challenges.

Decision Making

Object Motion Guided Human Motion Synthesis

no code implementations 28 Sep 2023 Jiaman Li, Jiajun Wu, C. Karen Liu

We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion.

Denoising, Human-Object Interaction Detection +2

D³Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation

no code implementations 28 Sep 2023 YiXuan Wang, Zhuoran Li, Mingtong Zhang, Katherine Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li

These fields capture the dynamics of the underlying 3D environment and encode both semantic features and instance masks.

Tree-Structured Shading Decomposition

no code implementations ICCV 2023 Chen Geng, Hong-Xing Yu, Sharon Zhang, Maneesh Agrawala, Jiajun Wu

The shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner.


Learning Sequential Acquisition Policies for Robot-Assisted Feeding

no code implementations 11 Sep 2023 Priya Sundaresan, Jiajun Wu, Dorsa Sadigh

A robot providing mealtime assistance must perform specialized maneuvers with various utensils in order to pick up and feed a range of food items.

Physically Grounded Vision-Language Models for Robotic Manipulation

no code implementations 5 Sep 2023 Jensen Gao, Bidipta Sarkar, Fei Xia, Ted Xiao, Jiajun Wu, Brian Ichter, Anirudha Majumdar, Dorsa Sadigh

We incorporate this physically grounded VLM in an interactive framework with a large language model-based robotic planner, and show improved planning performance on tasks that require reasoning about physical object concepts, compared to baselines that do not leverage physically grounded VLMs.

Image Captioning, Language Modelling +4

Compositional Diffusion-Based Continuous Constraint Solvers

no code implementations 2 Sep 2023 Zhutian Yang, Jiayuan Mao, Yilun Du, Jiajun Wu, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

This paper introduces an approach for learning to solve continuous constraint satisfaction problems (CCSP) in robotic reasoning and planning.

Patched Denoising Diffusion Models For High-Resolution Image Synthesis

1 code implementation 2 Aug 2023 Zheng Ding, Mengqi Zhang, Jiajun Wu, Zhuowen Tu

Feature collage systematically crops and combines partial features of the neighboring patches to predict the features of a shifted image patch, allowing the seamless generation of the entire image due to the overlap in the patch feature space.

Denoising, Image Generation
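The collage idea of predicting a shifted patch from its overlapping neighbors can be illustrated, in pixel space rather than the paper's feature space, by weight-averaged stitching so that seams fall inside overlap regions:

```python
import numpy as np

def collage(patches, positions, out_shape, size=8):
    """Average overlapping square patches onto one canvas.

    Every output pixel covered by several patches blends them equally,
    a toy stand-in for combining neighboring-patch features.
    """
    canvas = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    for patch, (r, c) in zip(patches, positions):
        canvas[r:r + size, c:c + size] += patch
        weight[r:r + size, c:c + size] += 1.0
    return canvas / np.maximum(weight, 1e-8)  # avoid division by zero
```

Two constant 8×8 patches placed 4 pixels apart blend smoothly: their overlap averages the two values instead of showing a hard seam.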

Primitive Skill-based Robot Learning from Human Evaluative Feedback

no code implementations 28 Jul 2023 Ayano Hiranaka, Minjune Hwang, Sharon Lee, Chen Wang, Li Fei-Fei, Jiajun Wu, Ruohan Zhang

By combining them, SEED reduces the human effort required in RLHF and increases safety in training robot manipulation with RL in real-world settings.

reinforcement-learning, Reinforcement Learning (RL) +1

Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations

no code implementations 12 Jul 2023 Moo Jin Kim, Jiajun Wu, Chelsea Finn

Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.

Domain Adaptation

VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

1 code implementation 12 Jul 2023 Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Li Fei-Fei

The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations.

Language Modelling, Robot Manipulation
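Composing value maps and handing them to a planner reduces, in the simplest view, to elementwise combination of per-constraint voxel grids followed by an argmax to pick the next waypoint. A toy sketch with invented affordance and avoidance terms (not the paper's LLM-generated maps):

```python
import numpy as np

# Voxelized 1 m^3 workspace; coordinates of each voxel center.
axes = [np.linspace(0.0, 1.0, 10)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"))  # (3, 10, 10, 10)

target = np.array([0.9, 0.9, 0.9])    # where the task wants the gripper (made up)
obstacle = np.array([0.5, 0.5, 0.5])  # region to stay away from (made up)

def dist_to(p):
    """Euclidean distance from every voxel center to point p."""
    return np.linalg.norm(grid - p[:, None, None, None], axis=0)

# Compose maps: reward proximity to the target, penalize proximity to the obstacle.
value = -dist_to(target) + 0.5 * dist_to(obstacle)
best = np.unravel_index(value.argmax(), value.shape)  # next waypoint (voxel index)
```

The argmax lands on the voxel nearest the target that is also pushed away from the obstacle; a model-based planner would then synthesize a trajectory through such waypoints.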

Dynamic-Resolution Model Learning for Object Pile Manipulation

no code implementations 29 Jun 2023 YiXuan Wang, Yunzhu Li, Katherine Driggs-Campbell, Li Fei-Fei, Jiajun Wu

Prior works typically assume representation at a fixed dimension or resolution, which may be inefficient for simple tasks and ineffective for more complicated tasks.

Model Predictive Control, Object

HomE: Homography-Equivariant Video Representation Learning

1 code implementation 2 Jun 2023 Anirudh Sriram, Adrien Gaidon, Jiajun Wu, Juan Carlos Niebles, Li Fei-Fei, Ehsan Adeli

In this work, we propose a novel method for representation learning of multi-view videos, where we explicitly model the representation space to maintain Homography Equivariance (HomE).

Action Classification, Action Recognition +2

The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects

no code implementations CVPR 2023 Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu

We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation with sight, sound, and touch.

Benchmarking, Object +1

Sonicverse: A Multisensory Simulation Platform for Embodied Household Agents that See and Hear

1 code implementation 1 Jun 2023 Ruohan Gao, Hao Li, Gokul Dharan, Zhuzhu Wang, Chengshu Li, Fei Xia, Silvio Savarese, Li Fei-Fei, Jiajun Wu

We introduce Sonicverse, a multisensory simulation platform with integrated audio-visual simulation for training household agents that can both see and hear.

Multi-Task Learning, Visual Navigation

Disentanglement via Latent Quantization

1 code implementation NeurIPS 2023 Kyle Hsu, Will Dorrell, James C. R. Whittington, Jiajun Wu, Chelsea Finn

In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.

Disentanglement, Inductive Bias +1
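Quantizing a latent space as in this entry can be illustrated by snapping each latent dimension to its nearest entry in a small per-dimension codebook (a schematic sketch of the idea, not the paper's learned implementation):

```python
import numpy as np

def quantize_latents(z, codebooks):
    """Snap each latent dimension to its nearest value in that dimension's codebook.

    z: (batch, dim) continuous latents; codebooks: (dim, n_values).
    Returns the quantized latents and the discrete per-dimension codes,
    giving an organized, discrete latent space.
    """
    dists = np.abs(z[:, :, None] - codebooks[None, :, :])  # (batch, dim, n_values)
    codes = dists.argmin(axis=-1)                          # (batch, dim)
    zq = codebooks[np.arange(codebooks.shape[0]), codes]   # gather chosen values
    return zq, codes
```

In the paper's setting the codebooks are learned jointly with the autoencoder; here they are fixed arrays just to show the snapping operation.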

Modeling Dynamic Environments with Scene Graph Memory

no code implementations 27 May 2023 Andrey Kurenkov, Michael Lingelbach, Tanmay Agarwal, Emily Jin, Chengshu Li, Ruohan Zhang, Li Fei-Fei, Jiajun Wu, Silvio Savarese, Roberto Martín-Martín

We evaluate our method in the Dynamic House Simulator, a new benchmark that creates diverse dynamic graphs following the semantic patterns typically seen at homes, and show that NEP can be trained to predict the locations of objects in a variety of environments with diverse object movement dynamics, outperforming baselines both in terms of new scene adaptability and overall accuracy.

Link Prediction

Motion Question Answering via Modular Motion Programs

1 code implementation 15 May 2023 Mark Endo, Joy Hsu, Jiaman Li, Jiajun Wu

In order to build artificial intelligence systems that can perceive and reason with human behavior in the real world, we must first design models that conduct complex spatio-temporal reasoning over motion sequences.

Attribute, Question Answering

An Extensible Multimodal Multi-task Object Dataset with Materials

no code implementations 29 Apr 2023 Trevor Standley, Ruohan Gao, Dawn Chen, Jiajun Wu, Silvio Savarese

For example, we can train a model to predict the object category from the listing text, or the mass and price from the product listing image.

Attribute, Multi-Task Learning +1

Putting People in Their Place: Affordance-Aware Human Insertion into Scenes

1 code implementation CVPR 2023 Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh

Given a scene image with a marked region and an image of a person, we insert the person into the scene while respecting the scene affordances.

Programmatically Grounded, Compositionally Generalizable Robotic Manipulation

no code implementations 26 Apr 2023 Renhao Wang, Jiayuan Mao, Joy Hsu, Hang Zhao, Jiajun Wu, Yang Gao

Robots operating in the real world require both rich manipulation skills as well as the ability to semantically reason about when to apply those skills.

Imitation Learning

A Control-Centric Benchmark for Video Prediction

1 code implementation 26 Apr 2023 Stephen Tian, Chelsea Finn, Jiajun Wu

Video is a promising source of knowledge for embodied agents to learn models of the world's dynamics.

Video Prediction

Partial-View Object View Synthesis via Filtered Inversion

no code implementations 3 Apr 2023 Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.


CIRCLE: Capture In Rich Contextual Environments

1 code implementation CVPR 2023 Joao Pedro Araujo, Jiaman Li, Karthik Vetrivel, Rishi Agarwal, Deepak Gopinath, Jiajun Wu, Alexander Clegg, C. Karen Liu

Leveraging our dataset, the model learns to use ego-centric scene information to achieve nontrivial reaching tasks in the context of complex 3D scenes.

Sparse and Local Networks for Hypergraph Reasoning

no code implementations 9 Mar 2023 Guangxuan Xiao, Leslie Pack Kaelbling, Jiajun Wu, Jiayuan Mao

Reasoning about the relationships between entities from input facts (e.g., whether Ari is a grandparent of Charlie) generally requires explicit consideration of other entities that are not mentioned in the query (e.g., the parents of Charlie).

Knowledge Graphs, World Knowledge
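The grandparent example in this entry is a relational join with an existential quantifier over the unmentioned intermediate entity. With dense boolean matrices standing in for the paper's sparse tensors (and "Bo" as a hypothetical intermediate person, beyond the Ari and Charlie named in the snippet):

```python
import numpy as np

# Toy facts over three people: 0 = Ari, 1 = Bo, 2 = Charlie.
parent = np.zeros((3, 3), dtype=bool)   # parent[i, j]: i is a parent of j
parent[0, 1] = True   # Ari is a parent of Bo
parent[1, 2] = True   # Bo is a parent of Charlie

# grandparent(i, k) = exists j: parent(i, j) and parent(j, k).
# Existential quantification over the intermediate j is a matrix product.
grandparent = (parent.astype(int) @ parent.astype(int)) > 0
```

Sparse representations make the same product tractable when the domain has many entities but few true facts.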

Learning Rational Subgoals from Demonstrations and Instructions

no code implementations 9 Mar 2023 Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals.

FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks

no code implementations 14 Feb 2023 Jiajun Wu, Steve Drew, Jiayu Zhou

One major challenge preventing the wide adoption of FL in IoT is the pervasive power supply constraints of IoT devices due to the intensive energy consumption of battery-powered clients for local training and model updates.

Federated Learning, Privacy Preserving

VQ3D: Learning a 3D-Aware Generative Model on ImageNet

no code implementations ICCV 2023 Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, Deqing Sun

Recent work has shown the possibility of training generative models of 3D content from 2D image collections on small datasets corresponding to a single object class, such as human faces, animal faces, or cars.

Decoder, Position

Topology-aware Federated Learning in Edge Computing: A Comprehensive Survey

no code implementations 6 Feb 2023 Jiajun Wu, Steve Drew, Fan Dong, Zhuangdi Zhu, Jiayu Zhou

The ultra-low latency requirements of 5G/6G applications and privacy constraints call for distributed machine learning systems to be deployed at the edge.

Edge-computing, Federated Learning

Learning Vortex Dynamics for Fluid Inference and Prediction

no code implementations 27 Jan 2023 Yitong Deng, Hong-Xing Yu, Jiajun Wu, Bo Zhu

We propose a novel differentiable vortex particle (DVP) method to infer and predict fluid dynamics from a single video.

Future prediction

Seeing a Rose in Five Thousand Ways

no code implementations CVPR 2023 Yunzhi Zhang, Shangzhe Wu, Noah Snavely, Jiajun Wu

These instances all share the same intrinsics, but appear different due to a combination of variance within these intrinsics and differences in extrinsic factors, such as pose and illumination.

Image Generation, Intrinsic Image Decomposition +1

Ego-Body Pose Estimation via Ego-Head Pose Estimation

1 code implementation CVPR 2023 Jiaman Li, C. Karen Liu, Jiajun Wu

In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments.

Benchmarking, Disentanglement +1

Physically Plausible Animation of Human Upper Body from a Single Image

no code implementations 9 Dec 2022 Ziyuan Huang, Zhengping Zhou, Yung-Yu Chuang, Jiajun Wu, C. Karen Liu

We present a new method for generating controllable, dynamically responsive, and photorealistic human animations.

E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance

no code implementations 5 Dec 2022 Can Chang, Ni Mu, Jiajun Wu, Ling Pan, Huazhe Xu

Specifically, we introduce Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance (E-MAPP), a novel framework that leverages parallel programs to guide multiple agents to efficiently accomplish goals that require planning over 10+ stages.

Multi-agent Reinforcement Learning, reinforcement-learning +2

Geoclidean: Few-Shot Generalization in Euclidean Geometry

1 code implementation 30 Nov 2022 Joy Hsu, Jiajun Wu, Noah D. Goodman

In contrast, low-level and high-level visual features from standard computer vision models pretrained on natural images do not support correct generalization.


STAP: Sequencing Task-Agnostic Policies

no code implementations 21 Oct 2022 Christopher Agia, Toki Migimatsu, Jiajun Wu, Jeannette Bohg

We further demonstrate how STAP can be used for task and motion planning by estimating the geometric feasibility of skill sequences provided by a task planner.

Motion Planning, Task and Motion Planning

Differentiable Physics Simulation of Dynamics-Augmented Neural Objects

no code implementations 17 Oct 2022 Simon Le Cleac'h, Hong-Xing Yu, Michelle Guo, Taylor A. Howell, Ruohan Gao, Jiajun Wu, Zachary Manchester, Mac Schwager

A robot can use this simulation to optimize grasps and manipulation trajectories of neural objects, or to improve the neural object models through gradient-based real-to-simulation transfer.

Friction, Object

Translating a Visual LEGO Manual to a Machine-Executable Plan

no code implementations 25 Jul 2022 Ruocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, Jiajun Wu

We study the problem of translating an image-based, step-by-step assembly manual created by human designers into machine-interpretable instructions.

3D Pose Estimation, Keypoint Detection

Programmatic Concept Learning for Human Motion Description and Synthesis

no code implementations CVPR 2022 Sumith Kulal, Jiayuan Mao, Alex Aiken, Jiajun Wu

We introduce Programmatic Motion Concepts, a hierarchical motion representation for human actions that captures both low-level motion and high-level description as motion concepts.

MaskViT: Masked Visual Pre-Training for Video Prediction

no code implementations 23 Jun 2022 Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, Li Fei-Fei

This work shows that we can create good video prediction models by pre-training transformers via masked visual modeling.

Scheduling, Video Prediction

BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents

no code implementations 13 Jun 2022 Ziang Liu, Roberto Martín-Martín, Fei Xia, Jiajun Wu, Li Fei-Fei

Robots excel in performing repetitive and precision-sensitive tasks in controlled environments such as warehouses and factories, but have not been yet extended to embodied AI agents providing assistance in household tasks.


Unsupervised Discovery and Composition of Object Light Fields

no code implementations 8 May 2022 Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann

Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding.

Novel View Synthesis, Object +1

RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks

no code implementations 5 May 2022 Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu

Our learned model-based planning framework is comparable to and sometimes better than human subjects on the tested tasks.

Model Predictive Control

Video Extrapolation in Space and Time

no code implementations 4 May 2022 Yunzhi Zhang, Jiajun Wu

Novel view synthesis (NVS) and video prediction (VP) are typically considered disjoint tasks in computer vision.

Novel View Synthesis, Video Prediction

Rotationally Equivariant 3D Object Detection

no code implementations CVPR 2022 Hong-Xing Yu, Jiajun Wu, Li Yi

To incorporate object-level rotation equivariance into 3D object detectors, we need a mechanism to extract equivariant features with local object-level spatial support while being able to model cross-object context information.

3D Object Detection, Autonomous Driving +2

ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer

1 code implementation CVPR 2022 Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu

We present ObjectFolder 2.0, a large-scale, multisensory dataset of common household objects in the form of implicit neural representations that significantly enhances ObjectFolder 1.0 in three aspects.


Vision-Based Manipulators Need to Also See from Their Hands

no code implementations ICLR 2022 Kyle Hsu, Moo Jin Kim, Rafael Rafailov, Jiajun Wu, Chelsea Finn

We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.

Out-of-Distribution Generalization

Grammar-Based Grounded Lexicon Learning

no code implementations NeurIPS 2021 Jiayuan Mao, Haoyue Shi, Jiajun Wu, Roger P. Levy, Joshua B. Tenenbaum

We present Grammar-Based Grounded Lexicon Learning (G2L2), a lexicalist approach toward learning a compositional and grounded meaning representation of language from grounded data, such as paired images and texts.

Network Embedding, Sentence +1

Universal Controllers with Differentiable Physics for Online System Identification

no code implementations 29 Sep 2021 Michelle Guo, Wenhao Yu, Daniel Ho, Jiajun Wu, Yunfei Bai, Karen Liu, Wenlong Lu

In addition, we perform two studies showing that UC-DiffOSI operates well in environments with changing or unknown dynamics.

Domain Adaptation

Learning Rational Skills for Planning from Demonstrations and Instructions

no code implementations 29 Sep 2021 Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling

We present a framework for learning compositional, rational skill models (RatSkills) that support efficient planning and inverse planning for achieving novel goals and recognizing activities.

Efficient Training and Inference of Hypergraph Reasoning Networks

no code implementations 29 Sep 2021 Guangxuan Xiao, Leslie Pack Kaelbling, Jiajun Wu, Jiayuan Mao

To leverage the sparsity in hypergraph neural networks, SpaLoc represents the grounding of relationships such as parent and grandparent as sparse tensors and uses neural networks and finite-domain quantification operations to infer new facts based on the input.

Knowledge Graphs, Logical Reasoning +1

On the Opportunities and Risks of Foundation Models

2 code implementations 16 Aug 2021 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments

no code implementations 6 Aug 2021 Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei

We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation.

iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks

1 code implementation6 Aug 2021 Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese

We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI.

Imitation Learning

SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations

1 code implementation ICLR 2022 Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon

The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image.

Denoising Image Generation

Unsupervised Discovery of Object Radiance Fields

1 code implementation ICLR 2022 Hong-Xing Yu, Leonidas J. Guibas, Jiajun Wu

We study the problem of inferring an object-centric scene representation from a single image, aiming to derive a representation that explains the image formation process, captures the scene's 3D nature, and is learned without supervision.

Novel View Synthesis Object +1

Temporal and Object Quantification Networks

no code implementations10 Jun 2021 Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer D. Ullman

We present Temporal and Object Quantification Networks (TOQ-Nets), a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events.

Object Temporal Sequences

Hierarchical Motion Understanding via Motion Programs

no code implementations CVPR 2021 Sumith Kulal, Jiayuan Mao, Alex Aiken, Jiajun Wu

We posit that adding higher-level motion primitives, which can capture natural coarser units of motion such as backswing or follow-through, can be used to improve downstream analysis tasks.

Video Editing Video Prediction

De-rendering the World's Revolutionary Artefacts

1 code implementation CVPR 2021 Shangzhe Wu, Ameesh Makadia, Jiajun Wu, Noah Snavely, Richard Tucker, Angjoo Kanazawa

Recent works have shown exciting results in unsupervised image de-rendering -- learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision.

Repopulating Street Scenes

no code implementations CVPR 2021 Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless, Steven M. Seitz, Noah Snavely

We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.

Autonomous Driving

Learning Temporal Dynamics from Cycles in Narrated Video

no code implementations ICCV 2021 Dave Epstein, Jiajun Wu, Cordelia Schmid, Chen Sun

Learning to model how the world changes as time elapses has proven a challenging problem for the computer vision community.

Temporal and Object Quantification Nets

no code implementations1 Jan 2021 Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer Ullman

We aim to learn generalizable representations for complex activities by quantifying over both entities and time, as in “the kicker is behind all the other players,” or “the player controls the ball until it moves toward the goal.” Such a structural inductive bias of object relations, object quantification, and temporal orders will enable the learned representation to generalize to situations with varying numbers of agents, objects, and time courses.

Event Detection Inductive Bias +1

Object-Centric Diagnosis of Visual Reasoning

no code implementations21 Dec 2020 Jianwei Yang, Jiayuan Mao, Jiajun Wu, Devi Parikh, David D. Cox, Joshua B. Tenenbaum, Chuang Gan

In contrast, symbolic and modular models have a relatively better grounding and robustness, though at the cost of accuracy.

Object Question Answering +2

Neural Radiance Flow for 4D View Synthesis and Video Processing

1 code implementation ICCV 2021 Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu

We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.

Image Super-Resolution Temporal View Synthesis

Object-Centric Neural Scene Rendering

no code implementations15 Dec 2020 Michelle Guo, Alireza Fathi, Jiajun Wu, Thomas Funkhouser

We present a method for composing photorealistic scenes from captured images of objects.


Multi-Plane Program Induction with 3D Box Priors

no code implementations NeurIPS 2020 Yikai Li, Jiayuan Mao, Xiuming Zhang, William T. Freeman, Joshua B. Tenenbaum, Noah Snavely, Jiajun Wu

We consider two important aspects in understanding and editing images: modeling regular, program-like texture or patterns in 2D planes, and 3D posing of these planes in the scene.

Program induction Program Synthesis

Learning 3D Dynamic Scene Representations for Robot Manipulation

2 code implementations3 Nov 2020 Zhenjia Xu, Zhanpeng He, Jiajun Wu, Shuran Song

3D scene representation for robot manipulation should capture three key object properties: permanency -- objects that become occluded over time continue to exist; amodal completeness -- objects have 3D occupancy, even if only partial observations are available; spatiotemporal continuity -- the movement of each object is continuous over space and time.

Model Predictive Control Robot Manipulation

Multi-Frame to Single-Frame: Knowledge Distillation for 3D Object Detection

no code implementations24 Sep 2020 Yue Wang, Alireza Fathi, Jiajun Wu, Thomas Funkhouser, Justin Solomon

A common dilemma in 3D object detection for autonomous driving is that high-quality, dense point clouds are only available during training, but not testing.

3D Object Detection Autonomous Driving +3
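As a rough illustration of the teacher-student idea this abstract refers to (a dense-input teacher distilled into a sparse-input student), here is a minimal Hinton-style distillation loss in NumPy. The logits, label, and hyperparameters are made up for illustration; this is not the paper's actual objective or code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, T=2.0):
    """Hard-label cross-entropy plus a temperature-softened KL term
    pulling the student's distribution toward the teacher's."""
    p_s = softmax(student_logits)
    hard = -np.log(p_s[label])                  # cross-entropy with the true label
    p_t_soft = softmax(teacher_logits, T)
    p_s_soft = softmax(student_logits, T)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    soft = (T ** 2) * np.sum(p_t_soft * (np.log(p_t_soft) - np.log(p_s_soft)))
    return alpha * hard + (1 - alpha) * soft

teacher = np.array([4.0, 1.0, 0.5])   # e.g., a model seeing dense point clouds
student = np.array([2.5, 1.5, 1.0])   # e.g., a model seeing sparse input
print(distillation_loss(student, teacher, label=0))
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label term remains.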

End-to-End Optimization of Scene Layout

1 code implementation CVPR 2020 Andrew Luo, Zhoutong Zhang, Jiajun Wu, Joshua B. Tenenbaum

Experiments suggest that our model achieves higher accuracy and diversity in conditional scene synthesis and allows exemplar-based scene generation from various input forms.

Indoor Scene Reconstruction Indoor Scene Synthesis +2

Learning Physical Graph Representations from Visual Scenes

1 code implementation NeurIPS 2020 Daniel M. Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li Fei-Fei, Jiajun Wu, Joshua B. Tenenbaum, Daniel L. K. Yamins

To overcome these limitations, we introduce the idea of Physical Scene Graphs (PSGs), which represent scenes as hierarchical graphs, with nodes in the hierarchy corresponding intuitively to object parts at different scales, and edges to physical connections between parts.

Object Object Categorization +1
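The hierarchical-graph structure described above can be illustrated with a toy scene graph: nodes carry a coarse spatial attribute, and children refine a node into smaller parts. The names, attributes, and two-level hierarchy here are hypothetical; this is not the paper's PSGNet model.

```python
from dataclasses import dataclass, field

@dataclass
class PSGNode:
    """A node in a toy physical scene graph: an object part with a
    coarse spatial attribute and children at finer scales."""
    name: str
    centroid: tuple
    children: list = field(default_factory=list)

def leaf_parts(node):
    """Collect the names of the finest-scale parts under a node."""
    if not node.children:
        return [node.name]
    parts = []
    for child in node.children:
        parts += leaf_parts(child)
    return parts

# Hypothetical two-level hierarchy for a table: scene node -> parts.
leg = PSGNode("leg", (0.1, 0.0))
top = PSGNode("top", (0.0, 0.5))
table = PSGNode("table", (0.0, 0.3), children=[leg, top])
print(leaf_parts(table))  # ['leg', 'top']
```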

When is Particle Filtering Efficient for Planning in Partially Observed Linear Dynamical Systems?

no code implementations10 Jun 2020 Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu

Though errors in past actions may affect the future, we are able to bound the number of particles needed so that the long-run reward of the policy based on particle filtering is close to that based on exact inference.

Decision Making
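The setting analyzed above can be illustrated with a toy bootstrap particle filter tracking the hidden state of a hypothetical 1-D linear-Gaussian system; the dynamics, noise levels, and particle count are all made up and this is not the paper's analysis or code.

```python
import numpy as np

def particle_filter_step(particles, action, observation, rng,
                         process_noise=0.1, obs_noise=0.5):
    """One bootstrap particle-filter update for the toy system
    x' = x + action + w,  y = x' + v  (Gaussian w, v)."""
    # Propagate each particle through the assumed dynamics.
    particles = particles + action + rng.normal(0, process_noise, particles.shape)
    # Weight particles by the Gaussian observation likelihood.
    weights = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample to concentrate particles on likely states.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, 1000)   # prior over the hidden state
true_state = 0.0
for t in range(20):
    action = 0.5
    true_state += action
    observation = true_state + rng.normal(0, 0.5)
    particles = particle_filter_step(particles, action, observation, rng)
print(particles.mean())   # posterior mean, close to the true state
```

A planner in this setting would choose actions based on the particle approximation of the belief rather than the exact posterior, which is what the bound in the paper concerns.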

Improving Learning Efficiency for Wireless Resource Allocation with Symmetric Prior

no code implementations18 May 2020 Chengjian Sun, Jiajun Wu, Chenyang Yang

The samples required to train a DNN after ranking can be reduced by $15 \sim 2,400$ fold to achieve the same system performance as the counterpart without using the prior.

Deep Audio Priors Emerge From Harmonic Convolutional Networks

no code implementations ICLR 2020 Zhoutong Zhang, Yunyun Wang, Chuang Gan, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba, William T. Freeman

We show that networks using Harmonic Convolution can reliably model audio priors and achieve high performance in unsupervised audio restoration tasks.

Visual Grounding of Learned Physical Models

1 code implementation ICML 2020 Yunzhu Li, Toru Lin, Kexin Yi, Daniel M. Bear, Daniel L. K. Yamins, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba

The abilities to perform physical reasoning and to adapt to new environments, while intrinsic to humans, remain challenging to state-of-the-art computational models.

Visual Grounding

Visual Concept-Metaconcept Learning

1 code implementation NeurIPS 2019 Chi Han, Jiayuan Mao, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu

Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color).

Look, Listen, and Act: Towards Audio-Visual Embodied Navigation

1 code implementation25 Dec 2019 Chuang Gan, Yiwei Zhang, Jiajun Wu, Boqing Gong, Joshua B. Tenenbaum

In this paper, we attempt to approach the problem of Audio-Visual Embodied Navigation, the task of planning the shortest path from a random starting location in a scene to the sound source in an indoor environment, given only raw egocentric visual and audio sensory data.


Modeling Expectation Violation in Intuitive Physics with Coarse Probabilistic Object Representations

1 code implementation NeurIPS 2019 Kevin Smith, Lingjie Mei, Shunyu Yao, Jiajun Wu, Elizabeth Spelke, Josh Tenenbaum, Tomer Ullman

We also present a new test set for measuring violations of physical expectations, using a range of scenarios derived from developmental psychology.

Scene Understanding

Proactive Optimization with Machine Learning: Femto-caching with Future Content Popularity

no code implementations29 Oct 2019 Jiajun Wu, Chengjian Sun, Chenyang Yang

In this paper, we introduce a proactive optimization framework for anticipatory resource allocation, where the future information is implicitly predicted under the same objective with the policy optimization in a single step.

BIG-bench Machine Learning Stochastic Optimization

Entity Abstraction in Visual Model-Based Reinforcement Learning

1 code implementation28 Oct 2019 Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine

This paper tests the hypothesis that modeling a scene in terms of entities and their local interactions, as opposed to modeling the scene globally, provides a significant benefit in generalizing to physical tasks in a combinatorial space the learner has not encountered before.

Model-based Reinforcement Learning Object +5

Learning Compositional Koopman Operators for Model-Based Control

no code implementations ICLR 2020 Yunzhu Li, Hao He, Jiajun Wu, Dina Katabi, Antonio Torralba

Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis.
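The core Koopman idea in this abstract can be sketched by lifting states through a fixed dictionary of observables and fitting a linear operator by least squares. The scalar dynamics and two-function dictionary below are illustrative toys, not the paper's learned, compositional embedding.

```python
import numpy as np

def fit_koopman(states, lift):
    """Least-squares fit of a linear operator K such that
    lift(x_{t+1}) ≈ K @ lift(x_t), where `lift` maps a state
    to a vector of observables."""
    Phi = np.array([lift(x) for x in states[:-1]]).T       # observables at t
    Phi_next = np.array([lift(x) for x in states[1:]]).T   # observables at t+1
    # Solve Phi.T @ K.T = Phi_next.T in the least-squares sense.
    Kt, *_ = np.linalg.lstsq(Phi.T, Phi_next.T, rcond=None)
    return Kt.T

lift = lambda x: np.array([x, x ** 2])   # hand-picked observable dictionary
traj = [1.0]
for _ in range(10):                      # data from the toy system x' = 0.8 x
    traj.append(0.8 * traj[-1])
K = fit_koopman(traj, lift)
# In the lifted space the dynamics are exactly linear: K ≈ diag(0.8, 0.64),
# since x' = 0.8 x implies (x')^2 = 0.64 x^2.
print(K)
```

Once such a K is found, prediction and control synthesis reduce to linear-systems machinery in the lifted space.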

CLEVRER: CoLlision Events for Video REpresentation and Reasoning

3 code implementations ICLR 2020 Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum

While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations.

counterfactual Descriptive +1

DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs

1 code implementation28 Sep 2019 Yunbo Wang, Bo Liu, Jiajun Wu, Yuke Zhu, Simon S. Du, Li Fei-Fei, Joshua B. Tenenbaum

A major difficulty of solving continuous POMDPs is to infer the multi-modal distribution of the unobserved true states and to make the planning algorithm dependent on the perceived uncertainty.

Continuous Control

Program-Guided Image Manipulators

no code implementations ICCV 2019 Jiayuan Mao, Xiuming Zhang, Yikai Li, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

Humans are capable of building holistic representations for images at various levels, from local objects, to pairwise relations, to global structures.

Image Inpainting

Neurally-Guided Structure Inference

no code implementations17 Jun 2019 Sidi Lu, Jiayuan Mao, Joshua B. Tenenbaum, Jiajun Wu

In this paper, we propose a hybrid inference algorithm, the Neurally-Guided Structure Inference (NG-SI), keeping the advantages of both search-based and data-driven methods.

Predicting the Present and Future States of Multi-agent Systems from Partially-observed Visual Data

no code implementations ICLR 2019 Chen Sun, Per Karlsson, Jiajun Wu, Joshua B. Tenenbaum, Kevin Murphy

We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents.

Learning to Describe Scenes with Programs

no code implementations ICLR 2019 Yunchao Liu, Zheng Wu, Daniel Ritchie, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

We are able to understand the higher-level, abstract regularities within the scene such as symmetry and repetition.

Modeling Parts, Structure, and System Dynamics via Predictive Learning

no code implementations ICLR 2019 Zhenjia Xu, Zhijian Liu, Chen Sun, Kevin Murphy, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future.


Combining Physical Simulators and Object-Based Networks for Control

no code implementations13 Apr 2019 Anurag Ajay, Maria Bauza, Jiajun Wu, Nima Fazeli, Joshua B. Tenenbaum, Alberto Rodriguez, Leslie P. Kaelbling

Physics engines play an important role in robot planning and control; however, many real-world control problems involve complex contact dynamics that cannot be characterized analytically.


Unsupervised Discovery of Parts, Structure, and Dynamics

no code implementations12 Mar 2019 Zhenjia Xu, Zhijian Liu, Chen Sun, Kevin Murphy, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future.


Stochastic Prediction of Multi-Agent Interactions from Partial Observations

no code implementations25 Feb 2019 Chen Sun, Per Karlsson, Jiajun Wu, Joshua B. Tenenbaum, Kevin Murphy

We present a method that learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents.

Learning to Infer and Execute 3D Shape Programs

no code implementations ICLR 2019 Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.

Learning to Reconstruct Shapes from Unseen Classes

no code implementations NeurIPS 2018 Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Joshua B. Tenenbaum, William T. Freeman, Jiajun Wu

From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life.

3D Reconstruction