Search Results for author: Danfei Xu

Found 35 papers, 13 papers with code

A Survey of Optimization-based Task and Motion Planning: From Classical To Learning Approaches

no code implementations · 3 Apr 2024 · Zhigen Zhao, Shuo Cheng, Yan Ding, Ziyi Zhou, Shiqi Zhang, Danfei Xu, Ye Zhao

Task and Motion Planning (TAMP) integrates high-level task planning and low-level motion planning to equip robots with the autonomy to effectively reason over long-horizon, dynamic tasks.

Motion Planning, Task and Motion Planning

InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds

no code implementations · 29 Mar 2024 · Zhiwen Fan, Wenyan Cong, Kairun Wen, Kevin Wang, Jian Zhang, Xinghao Ding, Danfei Xu, Boris Ivanovic, Marco Pavone, Georgios Pavlakos, Zhangyang Wang, Yue Wang

This pre-processing is usually conducted via a Structure-from-Motion (SfM) pipeline, a procedure that can be slow and unreliable, particularly in sparse-view scenarios with insufficient matched features for accurate reconstruction.

Novel View Synthesis, SSIM

NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors

no code implementations · 2 Nov 2023 · Shuo Cheng, Caelan Garrett, Ajay Mandlekar, Danfei Xu

Developing intelligent robots for complex manipulation tasks in household and factory settings remains challenging due to long-horizon tasks, contact-rich manipulation, and the need to generalize across a wide variety of object shapes and scene layouts.

Motion Planning, Object +1

Human-in-the-Loop Task and Motion Planning for Imitation Learning

no code implementations · 24 Oct 2023 · Ajay Mandlekar, Caelan Garrett, Danfei Xu, Dieter Fox

Finally, we collected 2.1K demos with HITL-TAMP across 12 contact-rich, long-horizon tasks and show that the system often produces near-perfect agents.

Imitation Learning, Motion Planning +1

Learning to Discern: Imitating Heterogeneous Human Demonstrations with Preference and Representation Learning

no code implementations · 22 Oct 2023 · Sachit Kuhar, Shuo Cheng, Shivang Chopra, Matthew Bronars, Danfei Xu

Furthermore, the intrinsic heterogeneity in human behavior can produce equally successful but disparate demonstrations, further exacerbating the challenge of discerning demonstration quality.

Imitation Learning, Representation Learning

Evolutionary Curriculum Training for DRL-Based Navigation Systems

no code implementations · 15 Jun 2023 · Max Asselmeier, Zhaoyi Li, Kelin Yu, Danfei Xu

Additionally, an evolutionary training environment generates curricula that target the skills the DRL model was found to lack in the previous evaluation.

Collision Avoidance

Language-Guided Traffic Simulation via Scene-Level Diffusion

no code implementations · 10 Jun 2023 · Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, Baishakhi Ray

Realistic and controllable traffic simulation is a core capability that is necessary to accelerate autonomous vehicle (AV) development.

Language Modelling, Large Language Model

Partial-View Object View Synthesis via Filtered Inversion

no code implementations · 3 Apr 2023 · Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.


LEAGUE: Guided Skill Learning and Abstraction for Long-Horizon Manipulation

no code implementations · 23 Oct 2022 · Shuo Cheng, Danfei Xu

We also show that the learned skills can be reused to accelerate learning in new task domains and transfer to a physical robot platform.

Motion Planning, Reinforcement Learning (RL) +1

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models

no code implementations · 22 Sep 2022 · Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg

To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information.

BITS: Bi-level Imitation for Traffic Simulation

1 code implementation · 26 Aug 2022 · Danfei Xu, Yuxiao Chen, Boris Ivanovic, Marco Pavone

We empirically validate our method, named Bi-level Imitation for Traffic Simulation (BITS), with scenarios from two large-scale driving datasets and show that BITS achieves balanced traffic simulation performance in realism, diversity, and long-horizon stability.

Autonomous Vehicles

Robust Trajectory Prediction against Adversarial Attacks

no code implementations · 29 Jul 2022 · Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone

We demonstrate that our method is able to improve the performance by 46% on adversarial data and at the cost of only 3% performance degradation on clean data, compared to the model trained with clean data.

Autonomous Driving, Data Augmentation +1

Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration

1 code implementation · 13 Aug 2021 · Chen Wang, Claudia Pérez-D'Arpino, Danfei Xu, Li Fei-Fei, C. Karen Liu, Silvio Savarese

Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator.

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation

1 code implementation · 6 Aug 2021 · Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín

Based on the study, we derive a series of lessons including the sensitivity to different algorithmic design choices, the dependence on the quality of the demonstrations, and the variability based on the stopping criteria due to the different objectives in training and evaluation.

Imitation Learning, reinforcement-learning +2

Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control

no code implementations · 28 Feb 2021 · Chen Wang, Rui Wang, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Danfei Xu

Key to such capability is hand-eye coordination, a cognitive ability that enables humans to adaptively direct their movements at task-relevant objects and be invariant to the objects' absolute spatial location.

Imitation Learning, Zero-shot Generalization

Human-in-the-Loop Imitation Learning using Remote Teleoperation

no code implementations · 12 Dec 2020 · Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese

We develop a simple and effective algorithm to train the policy iteratively on new data collected by the system that encourages the policy to learn how to traverse bottlenecks through the interventions.

Imitation Learning, Robot Manipulation

Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations

no code implementations · 13 Mar 2020 · Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Silvio Savarese, Li Fei-Fei

In the second stage of GTI, we collect a small set of rollouts from the unconditioned stochastic policy of the first stage, and train a goal-directed agent to generalize to novel start and goal configurations.

Imitation Learning

Positive-Unlabeled Reward Learning

1 code implementation · 1 Nov 2019 · Danfei Xu, Misha Denil

Learning reward functions from data is a promising path towards achieving scalable Reinforcement Learning (RL) for robotics.

Imitation Learning, Reinforcement Learning (RL)

Regression Planning Networks

1 code implementation · NeurIPS 2019 · Danfei Xu, Roberto Martín-Martín, De-An Huang, Yuke Zhu, Silvio Savarese, Li Fei-Fei

Recent learning-to-plan methods have shown promising results on planning directly from observation space.


Situational Fusion of Visual Representation for Visual Navigation

no code implementations · ICCV 2019 · Bokui Shen, Danfei Xu, Yuke Zhu, Leonidas J. Guibas, Li Fei-Fei, Silvio Savarese

A complex visual navigation task puts an agent in different situations which call for a diverse range of visual perception abilities.

Visual Navigation

Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning

no code implementations · 16 Aug 2019 · De-An Huang, Danfei Xu, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei, Juan Carlos Niebles

The key technical challenge is that the symbol grounding is prone to error with limited training data and leads to subsequent symbolic planning failures.

Imitation Learning

Procedure Planning in Instructional Videos

no code implementations · ECCV 2020 · Chien-Yi Chang, De-An Huang, Danfei Xu, Ehsan Adeli, Li Fei-Fei, Juan Carlos Niebles

In this paper, we study the problem of procedure planning in instructional videos, which can be seen as a step towards enabling autonomous agents to plan for complex tasks in everyday settings such as cooking.

Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

no code implementations · CVPR 2019 · De-An Huang, Suraj Nair, Danfei Xu, Yuke Zhu, Animesh Garg, Li Fei-Fei, Silvio Savarese, Juan Carlos Niebles

We hypothesize that to successfully generalize to unseen complex tasks from a single video demonstration, it is necessary to explicitly incorporate the compositional structure of the tasks into the model.

Neural Task Programming: Learning to Generalize Across Hierarchical Tasks

1 code implementation · 4 Oct 2017 · Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, Silvio Savarese

In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction.

Few-Shot Learning, Program induction +1

Scene Graph Generation by Iterative Message Passing

5 code implementations · CVPR 2017 · Danfei Xu, Yuke Zhu, Christopher B. Choy, Li Fei-Fei

In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image.

Graph Generation, Panoptic Scene Graph Generation

Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects

no code implementations · 15 Jul 2016 · Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter Allen

A fully featured 3D model of the garment is constructed in real-time and volumetric features are then used to obtain the most similar model in the database to predict the object category and pose.

Object, Pose Estimation +1

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

14 code implementations · 2 Apr 2016 · Christopher B. Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, Silvio Savarese

Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).

3D Object Reconstruction 3D Reconstruction +1
