Search Results for author: Ziyuan Jiao

Found 10 papers, 3 papers with code

Reconstructing Interactive 3D Scenes by Panoptic Mapping and CAD Model Alignments

1 code implementation • 30 Mar 2021 • Muzhi Han, Zeyu Zhang, Ziyuan Jiao, Xu Xie, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

In this paper, we rethink the problem of scene reconstruction from an embodied agent's perspective: while the classic view focuses on reconstruction accuracy, our new perspective emphasizes the underlying functions and constraints, such that the reconstructed scenes provide actionable information for simulating interactions with agents.

Common Sense Reasoning

LLM^3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning

1 code implementation • 18 Mar 2024 • Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu Zhang, Ying Nian Wu, Song-Chun Zhu, Hangxin Liu

Through a series of simulations in a box-packing domain, we quantitatively demonstrate the effectiveness of LLM^3 in solving TAMP problems and its efficiency in selecting action parameters.

Language Modelling • Large Language Model +2
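The abstract describes an LLM proposing task-and-motion plans and revising them when motion planning fails. Below is a minimal, self-contained Python sketch of that feedback loop, purely for illustration; the names (query_llm, motion_plan, llm_tamp) and the toy feasibility check are hypothetical stand-ins, not the paper's implementation.

```python
import random

# Hypothetical stand-ins for the LLM and the motion planner; the real system
# would call an actual language model and a sampling-based motion planner.
def query_llm(task: str, feedback: str):
    """Pretend LLM: proposes a symbolic action sequence with continuous parameters."""
    return [("pick", random.random()), ("place", random.random())]

def motion_plan(action):
    """Pretend motion planner: a parameter below 0.3 stands in for an infeasible pose."""
    name, param = action
    if param < 0.3:
        return False, f"{name} failed: target pose in collision (param={param:.2f})"
    return True, ""

def llm_tamp(task: str, max_iters: int = 10):
    feedback = ""  # motion-failure reasoning fed back into the next LLM prompt
    for _ in range(max_iters):
        plan = query_llm(task, feedback)
        for action in plan:
            ok, reason = motion_plan(action)
            if not ok:
                feedback = reason  # tell the LLM why this plan was infeasible
                break
        else:
            return plan  # every action passed motion planning
    return None  # no feasible plan within the iteration budget

print(llm_tamp("pack items into the box"))
```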

Sequential Manipulation Planning on Scene Graph

1 code implementation • 10 Jul 2022 • Ziyuan Jiao, Yida Niu, Zeyu Zhang, Song-Chun Zhu, Yixin Zhu, Hangxin Liu

We devise a 3D scene graph representation, contact graph+ (cg+), for efficient sequential task planning.

Stochastic Optimization • valid
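The contact graph+ (cg+) named in the abstract is a 3D scene graph whose edges encode contact relations (e.g., support) between objects. The following toy Python sketch conveys only that general idea; the Node class and the kitchen example are invented for illustration and do not reflect the paper's actual cg+ structure or API.

```python
from dataclasses import dataclass, field
from typing import List

# Toy scene-graph node: each object may support or contain other objects.
# Illustrative only; not the paper's contact graph+ (cg+) data structure.
@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def print_tree(node: Node, depth: int = 0) -> None:
    """Walk the graph top-down, mirroring how a planner might order subgoals."""
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)

# A small kitchen scene: the table supports a tray, which holds a cup.
scene = Node("floor")
table = scene.add(Node("table"))
tray = table.add(Node("tray"))
tray.add(Node("cup"))
print_tree(scene)  # a manipulation plan must move the cup before the tray
```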

Congestion-aware Evacuation Routing using Augmented Reality Devices

no code implementations • 25 Apr 2020 • Zeyu Zhang, Hangxin Liu, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu

We present a congestion-aware routing solution for indoor evacuation, which produces real-time, individually customized evacuation routes among multiple destinations while keeping track of all evacuees' locations.

Understanding Physical Effects for Effective Tool-use

no code implementations • 30 Jun 2022 • Zeyu Zhang, Ziyuan Jiao, Weiqi Wang, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present a robot learning and planning framework that produces an effective tool-use strategy with minimal joint effort, capable of handling objects that differ from those seen in training.

Motion Planning • regression +1

A Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps

no code implementations • 14 Jan 2023 • Hangxin Liu, Zeyu Zhang, Ziyuan Jiao, Zhenliang Zhang, Minchen Li, Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu

In this work, we present a reconfigurable data glove design to capture different modes of human hand-object interactions, which are critical in training embodied artificial intelligence (AI) agents for fine manipulation tasks.

Rearrange Indoor Scenes for Human-Robot Co-Activity

no code implementations • 10 Mar 2023 • Weiqi Wang, Zihang Zhao, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present an optimization-based framework for rearranging indoor furniture to better accommodate human-robot co-activities.

Get the Ball Rolling: Alerting Autonomous Robots When to Help to Close the Healthcare Loop

no code implementations • 5 Nov 2023 • Jiaxin Shen, Yanyao Liu, ZiMing Wang, Ziyuan Jiao, Yufeng Chen, Wenjuan Han

To facilitate the advancement of research on healthcare robots that operate without human intervention or commands, we introduce the Autonomous Helping Challenge, along with a large-scale crowd-sourced dataset.

On the Emergence of Symmetrical Reality

no code implementations • 26 Jan 2024 • Zhenliang Zhang, Zeyu Zhang, Ziyuan Jiao, Yao Su, Hangxin Liu, Wei Wang, Song-Chun Zhu

Artificial intelligence (AI) has revolutionized human cognitive abilities and facilitated the development of new AI entities capable of interacting with humans in both physical and virtual environments.

Mixed Reality

Closed-Loop Open-Vocabulary Mobile Manipulation with GPT-4V

no code implementations • 16 Apr 2024 • Peiyuan Zhi, Zhiyuan Zhang, Muzhi Han, Zeyu Zhang, Zhitian Li, Ziyuan Jiao, Baoxiong Jia, Siyuan Huang

Autonomous robot navigation and manipulation in open environments require reasoning and replanning with closed-loop feedback.

Instruction Following • Multimodal Reasoning +1
