Search Results for author: Hangxin Liu

Found 11 papers, 3 papers with code

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

no code implementations20 Apr 2020 Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, Joshua B. Tenenbaum, Song-Chun Zhu

We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning.

Common Sense Reasoning · Small Data Image Classification

Congestion-aware Evacuation Routing using Augmented Reality Devices

no code implementations25 Apr 2020 Zeyu Zhang, Hangxin Liu, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu

We present a congestion-aware routing solution for indoor evacuation, which produces real-time, individually customized evacuation routes among multiple destinations while keeping track of all evacuees' locations.

Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs

no code implementations25 Apr 2020 Tao Yuan, Hangxin Liu, Lifeng Fan, Zilong Zheng, Tao Gao, Yixin Zhu, Song-Chun Zhu

Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.

Object · Object Tracking

Reconstructing Interactive 3D Scenes by Panoptic Mapping and CAD Model Alignments

1 code implementation30 Mar 2021 Muzhi Han, Zeyu Zhang, Ziyuan Jiao, Xu Xie, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

In this paper, we rethink the problem of scene reconstruction from an embodied agent's perspective: while the classic view focuses on reconstruction accuracy, our new perspective emphasizes the underlying functions and constraints such that the reconstructed scenes provide *actionable* information for simulating *interactions* with agents.

Common Sense Reasoning

Understanding Physical Effects for Effective Tool-use

no code implementations30 Jun 2022 Zeyu Zhang, Ziyuan Jiao, Weiqi Wang, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present a robot learning and planning framework that produces an effective tool-use strategy with the least joint effort, capable of handling objects different from those seen in training.

Motion Planning · regression +1

Sequential Manipulation Planning on Scene Graph

1 code implementation10 Jul 2022 Ziyuan Jiao, Yida Niu, Zeyu Zhang, Song-Chun Zhu, Yixin Zhu, Hangxin Liu

We devise a 3D scene graph representation, contact graph+ (cg+), for efficient sequential task planning.

Stochastic Optimization · valid

A Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps

no code implementations14 Jan 2023 Hangxin Liu, Zeyu Zhang, Ziyuan Jiao, Zhenliang Zhang, Minchen Li, Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu

In this work, we present a reconfigurable data glove design to capture different modes of human hand-object interactions, which are critical in training embodied artificial intelligence (AI) agents for fine manipulation tasks.

Rearrange Indoor Scenes for Human-Robot Co-Activity

no code implementations10 Mar 2023 Weiqi Wang, Zihang Zhao, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present an optimization-based framework for rearranging indoor furniture to better accommodate human-robot co-activities.

On the Emergence of Symmetrical Reality

no code implementations26 Jan 2024 Zhenliang Zhang, Zeyu Zhang, Ziyuan Jiao, Yao Su, Hangxin Liu, Wei Wang, Song-Chun Zhu

Artificial intelligence (AI) has revolutionized human cognitive abilities and facilitated the development of new AI entities capable of interacting with humans in both physical and virtual environments.

Mixed Reality

LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning

1 code implementation18 Mar 2024 Shu Wang, Muzhi Han, Ziyuan Jiao, Zeyu Zhang, Ying Nian Wu, Song-Chun Zhu, Hangxin Liu

Through a series of simulations in a box-packing domain, we quantitatively demonstrate the effectiveness of LLM^3 in solving TAMP problems and its efficiency in selecting action parameters.

Language Modelling · Large Language Model +2

Driving Animatronic Robot Facial Expression From Speech

no code implementations19 Mar 2024 Boren Li, Hang Li, Hangxin Liu

The proposed approach is capable of generating highly realistic, real-time facial expressions from speech on an animatronic face, significantly advancing robots' ability to replicate nuanced human expressions for natural interaction.

Motion Synthesis
