Search Results for author: Zeqi Xiao

Found 6 papers, 3 papers with code

Trajectory Attention for Fine-grained Video Motion Control

no code implementations • 28 Nov 2024 • Zeqi Xiao, Wenqi Ouyang, Yifan Zhou, Shuai Yang, Lei Yang, Jianlou Si, Xingang Pan

This paper introduces trajectory attention, a novel approach that performs attention along available pixel trajectories for fine-grained camera motion control.

Inductive Bias • Video Editing • +1
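No code is available for this paper, so the following is only a rough NumPy sketch of the stated idea (attention computed along pixel trajectories rather than over the full frame). The function name, tensor shapes, and the per-track self-attention formulation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def trajectory_attention(features, trajectories):
    """Illustrative sketch: for each trajectory, gather the feature at its
    (row, col) position in every frame, then mix those frames with
    scaled dot-product self-attention along the trajectory.

    features:     float array [T, H, W, C]  -- per-frame feature maps
    trajectories: int array   [N, T, 2]     -- (row, col) per frame for N tracks
    returns:      float array [N, T, C]     -- trajectory-refined features
    """
    T, H, W, C = features.shape
    N = trajectories.shape[0]
    out = np.empty((N, T, C), dtype=features.dtype)
    for n in range(N):
        # Gather the feature vector at each frame's trajectory point: [T, C]
        rows, cols = trajectories[n, :, 0], trajectories[n, :, 1]
        seq = features[np.arange(T), rows, cols]
        # Scaled dot-product self-attention over the T frames of this track
        scores = seq @ seq.T / np.sqrt(C)                  # [T, T]
        scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)     # rows sum to 1
        out[n] = weights @ seq                             # [T, C]
    return out
```

Restricting attention to trajectory-sampled features is what makes the motion control "fine-grained": each output token only mixes information from frames along its own track.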

CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics

no code implementations • 20 Jun 2024 • Jiawei Gao, Ziqin Wang, Zeqi Xiao, Jingbo Wang, Tai Wang, Jinkun Cao, Xiaolin Hu, Si Liu, Jifeng Dai, Jiangmiao Pang

Given the scarcity of motion capture data on multi-humanoid collaboration and the efficiency challenges associated with multi-agent learning, these tasks cannot be straightforwardly addressed using training paradigms designed for single-agent scenarios.

Human-Object Interaction Detection • Humanoid Control • +2

Video Diffusion Models are Training-free Motion Interpreter and Controller

no code implementations • 23 May 2024 • Zeqi Xiao, Yifan Zhou, Shuai Yang, Xingang Pan

MOFT provides a distinct set of benefits, including the ability to encode comprehensive motion information with clear interpretability, extraction without the need for training, and generalizability across diverse architectures.

Video Generation

An Empirical Study of Training State-of-the-Art LiDAR Segmentation Models

1 code implementation • 23 May 2024 • Jiahao Sun, Chunmei Qing, Xiang Xu, Lingdong Kong, Youquan Liu, Li Li, Chenming Zhu, Jingwei Zhang, Zeqi Xiao, Runnan Chen, Tai Wang, Wenwei Zhang, Kai Chen

In the rapidly evolving field of autonomous driving, precise segmentation of LiDAR data is crucial for understanding complex 3D environments.

Autonomous Driving • Benchmarking • +3

Unified Human-Scene Interaction via Prompted Chain-of-Contacts

1 code implementation • 14 Sep 2023 • Zeqi Xiao, Tai Wang, Jingbo Wang, Jinkun Cao, Wenwei Zhang, Bo Dai, Dahua Lin, Jiangmiao Pang

Based on this definition, UniHSI comprises a Large Language Model (LLM) Planner that translates language prompts into task plans in the form of CoC, and a Unified Controller that turns CoC into uniform task execution.

Language Modeling • Language Modelling • +1
