no code implementations • 27 Nov 2024 • Libin Liu, Shen Chen, Sen Jia, Jingzhe Shi, Zhongyu Jiang, Can Jin, Wu Zongkai, Jenq-Neng Hwang, Lei LI
Spatial intelligence is foundational to AI systems that interact with the physical world, particularly in 3D scene generation and spatial comprehension.
1 code implementation • 16 May 2024 • Zeyi Zhang, Tenglong Ao, Yuyao Zhang, Qingzhe Gao, Chuan Lin, Baoquan Chen, Libin Liu
In this work, we present Semantic Gesticulator, a novel framework designed to synthesize realistic gestures accompanying speech with strong semantic correspondence.
no code implementations • 18 Mar 2024 • Tingyang Zhang, Qingzhe Gao, Weiyu Li, Libin Liu, Baoquan Chen
In this work, we propose a method to build animatable 3D Gaussian Splatting from monocular video with diffusion priors.
no code implementations • 1 Nov 2023 • Wenyang Hu, Kai Liu, Libin Liu, Huiliang Shang
Human pose assessment and correction play a crucial role in applications across various fields, including computer vision, robotics, sports analysis, healthcare, and entertainment.
no code implementations • 16 Oct 2023 • Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu
In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations.
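The "scalable discrete representations" point to a vector-quantized motion latent space. Below is a minimal, illustrative sketch of such a codebook lookup in PyTorch; the class name, codebook size, and feature dimensions are assumptions for the sketch, not MoConVQ's actual implementation.

```python
import torch
import torch.nn as nn

class MotionCodebook(nn.Module):
    """Minimal vector-quantization layer: snaps continuous motion features
    to their nearest codebook entries (sizes here are illustrative)."""

    def __init__(self, num_codes: int = 512, code_dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, frames, code_dim) continuous features from a motion encoder
        flat = z.reshape(-1, z.size(-1))
        dist = torch.cdist(flat, self.codebook.weight)   # distance to every code
        indices = dist.argmin(dim=-1)                    # nearest code per frame
        z_q = self.codebook(indices).view_as(z)          # quantized features
        # Straight-through estimator so gradients still reach the encoder.
        return z + (z_q - z).detach()
```

The straight-through trick lets encoder gradients bypass the non-differentiable nearest-neighbor lookup, which is what makes discrete motion tokens trainable end to end.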
no code implementations • 29 Mar 2023 • Bin Feng, Tenglong Ao, Zequn Liu, Wei Ju, Libin Liu, Ming Zhang
Automatically synthesizing natural-looking dance movements from a piece of music is an increasingly popular yet challenging task.
1 code implementation • 26 Mar 2023 • Tenglong Ao, Zeyi Zhang, Libin Liu
We leverage the large-scale Contrastive Language-Image Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video.
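As an illustration of how CLIP embeddings can serve as cross-modal style conditioning, the sketch below extracts unit-normalized text and image features with the Hugging Face CLIP API. The checkpoint choice and the normalization step are assumptions; the paper's own style encoder is not reproduced here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint used purely for illustration.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def text_style_embedding(prompt: str) -> torch.Tensor:
    """Embed a style description such as 'an angry, exaggerated speaker'."""
    inputs = processor(text=[prompt], return_tensors="pt", padding=True)
    with torch.no_grad():
        feat = model.get_text_features(**inputs)
    return feat / feat.norm(dim=-1, keepdim=True)   # unit-normalized style vector

def image_style_embedding(frame: Image.Image) -> torch.Tensor:
    """Embed a single video frame as an image-based style cue."""
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        feat = model.get_image_features(**inputs)
    return feat / feat.norm(dim=-1, keepdim=True)
```

Because CLIP places text and images in a shared embedding space, either vector could be fed to the same downstream style-conditioning pathway.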
no code implementations • 18 Feb 2023 • Jingzong Li, Yik Hong Cai, Libin Liu, Yu Mao, Chun Jason Xue, Hong Xu
3D object detection plays a pivotal role in many applications, most notably autonomous driving and robotics.
no code implementations • 12 Oct 2022 • Heyuan Yao, Zhenhua Song, Baoquan Chen, Libin Liu
Our framework learns a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse set of unorganized motion sequences. This enables the generation of realistic human behaviors by sampling in the latent space and allows high-level control policies to reuse the learned skills to accomplish a variety of downstream tasks.
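A skill-conditioned control policy can be pictured as a network that consumes the character state together with a sampled skill latent. The sketch below is a generic illustration of that interface; all dimensions, the architecture, and the Gaussian sampling are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    """Illustrative policy mapping (state, skill latent) to an action."""

    def __init__(self, state_dim: int = 128, skill_dim: int = 64, action_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, state: torch.Tensor, skill: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, skill], dim=-1))

# Sample a skill from a (hypothetical) latent prior and query the policy
# for the current character state; the action could be PD targets or torques.
policy = SkillConditionedPolicy()
state = torch.randn(1, 128)
skill = torch.randn(1, 64)     # e.g. a draw from a unit-Gaussian skill prior
action = policy(state, skill)
```

A high-level controller would then act only in the skill space, leaving low-level physics control to the pretrained policy.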
1 code implementation • ICCV 2023 • Wentao Zhu, Xiaoxuan Ma, Zhaoyang Liu, Libin Liu, Wayne Wu, Yizhou Wang
We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.
Ranked #1 on Monocular 3D Human Pose Estimation on Human3.6M (using extra training data)
1 code implementation • 4 Oct 2022 • Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, Libin Liu
We present a novel co-speech gesture synthesis method that achieves convincing results both on the rhythm and semantics.
Ranked #2 on Gesture Generation on TED Gesture Dataset
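The rhythm side of the co-speech gesture synthesis above generally begins with audio beat analysis. As a hedged illustration (not the paper's pipeline), the snippet below estimates beat times with librosa, to which gesture strokes could be aligned; the file name and sample rate are placeholders.

```python
import librosa

# Load the speech audio and estimate beat times (illustrative parameters).
audio, sr = librosa.load("speech.wav", sr=16000)
tempo, beat_frames = librosa.beat.beat_track(y=audio, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Gesture strokes could then be aligned to (a subset of) these beat times,
# e.g. by warping generated motion so stroke apexes land on beats.
print(f"Estimated tempo: {float(tempo):.1f} BPM, {len(beat_times)} beats")
```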
no code implementations • 25 Aug 2022 • Yiming Wang, Qingzhe Gao, Libin Liu, Lingjie Liu, Christian Theobalt, Baoquan Chen
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
1 code implementation • 10 Jun 2021 • Qingzhe Gao, Bin Wang, Libin Liu, Baoquan Chen
Co-part segmentation is an important problem in computer vision with rich applications.
1 code implementation • 6 May 2021 • Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, Baoquan Chen
Furthermore, we propose neural blend shapes: a set of corrective, pose-dependent shapes that improve deformation quality in the joint regions and address the notorious artifacts resulting from standard rigging and skinning.
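To make the idea concrete, here is a minimal sketch of linear blend skinning with a pose-dependent corrective offset predicted by a small MLP and applied before skinning. The network architecture and tensor shapes are illustrative assumptions, not the paper's neural blend shapes model.

```python
import torch
import torch.nn as nn

def linear_blend_skinning(rest_verts, skin_weights, joint_transforms):
    """Standard LBS: blend per-joint rigid transforms by skinning weights.
    rest_verts: (V, 3), skin_weights: (V, J), joint_transforms: (J, 4, 4)."""
    V = rest_verts.shape[0]
    homo = torch.cat([rest_verts, torch.ones(V, 1)], dim=-1)          # (V, 4)
    per_joint = torch.einsum("jab,vb->vja", joint_transforms, homo)   # (V, J, 4)
    blended = torch.einsum("vj,vja->va", skin_weights, per_joint)     # (V, 4)
    return blended[:, :3]

class CorrectiveBlendShapes(nn.Module):
    """Illustrative pose-dependent corrective offsets added in rest pose."""

    def __init__(self, num_verts: int, pose_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_verts * 3),
        )

    def forward(self, rest_verts, pose):
        # pose: (pose_dim,) feature vector describing the current joint rotations
        offsets = self.mlp(pose).view(-1, 3)   # per-vertex corrective offsets
        return rest_verts + offsets
```

Applying the corrective offsets before skinning is what lets the same rigid joint transforms produce cleaner deformations around elbows, shoulders, and other problem regions.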