Search Results for author: Libin Liu

Found 12 papers, 5 papers with code

BAGS: Building Animatable Gaussian Splatting from a Monocular Video with Diffusion Priors

no code implementations • 18 Mar 2024 • Tingyang Zhang, Qingzhe Gao, Weiyu Li, Libin Liu, Baoquan Chen

In this work, we propose a method to build animatable 3D Gaussian Splatting from monocular video with diffusion priors.
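As a rough illustration of the primitive involved, the sketch below lists the per-Gaussian parameters typically used in 3D Gaussian Splatting; the field names and shapes are assumptions for illustration, not the BAGS implementation.

```python
# Illustrative sketch of a single 3D Gaussian Splatting primitive.
# Field names and shapes are assumptions, not the BAGS code.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray       # (3,) center position in world space
    rotation: np.ndarray   # (4,) unit quaternion (w, x, y, z)
    scale: np.ndarray      # (3,) per-axis scale of the ellipsoid
    opacity: float         # alpha in [0, 1] used during compositing
    sh_coeffs: np.ndarray  # (k, 3) spherical-harmonic color coefficients

    def covariance(self) -> np.ndarray:
        """Build the 3x3 covariance Sigma = R S S^T R^T."""
        w, x, y, z = self.rotation
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T
```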

3D Reconstruction

A Spatial-Temporal Transformer based Framework For Human Pose Assessment And Correction in Education Scenarios

no code implementations • 1 Nov 2023 • Wenyang Hu, Kai Liu, Libin Liu, Huiliang Shang

Human pose assessment and correction play a crucial role in applications across various fields, including computer vision, robotics, sports analysis, healthcare, and entertainment.

Pose Estimation

MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations

no code implementations • 16 Oct 2023 • Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu

In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations.
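The "scalable discrete representations" suggest a vector-quantization step; the sketch below shows a generic VQ-VAE-style codebook lookup as an illustration only, not the actual MoConVQ architecture.

```python
# Generic vector-quantization step behind a discrete motion representation.
# This is an illustrative VQ-VAE-style sketch, not MoConVQ's code.
import torch

def quantize(latents: torch.Tensor, codebook: torch.Tensor):
    """Map each continuous latent to its nearest codebook entry.

    latents:  (T, D) continuous features from a motion encoder
    codebook: (K, D) learnable embedding vectors
    returns:  (quantized latents, integer codes)
    """
    distances = torch.cdist(latents, codebook)        # (T, K) pairwise distances
    codes = distances.argmin(dim=-1)                   # (T,) nearest code index
    quantized = codebook[codes]                        # (T, D) quantized latents
    # Straight-through estimator so gradients still reach the encoder
    quantized = latents + (quantized - latents).detach()
    return quantized, codes
```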

In-Context Learning • Model-based Reinforcement Learning

Robust Dancer: Long-term 3D Dance Synthesis Using Unpaired Data

1 code implementation • 29 Mar 2023 • Bin Feng, Tenglong Ao, Zequn Liu, Wei Ju, Libin Liu, Ming Zhang

How to automatically synthesize natural-looking dance movements based on a piece of music is an increasingly popular yet challenging task.

Disentanglement

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents

no code implementations • 26 Mar 2023 • Tenglong Ao, Zeyi Zhang, Libin Liu

We leverage the power of the large-scale Contrastive Language-Image Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video.
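As a minimal illustration of working with CLIP latents, the sketch below extracts a normalized text embedding with OpenAI's clip package; how GestureDiffuCLIP actually maps such embeddings to gesture style is not shown here.

```python
# Sketch of obtaining a CLIP embedding from a text style prompt.
# The mapping from this embedding to gesture style is not shown.
import torch
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    tokens = clip.tokenize(["a cheerful, energetic speaking style"]).to(device)
    style_embedding = model.encode_text(tokens)                        # (1, 512)
    style_embedding = style_embedding / style_embedding.norm(dim=-1, keepdim=True)
```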

Contrastive Learning • Gesture Generation

MotionBERT: A Unified Perspective on Learning Human Motion Representations

1 code implementation • ICCV 2023 • Wentao Zhu, Xiaoxuan Ma, Zhaoyang Liu, Libin Liu, Wayne Wu, Yizhou Wang

We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.

Ranked #1 on Monocular 3D Human Pose Estimation on Human3.6M (using extra training data)

3D Pose Estimation • Action Recognition • +3

ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters

no code implementations • 12 Oct 2022 • Heyuan Yao, Zhenhua Song, Baoquan Chen, Libin Liu

Our framework learns a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse set of unorganized motion sequences. This enables the generation of realistic human behaviors by sampling in the latent space and allows high-level control policies to reuse the learned skills to accomplish a variety of downstream tasks.
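A hedged sketch of the sampling idea follows, assuming a simple skill-conditioned policy network with placeholder dimensions; it is not the ControlVAE implementation.

```python
# Sampling a skill latent and decoding it into a control action.
# Module shapes and names are placeholders, not ControlVAE's code.
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    def __init__(self, state_dim=64, latent_dim=32, action_dim=28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, state, skill):
        # Concatenate the character state with the sampled skill latent
        return self.net(torch.cat([state, skill], dim=-1))

policy = SkillConditionedPolicy()
state = torch.zeros(1, 64)     # current character state (placeholder values)
skill = torch.randn(1, 32)     # sample a skill from a unit-Gaussian prior
action = policy(state, skill)  # control targets passed to the physics simulator
```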

Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors

no code implementations • 25 Aug 2022 • Yiming Wang, Qingzhe Gao, Libin Liu, Lingjie Liu, Christian Theobalt, Baoquan Chen

The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.

Attribute

Unsupervised Co-part Segmentation through Assembly

1 code implementation • 10 Jun 2021 • Qingzhe Gao, Bin Wang, Libin Liu, Baoquan Chen

Co-part segmentation is an important problem in computer vision for its rich applications.

Segmentation

Learning Skeletal Articulations with Neural Blend Shapes

1 code implementation • 6 May 2021 • Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, Baoquan Chen

Furthermore, we propose neural blend shapes, a set of corrective pose-dependent shapes that improve the deformation quality in the joint regions and address the notorious artifacts resulting from standard rigging and skinning.
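A minimal sketch of applying corrective pose-dependent blend shapes on top of skinned vertices is shown below; the tensor names and shapes are assumptions, not the paper's code.

```python
# Applying corrective, pose-dependent blend shapes after skinning.
# Shapes and names are illustrative assumptions.
import numpy as np

def apply_corrective_shapes(skinned_verts, blend_shapes, weights):
    """skinned_verts: (V, 3) vertices after linear blend skinning
    blend_shapes:  (K, V, 3) corrective shape offsets, one per shape
    weights:       (K,) pose-dependent coefficients, e.g. predicted by a network
    """
    correction = np.tensordot(weights, blend_shapes, axes=1)  # weighted sum -> (V, 3)
    return skinned_verts + correction
```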
