Search Results for author: Yingruo Fan

Found 4 papers, 2 papers with code

BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer

no code implementations • 7 Sep 2023 • Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, Taku Komura

Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of the rich cross-modal datasets needed for training.

FaceFormer: Speech-Driven 3D Facial Animation with Transformers

1 code implementation • CVPR 2022 • Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura

Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data.

3D Face Animation

Joint Audio-Text Model for Expressive Speech-Driven 3D Facial Animation

no code implementations • 4 Dec 2021 • Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura

Existing datasets are collected to cover as many different phonemes as possible rather than complete sentences, which limits the ability of audio-based models to learn more diverse contexts.

Language Modelling

Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution

1 code implementation • 20 Apr 2020 • Yingruo Fan, Jacqueline C. K. Lam, Victor O. K. Li

The intensity estimation of facial action units (AUs) is challenging due to subtle changes in a person's facial appearance.

Semantic Correspondence
