Search Results for author: Elif Bozkurt

Found 2 papers, 1 paper with code

Personalized Speech-driven Expressive 3D Facial Animation Synthesis with Style Control

no code implementations • 25 Oct 2023 • Elif Bozkurt

We present a personalized speech-driven expressive 3D facial animation synthesis framework that models identity-specific facial motion as latent representations (called styles) and synthesizes novel animations from a speech input with a target style for various emotion categories.

BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis

2 code implementations • 10 Mar 2022 • Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, Bo Zheng

Achieving realistic, vivid, and human-like synthesized conversational gestures conditioned on multi-modal data remains an unsolved problem due to the lack of available datasets, models, and standard evaluation metrics.

Tasks: Gesture Generation, Gesture Recognition
