1 code implementation • 15 May 2023 • Dafei Qin, Jun Saito, Noam Aigerman, Thibault Groueix, Taku Komura
We propose an end-to-end deep-learning approach for automatic rigging and retargeting of 3D models of human faces in the wild.
1 code implementation • 28 Jul 2022 • Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
no code implementations • CVPR 2022 • Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, Evangelos Kalogerakis
We present a method that reenacts a high-quality video with gestures matching a target speech audio.
no code implementations • 4 Jun 2022 • Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou
We present an implicit neural representation to learn the spatio-temporal space of kinematic motions.
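As a rough illustration of the idea of querying a motion at arbitrary timestamps (a toy stand-in, not the paper's model), one can fit a continuous function of time to sampled joint values and evaluate it between the samples. The Fourier-feature regression below is purely illustrative:

```python
import numpy as np

# Toy sketch: represent a 1-DoF "motion" as a continuous function of time
# by least-squares fitting Fourier features -> joint angle, so the pose can
# be queried at timestamps that were never sampled. All names and the
# sinusoidal signal are made up for illustration.

def features(t, n=4):
    t = np.asarray(t, dtype=float)[:, None]
    k = np.arange(1, n + 1)[None, :]
    return np.hstack([np.sin(k * t), np.cos(k * t), np.ones_like(t)])

t_train = np.linspace(0, 2 * np.pi, 50)
angle = 0.7 * np.sin(t_train) + 0.2          # sampled joint angle
w, *_ = np.linalg.lstsq(features(t_train), angle, rcond=None)

# Query the continuous representation at an unsampled timestamp:
t_query = np.array([1.234])
pred = features(t_query) @ w
```

The actual paper learns this mapping with a neural network over full-body kinematics; the point of the sketch is only that a continuous-time representation decouples the motion from any fixed frame rate.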
1 code implementation • 5 May 2022 • Noam Aigerman, Kunal Gupta, Vladimir G. Kim, Siddhartha Chaudhuri, Jun Saito, Thibault Groueix
This paper introduces a framework for accurately predicting piecewise linear mappings of arbitrary meshes via a neural network. It enables training and evaluation over heterogeneous collections of meshes that do not share a triangulation, and produces highly detail-preserving maps whose accuracy exceeds the current state of the art.
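A map that is linear on each triangle is determined, per face, by how it acts on the triangle's edge vectors (its Jacobian). The minimal 2D sketch below recovers that per-face linear map for a single triangle; the function name and setup are illustrative, not from the paper's code:

```python
import numpy as np

def face_jacobian(src_tri, dst_tri):
    """Solve J so that J @ E_src = E_dst, where the columns of E_src and
    E_dst are the source and deformed edge vectors of one triangle."""
    e_src = np.column_stack([src_tri[1] - src_tri[0], src_tri[2] - src_tri[0]])
    e_dst = np.column_stack([dst_tri[1] - dst_tri[0], dst_tri[2] - dst_tri[0]])
    return e_dst @ np.linalg.inv(e_src)

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
J_true = np.array([[2.0, 0.5], [0.0, 1.5]])   # an arbitrary linear map
dst = src @ J_true.T                          # deform the triangle by it
J = face_jacobian(src, dst)
print(np.allclose(J, J_true))                 # True
```

Predicting such per-face quantities, rather than vertex positions directly, is what lets a network operate on meshes with different triangulations.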
1 code implementation • CVPR 2022 • Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura
Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data.
Ranked #1 on 3D Face Animation on VOCASET
no code implementations • 4 Dec 2021 • Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura
Existing datasets are collected to cover as many different phonemes as possible rather than full sentences, thus limiting the ability of audio-based models to learn more diverse contexts.
no code implementations • ICCV 2021 • Ruben Villegas, Duygu Ceylan, Aaron Hertzmann, Jimei Yang, Jun Saito
Self-contacts, such as when hands touch each other or the torso or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve these contacts.
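One simple, hedged illustration of flagging self-contacts (not the paper's method, which operates on full body geometry) is to threshold pairwise distances between body-part positions; the marker names and coordinates below are invented:

```python
import numpy as np

def self_contacts(points, names, threshold=0.05):
    """Return pairs of markers closer than `threshold` (in meters)."""
    contacts = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < threshold:
                contacts.append((names[i], names[j]))
    return contacts

names = ["l_hand", "r_hand", "head", "torso"]
pts = np.array([[0.00, 1.2, 0.3],   # left hand
                [0.01, 1.2, 0.3],   # right hand, nearly touching it
                [0.00, 1.7, 0.0],   # head
                [0.00, 1.0, 0.0]])  # torso
print(self_contacts(pts, names))    # [('l_hand', 'r_hand')]
```

Detecting such contacts is the easy part; the paper's contribution is preserving them while retargeting motion to a differently proportioned character.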
no code implementations • ICCV 2021 • Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, Michael Black
A long-standing goal in computer vision is to capture, model, and realistically synthesize human behavior.
no code implementations • 4 Oct 2019 • Omid Poursaeed, Vladimir G. Kim, Eli Shechtman, Jun Saito, Serge Belongie
We capture these subtle changes by applying an image translation network to refine the mesh rendering, providing an end-to-end model to generate new animations of a character with high visual quality.
no code implementations • IJCNLP 2019 • Jun Saito, Yugo Murawaki, Sadao Kurohashi
Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words.