Search Results for author: Takuru Shimoyama

Found 2 papers, 1 paper with code

Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and Feature Disentanglement

no code implementations • 25 May 2023 • Jiawei Qin, Takuru Shimoyama, Xucong Zhang, Yusuke Sugano

To bridge the inevitable gap between synthetic and real images, we further propose an unsupervised domain adaptation method suitable for synthetic full-face data.

3D Reconstruction · Disentanglement · +3

Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation

1 code implementation • 20 Jan 2022 • Jiawei Qin, Takuru Shimoyama, Yusuke Sugano

Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment.

3D Face Reconstruction · Data Augmentation · +2
