Search Results for author: Michail Christos Doukas

Found 8 papers, 3 papers with code

3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views

3 code implementations • 6 Dec 2022 • Evangelos Ververas, Polydefkis Gkagkos, Jiankang Deng, Michail Christos Doukas, Jia Guo, Stefanos Zafeiriou

To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene, and design a multi-view supervision framework to balance their effect during training.

Domain Adaptation • Gaze Estimation • +1
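The abstract above only mentions that gaze pseudo-annotations are extracted from the 3D geometry of the scene. As a rough illustration (not the 3DGazeNet pipeline), a gaze pseudo-label can be derived from an estimated 3D eye centre and an assumed 3D gaze target and converted to pitch/yaw angles; the function and the angle convention below are hypothetical choices for the sketch.

```python
import numpy as np

def gaze_pseudo_label(eye_center_3d, target_3d):
    """Illustrative sketch only: derive a gaze-direction pseudo-annotation
    from 3D scene geometry, returned as (pitch, yaw) in radians.
    Assumes a camera coordinate frame with y pointing down and the
    camera looking along -z; conventions differ between datasets."""
    d = np.asarray(target_3d, dtype=float) - np.asarray(eye_center_3d, dtype=float)
    d /= np.linalg.norm(d)              # unit gaze vector from eye to target
    pitch = np.arcsin(-d[1])            # vertical angle (up/down)
    yaw = np.arctan2(-d[0], -d[2])      # horizontal angle (left/right)
    return pitch, yaw
```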

Dynamic Neural Portraits

no code implementations • 25 Nov 2022 • Michail Christos Doukas, Stylianos Ploumpis, Stefanos Zafeiriou

We present Dynamic Neural Portraits, a novel approach to the problem of full-head reenactment.

Image-to-Image Translation

Head2HeadFS: Video-based Head Reenactment with Few-shot Learning

no code implementations • 30 Mar 2021 • Michail Christos Doukas, Mohammad Rami Koujan, Viktoriia Sharmanska, Stefanos Zafeiriou

Head reenactment is an even more challenging task, which aims at transferring not only the facial expression, but also the entire head pose from a source person to a target.

Few-Shot Learning • Pose Transfer

HeadGAN: One-shot Neural Head Synthesis and Editing

no code implementations • ICCV 2021 • Michail Christos Doukas, Stefanos Zafeiriou, Viktoriia Sharmanska

Recent attempts to solve the problem of head reenactment using a single reference image have shown promising results.

Head2Head++: Deep Facial Attributes Re-Targeting

1 code implementation • 17 Jun 2020 • Michail Christos Doukas, Mohammad Rami Koujan, Viktoriia Sharmanska, Anastasios Roussos

Facial video re-targeting is a challenging problem that aims to seamlessly modify the facial attributes of a target subject according to a driving monocular sequence.

ReenactNet: Real-time Full Head Reenactment

no code implementations • 22 May 2020 • Mohammad Rami Koujan, Michail Christos Doukas, Anastasios Roussos, Stefanos Zafeiriou

Video-to-video synthesis is a challenging problem aiming at learning a translation function between a sequence of semantic maps and a photo-realistic video depicting the characteristics of a driving video.

Translation • Video-to-Video Synthesis
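The translation function mentioned in the abstract above is only described at a high level here. The following is a minimal per-frame sketch in PyTorch (an assumed framework), not ReenactNet's actual architecture: a small network mapping a semantic-map tensor to an RGB frame, with temporal conditioning and adversarial losses omitted.

```python
import torch.nn as nn

class SemanticToFrame(nn.Module):
    """Hypothetical sketch of a video-to-video translation generator:
    maps a semantic map (B, C, H, W) to an RGB frame (B, 3, H, W).
    A full video-to-video model would also condition on previously
    generated frames to keep the output temporally coherent."""
    def __init__(self, semantic_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(semantic_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB frame in [-1, 1]
        )

    def forward(self, semantic_map):
        return self.net(semantic_map)
```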
