Search Results for author: Sergey Tulyakov

Found 23 papers, 11 papers with code

Motion Representations for Articulated Animation

2 code implementations · CVPR 2021 · Aliaksandr Siarohin, Oliver J. Woodford, Jian Ren, Menglei Chai, Sergey Tulyakov

To facilitate animation and prevent the leakage of the shape of the driving object, we disentangle shape and pose of objects in the region space.

Affine Transformation · Video Reconstruction
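For illustration only (not the authors' code): one common way to obtain a coarse per-region affine description, in the spirit of the region representation mentioned above, is to read off a mean and covariance from each predicted soft region heatmap. All helper names below are hypothetical.

```python
import torch

def make_coordinate_grid(H, W):
    """Pixel coordinates in [-1, 1], shape (H, W, 2)."""
    ys = torch.linspace(-1, 1, H)
    xs = torch.linspace(-1, 1, W)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([xx, yy], dim=-1)

def region_affine_params(heatmaps, grid):
    """heatmaps: (B, K, H, W) soft region maps. Returns per-region mean (B, K, 2)
    and covariance (B, K, 2, 2), i.e. the translation and linear part of a
    coarse affine description of each region."""
    B, K, H, W = heatmaps.shape
    hm = heatmaps / (heatmaps.sum(dim=(2, 3), keepdim=True) + 1e-8)  # normalize to a distribution
    coords = grid.view(1, 1, H, W, 2)
    mean = (hm.unsqueeze(-1) * coords).sum(dim=(2, 3))               # weighted centroid per region
    centered = coords - mean.view(B, K, 1, 1, 2)
    cov = torch.einsum("bkhw,bkhwi,bkhwj->bkij", hm, centered, centered)
    return mean, cov
```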

In&Out : Diverse Image Outpainting via GAN Inversion

no code implementations · 1 Apr 2021 · Yen-Chi Cheng, Chieh Hubert Lin, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Ming-Hsuan Yang

Existing image outpainting methods pose the problem as a conditional image-to-image translation task, often generating repetitive structures and textures by replicating the content available in the input image.

GAN inversion · Image Outpainting +2
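As context for the GAN-inversion framing above, here is a generic, hedged sketch of optimization-based inversion; the paper itself inverts micro-patches and samples diverse latents for the outpainted region, so the `generator` interface, loss, and hyperparameters below are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def invert_known_region(generator, target, mask, steps=500, lr=0.05, latent_dim=512):
    """Search for a latent code whose generated image matches the target on the
    known (masked) region; the generator then hallucinates content outside it."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z)
        loss = F.l1_loss(recon * mask, target * mask)  # a perceptual term is usually added in practice
        loss.backward()
        opt.step()
    return z.detach()
```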

SMIL: Multimodal Learning with Severely Missing Modality

1 code implementation · 9 Mar 2021 · Mengmeng Ma, Jian Ren, Long Zhao, Sergey Tulyakov, Cathy Wu, Xi Peng

A common assumption in multimodal learning is the completeness of training data, i.e., that full modalities are available in all training examples.

Meta-Learning
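To make the missing-modality setting concrete, below is a naive baseline sketch, not SMIL's meta-learning approach: per-modality encoders whose embeddings are averaged under an availability mask. The encoder modules, embedding size, and class count are assumptions.

```python
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    """Naive baseline for severely missing modalities: encode each available
    modality and average the embeddings, ignoring missing ones via a mask."""
    def __init__(self, encoders, embed_dim, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)   # one encoder per modality, each mapping input -> (B, embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, inputs, present):
        # inputs: list of per-modality tensors; present: (B, M) 0/1 availability mask
        embs = torch.stack([enc(x) for enc, x in zip(self.encoders, inputs)], dim=1)  # (B, M, D)
        w = present.unsqueeze(-1).float()
        fused = (embs * w).sum(dim=1) / w.sum(dim=1).clamp(min=1.0)
        return self.head(fused)
```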

Teachers Do More Than Teach: Compressing Image-to-Image Models

1 code implementation · CVPR 2021 · Qing Jin, Jian Ren, Oliver J. Woodford, Jiazhuo Wang, Geng Yuan, Yanzhi Wang, Sergey Tulyakov

In this work, we aim to address these issues by introducing a teacher network that provides a search space in which efficient network architectures can be found, in addition to performing knowledge distillation.

Knowledge Distillation
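For reference, a minimal sketch of feature-level knowledge distillation of the kind commonly used when compressing image-to-image models. The 1x1-conv `adapters` that match student and teacher channel widths are an assumption, and this is not the paper's full method (which additionally uses the teacher to define an architecture search space).

```python
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats, adapters):
    """Match intermediate student activations to the teacher's, after projecting
    each student feature map to the teacher's channel width with a learned 1x1 conv."""
    loss = 0.0
    for f_s, f_t, proj in zip(student_feats, teacher_feats, adapters):
        loss = loss + F.l1_loss(proj(f_s), f_t)
    return loss
```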

MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing

1 code implementation · 30 Oct 2020 · Zhentao Tan, Menglei Chai, Dongdong Chen, Jing Liao, Qi Chu, Lu Yuan, Sergey Tulyakov, Nenghai Yu

In this paper, we present MichiGAN (Multi-Input-Conditioned Hair Image GAN), a novel conditional image generation method for interactive portrait hair manipulation.

Conditional Image Generation

Interactive Video Stylization Using Few-Shot Patch-Based Training

no code implementations · 29 Apr 2020 · Ondřej Texler, David Futschik, Michal Kučera, Ondřej Jamriška, Šárka Sochorová, Menglei Chai, Sergey Tulyakov, Daniel Sýkora

In this paper, we present a learning-based method for keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence.

Style Transfer · Translation
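Illustrative sketch only (not the authors' implementation) of the general patch-based training idea: sample spatially aligned random patches from a keyframe and its stylized counterpart, train a small image-to-image network on them, then run the network fully convolutionally on whole frames. The network, optimizer, and patch size are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_aligned_patches(frame, styled, patch=64, n=32):
    """Crop n random, spatially aligned patches from a keyframe and its
    stylized version. Both tensors are (C, H, W)."""
    _, H, W = frame.shape
    src, tgt = [], []
    for _ in range(n):
        y = torch.randint(0, H - patch + 1, (1,)).item()
        x = torch.randint(0, W - patch + 1, (1,)).item()
        src.append(frame[:, y:y + patch, x:x + patch])
        tgt.append(styled[:, y:y + patch, x:x + patch])
    return torch.stack(src), torch.stack(tgt)

def train_step(net, optimizer, frame, styled):
    """One step of patch-based training with a simple L1 reconstruction loss."""
    src, tgt = sample_aligned_patches(frame, styled)
    optimizer.zero_grad()
    loss = F.l1_loss(net(src), tgt)
    loss.backward()
    optimizer.step()
    return loss.item()
```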

Neural Hair Rendering

no code implementations · ECCV 2020 · Menglei Chai, Jian Ren, Sergey Tulyakov

Unlike existing supervised translation methods that require model-level similarity to preserve consistent structure representation for both real images and fake renderings, our method adopts an unsupervised solution to work on arbitrary hair models.

Translation

Human Motion Transfer from Poses in the Wild

no code implementations · 7 Apr 2020 · Jian Ren, Menglei Chai, Sergey Tulyakov, Chen Fang, Xiaohui Shen, Jianchao Yang

In this paper, we tackle the problem of human motion transfer, where we synthesize novel motion video for a target person that imitates the movement from a reference video.

Translation

Motion-supervised Co-Part Segmentation

1 code implementation · 7 Apr 2020 · Aliaksandr Siarohin, Subhankar Roy, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe

To overcome this limitation, we propose a self-supervised deep learning method for co-part segmentation.

Task-Assisted Domain Adaptation with Anchor Tasks

no code implementations · 16 Aug 2019 · Zhizhong Li, Linjie Luo, Sergey Tulyakov, Qieyun Dai, Derek Hoiem

Our key idea to improve domain adaptation is to introduce a separate anchor task (such as facial landmarks) whose annotations can be obtained at no cost or are already available on both synthetic and real datasets.

Depth Estimation · Domain Adaptation +1
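A hedged sketch of the anchor-task idea: the main task is supervised only where labels exist (synthetic data), while the anchor task is supervised on both domains. The two-headed `model` and the loss functions are assumptions, not the paper's exact formulation.

```python
def anchor_task_loss(model, synth_batch, real_batch, main_loss_fn, anchor_loss_fn, lam=1.0):
    """Joint objective: main task (e.g. depth) supervised on synthetic data only;
    anchor task (e.g. facial landmarks) supervised on synthetic and real data."""
    x_s, y_main_s, y_anchor_s = synth_batch
    x_r, y_anchor_r = real_batch
    main_s, anchor_s = model(x_s)   # model is assumed to return (main prediction, anchor prediction)
    _, anchor_r = model(x_r)
    loss = main_loss_fn(main_s, y_main_s)
    loss = loss + lam * (anchor_loss_fn(anchor_s, y_anchor_s) + anchor_loss_fn(anchor_r, y_anchor_r))
    return loss
```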

Transformable Bottleneck Networks

1 code implementation · ICCV 2019 · Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, Linjie Luo

We propose a novel approach to performing fine-grained 3D manipulation of image content via a convolutional neural network, which we call the Transformable Bottleneck Network (TBN).

3D Reconstruction · Novel View Synthesis
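The core operation, resampling a volumetric bottleneck under a spatial transform, can be sketched with standard PyTorch grid sampling; this is an illustrative sketch, not the released code.

```python
import math
import torch
import torch.nn.functional as F

def transform_bottleneck(volume, theta):
    """Resample a volumetric bottleneck (B, C, D, H, W) under a per-sample 3x4
    affine transform theta (B, 3, 4), using trilinear interpolation."""
    grid = F.affine_grid(theta, size=volume.shape, align_corners=False)
    return F.grid_sample(volume, grid, mode="bilinear", align_corners=False)

# Example: rotate a single latent volume 30 degrees about the vertical axis.
a = math.radians(30)
theta = torch.tensor([[[ math.cos(a), 0.0, math.sin(a), 0.0],
                       [ 0.0,         1.0, 0.0,         0.0],
                       [-math.sin(a), 0.0, math.cos(a), 0.0]]])
```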

Train One Get One Free: Partially Supervised Neural Network for Bug Report Duplicate Detection and Clustering

no code implementations · NAACL 2019 · Lahari Poddar, Leonardo Neves, William Brendel, Luis Marujo, Sergey Tulyakov, Pradeep Karuturi

Leveraging the assumption that learning the topic of a bug is a sub-task for detecting duplicates, we design a loss function that can jointly perform both tasks but needs supervision for only duplicate classification, achieving topic clustering in an unsupervised fashion.

General Classification
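A conceptual sketch of a partially supervised objective in this spirit: a supervised duplicate-classification term plus an unsupervised term that pulls the topic distributions of duplicate pairs together. The paper's actual loss differs; all tensor names and the weighting below are assumptions.

```python
import torch.nn.functional as F

def partially_supervised_loss(dup_logits, dup_labels, topic_logits_a, topic_logits_b, beta=0.1):
    """Binary cross-entropy for duplicate detection (supervised), plus a KL term
    encouraging reported-duplicate pairs to share a topic distribution (unsupervised
    with respect to topic labels)."""
    dup_loss = F.binary_cross_entropy_with_logits(dup_logits, dup_labels.float())
    log_p_a = F.log_softmax(topic_logits_a, dim=1)
    p_b = F.softmax(topic_logits_b, dim=1)
    agree = F.kl_div(log_p_a, p_b, reduction="none").sum(dim=1)  # per-pair disagreement
    topic_loss = (dup_labels.float() * agree).mean()             # only pull duplicates together
    return dup_loss + beta * topic_loss
```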

3D Guided Fine-Grained Face Manipulation

no code implementations · CVPR 2019 · Zhenglin Geng, Chen Cao, Sergey Tulyakov

This is achieved by first fitting a 3D face model and then disentangling the face into a texture and a shape.

Face Model

Hybrid VAE: Improving Deep Generative Models using Partial Observations

no code implementations · 30 Nov 2017 · Sergey Tulyakov, Andrew Fitzgibbon, Sebastian Nowozin

We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets.
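To illustrate the labeled-plus-unlabeled setup, a minimal semi-supervised VAE-style objective is sketched below: an ELBO on unlabeled data plus a supervised term on the few labeled samples. `model.encode`, `model.decode`, and `model.predict` are assumed interfaces, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(model, x_unlabeled, x_labeled, y_labeled):
    """VAE ELBO on unlabeled inputs plus a supervised term on the labeled set."""
    mu, logvar = model.encode(x_unlabeled)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
    recon = model.decode(z)
    rec_loss = F.mse_loss(recon, x_unlabeled)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    sup_loss = F.mse_loss(model.predict(x_labeled), y_labeled)
    return rec_loss + kl + sup_loss
```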

Self-Adaptive Matrix Completion for Heart Rate Estimation From Face Videos Under Realistic Conditions

no code implementations · CVPR 2016 · Sergey Tulyakov, Xavier Alameda-Pineda, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn, Nicu Sebe

Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR).

Heart rate estimation · Matrix Completion
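For background, a generic low-rank matrix-completion sketch via iterative singular-value thresholding (soft-impute). The paper's self-adaptive formulation differs; the threshold and iteration count below are illustrative.

```python
import numpy as np

def complete_matrix(M, observed, tau=1.0, steps=200):
    """Fill the unobserved entries of M so the result has low rank.
    `observed` is a boolean mask of known entries."""
    X = np.where(observed, M, 0.0)
    for _ in range(steps):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X = np.where(observed, M, X)              # keep known entries fixed
    return X
```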

Regressing a 3D Face Shape From a Single Image

no code implementations · ICCV 2015 · Sergey Tulyakov, Nicu Sebe

To support the ability of our method to reliably reconstruct 3D shapes, we introduce a simple method for head pose estimation using a single image that reaches higher accuracy than the state of the art.

Head Pose Estimation
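As a point of comparison only: when 2D landmarks and corresponding 3D model points are available, head pose is often recovered with a PnP solve, as sketched below. This is not the paper's regression-based method; the focal-length guess and zero distortion are assumptions.

```python
import numpy as np
import cv2

def estimate_head_pose(landmarks_2d, model_points_3d, image_size):
    """Recover head rotation and translation from 2D facial landmarks and
    corresponding 3D model points (both float arrays of equal length)."""
    h, w = image_size
    focal = w  # rough focal-length guess in pixels
    camera = np.array([[focal, 0, w / 2],
                       [0, focal, h / 2],
                       [0, 0, 1]], dtype=np.float64)
    dist = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d, landmarks_2d, camera, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec
```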
