Search Results for author: Hyeongwoo Kim

Found 15 papers, 0 papers with code

Deep Video Portraits

no code implementations • 29 May 2018 • Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.

Face Model
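
The re-animation pipeline described in this entry lends itself to a compact sketch. Below is a hypothetical, heavily simplified version in which reconstruct_params, render_conditioning and translation_net are stand-ins for the paper's face tracker, renderer and trained rendering-to-video network; none of the names, shapes or sizes come from the paper.

```python
# Hypothetical sketch of the source-to-target reenactment loop described above.
import numpy as np

def reconstruct_params(frame: np.ndarray) -> np.ndarray:
    """Stand-in monocular tracker: head pose/expression/illumination params."""
    return np.zeros(64)  # 64-D parameter vector is an illustrative guess

def render_conditioning(params: np.ndarray, target_identity: np.ndarray) -> np.ndarray:
    """Stand-in renderer: synthetic image of the target driven by source params."""
    return np.zeros((256, 256, 3))

def translation_net(conditioning: np.ndarray) -> np.ndarray:
    """Stand-in for the trained rendering-to-video translation network."""
    return conditioning  # identity mapping as a placeholder

source_video = [np.zeros((256, 256, 3)) for _ in range(3)]
target_identity = np.zeros(64)

output_video = []
for frame in source_video:
    params = reconstruct_params(frame)                    # track the source actor
    cond = render_conditioning(params, target_identity)   # re-render as the target
    output_video.append(translation_net(cond))            # photorealistic refinement
```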

InverseFaceNet: Deep Monocular Inverse Face Rendering

no code implementations • CVPR 2018 • Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.

Face Reconstruction • Inverse Rendering
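
As a rough illustration of the single-shot regression idea, here is a minimal PyTorch sketch with one shared backbone and separate heads for pose, shape, expression, reflectance and illumination. The architecture and all dimensions are illustrative assumptions, not the published network.

```python
import torch
import torch.nn as nn

class InverseFaceNetSketch(nn.Module):
    """Toy regressor from a single image to face-model parameters."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per parameter group; all sizes are guesses.
        self.heads = nn.ModuleDict({
            "pose": nn.Linear(64, 6),
            "shape": nn.Linear(64, 80),
            "expression": nn.Linear(64, 64),
            "reflectance": nn.Linear(64, 80),
            "illumination": nn.Linear(64, 27),  # e.g. 3x9 spherical harmonics
        })

    def forward(self, img: torch.Tensor) -> dict:
        f = self.backbone(img)
        return {name: head(f) for name, head in self.heads.items()}

params = InverseFaceNetSketch()(torch.rand(1, 3, 224, 224))
```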

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

no code implementations • CVPR 2018 • Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.

Face Model • Monocular Reconstruction
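
A toy sketch of the joint-learning idea: the face-model basis is itself a trainable parameter, so the regressor (1) and the parametric model (2) receive gradients from the same reconstruction loss. All layers and sizes below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointModelAndRegressor(nn.Module):
    def __init__(self, n_verts: int = 1000, n_shape: int = 80):
        super().__init__()
        # 2) The parametric face model is learned concurrently.
        self.mean_shape = nn.Parameter(torch.zeros(n_verts * 3))
        self.shape_basis = nn.Parameter(torch.randn(n_verts * 3, n_shape) * 1e-3)
        # 1) The regressor predicts coefficients from the image.
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, n_shape),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        coeffs = self.regressor(img)                            # regress coefficients
        return self.mean_shape + coeffs @ self.shape_basis.T    # evaluate learned model

# Output vertices would feed a differentiable renderer + photometric loss.
verts = JointModelAndRegressor()(torch.rand(2, 3, 64, 64))
```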

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

no code implementations • ICCV 2017 • Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image.

Face Reconstruction • Monocular Reconstruction
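
The key property of the model-based autoencoder is that the decoder is a fixed, differentiable image-formation model, so training needs only unlabeled photographs and a photometric loss. A minimal sketch of that training signal, with render_toy standing in for the differentiable face renderer:

```python
import torch
import torch.nn as nn

# Encoder regresses a semantic code (pose, shape, expression, ...); sizes are toys.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
                        nn.Linear(128, 160))
W = torch.randn(160, 3 * 64 * 64)  # frozen toy "renderer" weights

def render_toy(code: torch.Tensor) -> torch.Tensor:
    """Stand-in for the fixed, differentiable image-formation decoder."""
    return torch.sigmoid(code @ W).view(-1, 3, 64, 64)

img = torch.rand(4, 3, 64, 64)
loss = ((render_toy(encoder(img)) - img) ** 2).mean()  # photometric loss only
loss.backward()  # gradients reach the encoder through the renderer: unsupervised
```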

Video Depth-From-Defocus

no code implementations • 12 Oct 2016 • Hyeongwoo Kim, Christian Richardt, Christian Theobalt

Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available.

Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications

no code implementations • 4 Mar 2015 • Tae-Hyun Oh, Yu-Wing Tai, Jean-Charles Bazin, Hyeongwoo Kim, In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers.

Edge Detection
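
The paper's titular operator can be illustrated with a small sketch. Unlike the classical nuclear-norm proximal step, which soft-thresholds every singular value, a partial singular value thresholding step leaves the largest N values untouched and shrinks only the tail. A minimal NumPy version, assuming this reading of the method:

```python
import numpy as np

def partial_svt(M: np.ndarray, tau: float, N: int) -> np.ndarray:
    """Soft-threshold all singular values beyond the first N by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[N:] = np.maximum(s[N:] - tau, 0.0)  # shrink only the tail of the spectrum
    return (U * s) @ Vt

# Illustrative use inside an ADMM-style RPCA iteration (notation hypothetical):
#   L = partial_svt(D - S + Y / mu, 1.0 / mu, N)
L = partial_svt(np.random.rand(20, 10), tau=0.5, N=2)
```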

Neural Rendering and Reenactment of Human Actor Videos

no code implementations • 11 Sep 2018 • Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.

Generative Adversarial Network • Image Generation • +1

Specular Reflection Separation Using Dark Channel Prior

no code implementations • CVPR 2013 • Hyeongwoo Kim, Hailin Jin, Sunil Hadap, In-So Kweon

Our method is based on a novel observation that for most natural images the dark channel can provide an approximate specular-free image.
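
The dark channel itself is easy to state in code: the per-pixel minimum over the color channels. A minimal NumPy sketch, with a specular-free approximation obtained by subtracting that minimum channel-wise; this is a simplification of the paper's full separation step, not a faithful reimplementation.

```python
import numpy as np

def dark_channel(img: np.ndarray) -> np.ndarray:
    """Pixel-wise dark channel: minimum over the three color channels."""
    return img.min(axis=2)

img = np.random.rand(4, 4, 3)          # toy HxWx3 image in [0, 1]
dc = dark_channel(img)                  # specular highlights inflate this minimum
specular_free = img - dc[..., None]     # rough specular-free approximation
```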

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

no code implementations • 26 May 2019 • Mohamed Elgharib, Mallikarjun BR, Ayush Tewari, Hyeongwoo Kim, Wentao Liu, Hans-Peter Seidel, Christian Theobalt

Our lightweight setup allows operations in uncontrolled environments, and lends itself to telepresence applications such as video-conferencing from dynamic environments.

Neural Style-Preserving Visual Dubbing

no code implementations • 5 Sep 2019 • Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages.

Generative Adversarial Network

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

no code implementations • 14 Jan 2020 • Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.

Image-to-Image Translation • Novel View Synthesis • +1
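
A speculative sketch of the disentanglement this entry describes: one stand-in network predicts fine detail in pose-independent texture (UV) space, a renderer embeds the textured template in 2D screen space, and a second network translates the render into the final frame. Every component and shape below is a toy placeholder, not the paper's pipeline.

```python
import numpy as np

def texture_net(pose: np.ndarray) -> np.ndarray:
    """Stand-in: time-coherent fine-scale detail predicted in UV space."""
    return np.zeros((128, 128, 3))

def render_template(texture: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Stand-in: embed the textured template in 2D screen space."""
    return np.zeros((256, 256, 3))

def translation_net(render: np.ndarray) -> np.ndarray:
    """Stand-in: rendering-to-video translation to the final frame."""
    return render

pose = np.zeros(72)  # toy body-pose vector
frame = translation_net(render_template(texture_net(pose), pose))
```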

DeepBioisostere: Discovering Bioisosteres with Deep Learning for a Fine Control of Multiple Molecular Properties

no code implementations • 5 Mar 2024 • Hyeongwoo Kim, Seokhyun Moon, Wonho Zhung, Jaechang Lim, Woo Youn Kim

Our model's innovation lies in its capacity to design a bioisosteric replacement reflecting the compatibility with the surroundings of the modification site, facilitating the control of sophisticated properties like drug-likeness.
