Search Results for author: Hyeongwoo Kim

Found 18 papers, 0 papers with code

Discrete Diffusion Schrödinger Bridge Matching for Graph Transformation

no code implementations • 2 Oct 2024 • Jun Hyeong Kim, SeongHwan Kim, Seokhyun Moon, Hyeongwoo Kim, Jeheon Woo, Woo Youn Kim

Our approach extends Iterative Markovian Fitting to discrete domains, and we prove its convergence to the Schrödinger Bridge (SB).
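The entry above concerns computing a Schrödinger Bridge (SB) between discrete distributions via Iterative Markovian Fitting. In the static, finite-state case, the classical way to compute the entropy-regularized coupling is Iterative Proportional Fitting, i.e. Sinkhorn iterations. The sketch below shows that static analogue only, not the paper's dynamic method; all variable names are illustrative:

```python
import numpy as np

def sinkhorn_bridge(a, b, C, eps=0.1, iters=500):
    """Static discrete Schrödinger bridge via Iterative Proportional
    Fitting (Sinkhorn).  a, b: marginal distributions; C: cost matrix.
    Returns a coupling P whose marginals match a and b.  This is the
    classical static analogue, not the paper's Iterative Markovian
    Fitting."""
    K = np.exp(-C / eps)          # Gibbs kernel from the cost
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)           # project onto the row-marginal constraint
        v = b / (K.T @ u)         # project onto the column-marginal constraint
    return u[:, None] * K * v[None, :]
```

Each iteration alternately rescales the coupling so one marginal is exact; alternating these projections converges geometrically to the unique entropic bridge.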

GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations

no code implementations • 18 Sep 2024 • Kartik Teotia, Hyeongwoo Kim, Pablo Garrido, Marc Habermann, Mohamed Elgharib, Christian Theobalt

Real-time rendering of human head avatars is a cornerstone of many computer graphics applications, such as augmented reality, video games, and films, to name a few.

Novel View Synthesis

PAV: Personalized Head Avatar from Unstructured Video Collection

no code implementations • 22 Jul 2024 • Akin Caliskan, Berkay Kicanaoglu, Hyeongwoo Kim

PAV introduces a method that learns a dynamic deformable neural radiance field (NeRF) from a collection of monocular talking-face videos of the same character under varying appearance and shape.

DeepBioisostere: Discovering Bioisosteres with Deep Learning for a Fine Control of Multiple Molecular Properties

no code implementations • 5 Mar 2024 • Hyeongwoo Kim, Seokhyun Moon, Wonho Zhung, Jaechang Lim, Woo Youn Kim

Our model's innovation lies in its capacity to design bioisosteric replacements that reflect compatibility with the surroundings of the modification site, enabling control of sophisticated properties such as drug-likeness.

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

no code implementations • 14 Jan 2020 • Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.

Image-to-Image Translation • Novel View Synthesis • +1

Neural Style-Preserving Visual Dubbing

no code implementations • 5 Sep 2019 • Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages.

Generative Adversarial Network

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

no code implementations • 26 May 2019 • Mohamed Elgharib, Mallikarjun BR, Ayush Tewari, Hyeongwoo Kim, Wentao Liu, Hans-Peter Seidel, Christian Theobalt

Our lightweight setup allows operations in uncontrolled environments, and lends itself to telepresence applications such as video-conferencing from dynamic environments.

Neural Rendering and Reenactment of Human Actor Videos

no code implementations • 11 Sep 2018 • Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.

Generative Adversarial Network • Image Generation • +1

Deep Video Portraits

no code implementations • 29 May 2018 • Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video and feed it into the trained network, thus taking full control of the target.

Face Model

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

no code implementations • CVPR 2018 • Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.

Diversity • Face Model • +1

InverseFaceNet: Deep Monocular Inverse Face Rendering

no code implementations • CVPR 2018 • Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.

Face Reconstruction • Inverse Rendering

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

no code implementations • ICCV 2017 • Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image.

Decoder • Face Reconstruction • +1
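The key idea in this entry, a learned encoder paired with a fixed, model-based decoder trained without labels, can be illustrated with a linear toy: freeze the decoder as a linear "morphable model" x̂ = mean + basis·z and train only the encoder on reconstruction error. This is a hypothetical sketch under that simplification; the actual MoFA decoder is a differentiable face renderer, and all names below are illustrative:

```python
import numpy as np

def train_model_based_autoencoder(X, mean, basis, lr=0.1, epochs=500):
    """Toy model-based autoencoder: the decoder x_hat = mean + basis @ z
    is a fixed linear 'morphable model'; only the linear encoder
    z = W @ x is learned, self-supervised, by gradient descent on the
    mean squared reconstruction error.  X: (n, d) data matrix."""
    d, k = basis.shape
    W = np.zeros((k, d))                        # encoder weights
    for _ in range(epochs):
        Z = W @ X.T                             # encode: (k, n)
        Xhat = mean[:, None] + basis @ Z        # decode through fixed model
        R = Xhat - X.T                          # residual: (d, n)
        grad = basis.T @ R @ X / X.shape[0]     # dLoss/dW for the encoder
        W -= lr * grad
    return W
```

Because the decoder is a known model, the learned latent code z is interpretable as model parameters rather than an arbitrary embedding, which is the point of the model-based design.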

Video Depth-From-Defocus

no code implementations • 12 Oct 2016 • Hyeongwoo Kim, Christian Richardt, Christian Theobalt

Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available.

Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications

no code implementations • 4 Mar 2015 • Tae-Hyun Oh, Yu-Wing Tai, Jean-Charles Bazin, Hyeongwoo Kim, In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers.

Edge Detection
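The title's "partial sum of singular values" suggests penalizing only the tail singular values beyond a target rank N, so confidently low-rank directions are left untouched. Below is a hedged numpy sketch of that idea inside a standard ADMM-style RPCA loop; the parameter names and the fixed-penalty schedule are my own simplifications, not the paper's exact algorithm:

```python
import numpy as np

def partial_svt(M, tau, N):
    """Partial singular value thresholding: keep the N largest singular
    values intact and soft-threshold only the tail (a sketch of the
    partial-sum proximal step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_out = s.copy()
    s_out[N:] = np.maximum(s[N:] - tau, 0.0)
    return U @ np.diag(s_out) @ Vt

def pssv_rpca(M, N=1, lam=None, mu=1.0, iters=200):
    """Toy ADMM-style loop for M ~ L + S: partial-sum rank surrogate on
    the low-rank part L, l1 penalty on the sparse part S."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))       # standard RPCA weight
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                        # scaled dual variable
    for _ in range(iters):
        L = partial_svt(M - S + Y / mu, 1.0 / mu, N)
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
    return L, S
```

When the target rank N is known, sparing the top N singular values from shrinkage avoids the bias that uniform nuclear-norm thresholding introduces into the recovered low-rank component.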

Specular Reflection Separation Using Dark Channel Prior

no code implementations • CVPR 2013 • Hyeongwoo Kim, Hailin Jin, Sunil Hadap, In-So Kweon

Our method is based on a novel observation that for most natural images the dark channel can provide an approximate specular-free image.
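The observation above can be made concrete: under a dichromatic model with white illumination, the specular term contributes equally to R, G, and B, so the per-pixel minimum across channels (the dark channel) absorbs it, and subtracting that minimum yields an approximately specular-free image. A toy sketch of just this observation, not the paper's full separation pipeline:

```python
import numpy as np

def pseudo_specular_free(img):
    """Approximate specular/diffuse split via the per-pixel dark channel.
    img: H x W x 3 float array in [0, 1].  Assumes white illumination,
    so the specular term is equal across channels and cancels when the
    per-pixel channel minimum is subtracted; the minimum itself holds
    the specular term plus the minimum diffuse component."""
    dark = img.min(axis=2, keepdims=True)   # per-pixel min over R, G, B
    diffuse_approx = img - dark             # specular term cancelled
    specular_approx = dark                  # specular + min of diffuse
    return diffuse_approx, specular_approx
```

A quick sanity check of the cancellation: adding a uniform specular layer to an image changes only the recovered specular estimate, not the pseudo specular-free image.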
