Search Results for author: Koki Nagano

Found 22 papers, 4 papers with code

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

no code implementations • 4 Jan 2024 • Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries of scenes from collections of 2D images via neural volume rendering.

Neural Rendering · Super-Resolution
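The neural volume rendering mentioned in the abstract composites density and color samples along each camera ray. A minimal numpy sketch of the standard quadrature (function name and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Standard volume-rendering quadrature: alpha-composite per-sample
    densities (sigmas) and RGB colors along one ray, where deltas are
    the distances between consecutive samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance reaching each sample
    weights = trans * alphas                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                  # final pixel RGB

# toy example: two samples along a single ray
sigmas = np.array([0.5, 2.0])
colors = np.array([[1.0, 0.0, 0.0],   # red sample in front
                   [0.0, 0.0, 1.0]])  # blue sample behind it
deltas = np.array([0.1, 0.1])
rgb = composite_ray(sigmas, colors, deltas)
```

The denser second sample dominates the composited color; 3D GANs in this family evaluate this sum for every pixel, which is why rendering every pixel at full resolution is the expensive step the paper targets.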

GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning

no code implementations • 18 Dec 2023 • Ye Yuan, Xueting Li, Yangyi Huang, Shalini De Mello, Koki Nagano, Jan Kautz, Umar Iqbal

Gaussian splatting has emerged as a powerful 3D representation that harnesses the advantages of both explicit (mesh) and implicit (NeRF) 3D representations.
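At its core, Gaussian splatting renders a pixel by alpha-blending depth-sorted Gaussian primitives. A simplified isotropic-2D sketch of that per-pixel blend (all names and values are illustrative; the real method uses anisotropic 3D Gaussians projected to screen space):

```python
import numpy as np

def splat_pixel(px, means, stds, opacities, colors):
    """Blend depth-sorted isotropic 2D Gaussians at one pixel:
    alpha_i = opacity_i * exp(-||px - mean_i||^2 / (2 * std_i^2)),
    composited front-to-back."""
    d2 = ((means - px) ** 2).sum(axis=1)              # squared distance to each splat center
    alphas = opacities * np.exp(-d2 / (2 * stds ** 2))
    out, trans = np.zeros(3), 1.0
    for a, c in zip(alphas, colors):                  # front-to-back compositing
        out += trans * a * c
        trans *= 1.0 - a                              # remaining transmittance
    return out

# toy example: a red splat centered on the pixel, a green one slightly offset
px = np.array([0.0, 0.0])
means = np.array([[0.0, 0.0], [0.5, 0.0]])
stds = np.array([1.0, 1.0])
opacities = np.array([0.8, 0.9])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
rgb = splat_pixel(px, means, stds, opacities, colors)
```

Because each splat is an explicit primitive with its own mean, scale, opacity, and color, the representation is directly editable like a mesh while still rendering smoothly like an implicit field, which is the trade-off the abstract refers to.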

A Unified Approach for Text- and Image-guided 4D Scene Generation

no code implementations • 28 Nov 2023 • Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Karsten Kreis, Otmar Hilliges, Shalini De Mello

Large-scale diffusion generative models are greatly simplifying image, video and 3D asset creation from user-provided text prompts and images.

Scene Generation

Synthetic Image Detection: Highlights from the IEEE Video and Image Processing Cup 2022 Student Competition

no code implementations • 21 Sep 2023 • Davide Cozzolino, Koki Nagano, Lucas Thomaz, Angshul Majumdar, Luisa Verdoliva

The Video and Image Processing (VIP) Cup is a student competition that takes place each year at the IEEE International Conference on Image Processing.

Synthetic Image Detection

Generalizable One-shot Neural Head Avatar

no code implementations • 14 Jun 2023 • Xueting Li, Shalini De Mello, Sifei Liu, Koki Nagano, Umar Iqbal, Jan Kautz

We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.

Super-Resolution

Avatar Fingerprinting for Authorized Use of Synthetic Talking-Head Videos

no code implementations • 5 May 2023 • Ekta Prashnani, Koki Nagano, Shalini De Mello, David Luebke, Orazio Gallo

This allows us to link the synthetic video to the identity driving the expressions in the video, regardless of the facial appearance shown.

Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization

no code implementations • 4 May 2023 • Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis

To tackle this problem, we propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.

Face Model · Face Reconstruction

Real-Time Radiance Fields for Single-Image Portrait View Synthesis

no code implementations • 3 May 2023 • Alex Trevithick, Matthew Chan, Michael Stengel, Eric R. Chan, Chao Liu, Zhiding Yu, Sameh Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., a face portrait) in real time.

Data Augmentation · Novel View Synthesis

RANA: Relightable Articulated Neural Avatars

no code implementations • ICCV 2023 • Umar Iqbal, Akin Caliskan, Koki Nagano, Sameh Khamis, Pavlo Molchanov, Jan Kautz

We propose RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans under arbitrary viewpoints, body poses, and lighting.

Disentanglement · Image Generation

On the detection of synthetic images generated by diffusion models

1 code implementation • 1 Nov 2022 • Riccardo Corvi, Davide Cozzolino, Giada Zingarini, Giovanni Poggi, Koki Nagano, Luisa Verdoliva

Over the past decade, there has been tremendous progress in creating synthetic media, mainly thanks to the development of powerful methods based on generative adversarial networks (GANs).

Image Compression

Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation

no code implementations • 21 Sep 2022 • Yu-Ying Yeh, Koki Nagano, Sameh Khamis, Jan Kautz, Ming-Yu Liu, Ting-Chun Wang

An effective approach is to supervise the training of deep neural networks with a high-fidelity dataset of desired input-output pairs, captured with a light stage.

RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis

no code implementations • 14 May 2022 • Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki Nagano, Towaki Takikawa, Stan Birchfield

We present a large-scale synthetic dataset for novel view synthesis consisting of ~300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 x 1600 pixels).

Novel View Synthesis

DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars

no code implementations • 29 Mar 2022 • Amit Raj, Umar Iqbal, Koki Nagano, Sameh Khamis, Pavlo Molchanov, James Hays, Jan Kautz

In this work, we present DRaCoN, a framework for learning full-body volumetric avatars that exploits the advantages of both 2D and 3D neural rendering techniques.

Neural Rendering

Fusing Global and Local Features for Generalized AI-Synthesized Image Detection

1 code implementation • 26 Mar 2022 • Yan Ju, Shan Jia, Lipeng Ke, Hongfei Xue, Koki Nagano, Siwei Lyu

Specifically, we design a two-branch model to combine global spatial information from the whole image and local informative features from multiple patches selected by a novel patch selection module.
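The two-branch idea above can be sketched in a few lines: one descriptor for the whole image, plus descriptors for the k most informative patches. This is a hand-rolled illustration (using patch variance as a stand-in for the paper's learned patch-selection module; all names are hypothetical):

```python
import numpy as np

def fuse_global_local(image, patch=8, k=4):
    """Illustrative two-branch fusion: a global descriptor for the whole
    image, concatenated with descriptors of the k highest-variance
    patches (a crude proxy for a learned patch-selection module)."""
    g = np.array([image.mean(), image.std()])          # global branch
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch]
               for i in range(0, h, patch)
               for j in range(0, w, patch)]
    patches.sort(key=lambda p: p.var(), reverse=True)  # pick most "informative" patches
    local = np.concatenate([[p.mean(), p.std()] for p in patches[:k]])
    return np.concatenate([g, local])                  # fused feature vector

rng = np.random.default_rng(0)
feat = fuse_global_local(rng.random((32, 32)))         # 2 global + 4*2 local = 10 dims
```

In the actual model both branches are deep networks and the fused feature feeds a real/synthetic classifier; the sketch only shows the fusion structure.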

Frame Averaging for Equivariant Shape Space Learning

no code implementations • CVPR 2022 • Matan Atzmon, Koki Nagano, Sanja Fidler, Sameh Khamis, Yaron Lipman

A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
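The equivariance constraint above can be demonstrated with naive group averaging: symmetrizing an arbitrary map phi as F(x) = (1/|G|) Σ_g g·phi(g⁻¹x) yields F(gx) = g·F(x) for every g. Frame averaging replaces the full group sum with a small input-dependent "frame"; this sketch uses the full two-element sign-flip group for clarity (the base map and values are illustrative, not from the paper):

```python
import numpy as np

def phi(x):
    # arbitrary base map with no built-in symmetry
    return x ** 2 + 3.0 * x

def equivariant(x):
    """Group-average phi over G = {+1, -1} (acting by scaling).
    Each g is its own inverse here, so g * phi(g * x) realizes
    g * phi(g^{-1} x)."""
    group = [1.0, -1.0]
    return sum(g * phi(g * x) for g in group) / len(group)

x = np.array([1.0, -2.0, 0.5])
lhs = equivariant(-x)     # F(g x)
rhs = -equivariant(x)     # g F(x) -- equal by construction
```

For this phi the average works out to F(x) = 3x, the odd (equivariant) part of phi; in shape space learning the same construction is applied to encoder and decoder so that transforming a shape transforms its latent code consistently.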

Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement

no code implementations • CVPR 2021 • Huiwen Luo, Koki Nagano, Han-Wei Kung, Mclean Goldwhite, Qingguo Xu, Zejian Wang, Lingyu Wei, Liwen Hu, Hao Li

Cutting-edge 3D face reconstruction methods use non-linear morphable face models combined with GAN-based decoders to capture the likeness and details of a person, but they fail to produce neutral head models with unshaded albedo textures, which are critical for creating relightable, animation-friendly avatars for integration into virtual environments.

3D Face Reconstruction · Face Model

One-Shot Identity-Preserving Portrait Reenactment

no code implementations • 26 Apr 2020 • Sitao Xiang, Yuming Gu, Pengda Xiang, Mingming He, Koki Nagano, Haiwei Chen, Hao Li

This is achieved by a novel landmark disentanglement network (LD-Net), which predicts personalized facial landmarks that combine the identity of the target with expressions and poses from a different subject.

Disentanglement · Generative Adversarial Network

Photorealistic Facial Texture Inference Using Deep Neural Networks

1 code implementation • CVPR 2017 • Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li

We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.

Face Model
