Search Results for author: Yoshihiro Kanamori

Found 8 papers, 4 papers with code

Makeup Prior Models for 3D Facial Makeup Estimation and Applications

no code implementations • 26 Mar 2024 • Xingchao Yang, Takafumi Taketomi, Yuki Endo, Yoshihiro Kanamori

Although there is a trade-off between the two models, both are applicable to 3D facial makeup estimation and related applications.

Face Reconstruction

DiffBody: Diffusion-based Pose and Shape Editing of Human Images

1 code implementation • 5 Jan 2024 • Yuta Okuyama, Yuki Endo, Yoshihiro Kanamori

Because this initial textured body model has artifacts due to occlusion and an inaccurate body shape, the rendered image undergoes diffusion-based refinement, in which overly strong noise destroys body structure and identity, whereas insufficient noise does not remove the artifacts.

Self-Supervised Learning
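The refinement described above follows the familiar partial-noising-and-denoising idea (as in SDEdit). Below is a minimal sketch of that generic recipe using the diffusers img2img pipeline, where `strength` plays the role of the injected noise level; the model name, prompt, and file paths are placeholders, and this is not the authors' actual pipeline.

```python
# SDEdit-style refinement sketch (not the DiffBody code): a rendered body
# image is partially noised and then denoised by a pretrained diffusion
# model. `strength` controls how much noise is injected: too high destroys
# pose/identity, too low leaves rendering artifacts intact.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rendered = Image.open("rendered_body.png").convert("RGB")  # hypothetical input
refined = pipe(
    prompt="a photo of a person",  # placeholder conditioning
    image=rendered,
    strength=0.4,  # moderate noise: enough to fix artifacts, not identity
).images[0]
refined.save("refined_body.png")
```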

StyleHumanCLIP: Text-guided Garment Manipulation for StyleGAN-Human

no code implementations • 26 May 2023 • Takato Yoshikawa, Yuki Endo, Yoshihiro Kanamori

We propose a framework for text-guided full-body human image synthesis via an attention-based latent code mapper, which enables more disentangled control of StyleGAN than existing mappers.

Image Generation
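As a rough illustration of what an attention-based latent code mapper can look like, here is a toy PyTorch sketch in which each StyleGAN W+ layer queries text tokens via cross-attention to predict a per-layer latent offset; all dimensions and module names are assumptions, not the paper's architecture.

```python
# Toy attention-based latent mapper (dimensions assumed, not the authors'
# design): each of the 18 StyleGAN W+ layers attends to CLIP text tokens
# and predicts an additive offset for that layer only, which is what makes
# the control layer-wise and thus more disentangled.
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    def __init__(self, w_dim=512, text_dim=512, n_heads=8):
        super().__init__()
        self.to_q = nn.Linear(w_dim, w_dim)
        self.attn = nn.MultiheadAttention(w_dim, n_heads, kdim=text_dim,
                                          vdim=text_dim, batch_first=True)
        self.out = nn.Linear(w_dim, w_dim)

    def forward(self, w_plus, text_tokens):
        # w_plus: (B, 18, 512) latent codes; text_tokens: (B, T, 512)
        q = self.to_q(w_plus)
        delta, _ = self.attn(q, text_tokens, text_tokens)
        return w_plus + self.out(delta)  # edited W+ codes

mapper = LatentMapper()
w_edit = mapper(torch.randn(2, 18, 512), torch.randn(2, 77, 512))
print(w_edit.shape)  # torch.Size([2, 18, 512])
```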

Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition

no code implementations • 26 Feb 2023 • Xingchao Yang, Takafumi Taketomi, Yoshihiro Kanamori

The extracted makeup is well-aligned in the UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces.

Inverse Rendering
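Because the extracted makeup is aligned in UV space, a parametric model can be built directly over the flattened textures. The sketch below shows one plausible construction via PCA on stand-in random data; the dataset shapes and component count are assumptions, not the paper's model.

```python
# Minimal PCA-based parametric makeup model over UV-aligned textures
# (a plausible construction, not the paper's exact pipeline). Each texture
# is flattened to a vector; new makeup is synthesized as the mean plus a
# linear combination of principal components.
import numpy as np

# Hypothetical dataset: N makeup textures of size H x W x 4 (RGBA in UV space)
N, H, W = 200, 64, 64
textures = np.random.rand(N, H * W * 4).astype(np.float32)

mean = textures.mean(axis=0)
U, S, Vt = np.linalg.svd(textures - mean, full_matrices=False)
basis = Vt[:50]                      # keep the first 50 components

coeffs = np.random.randn(50) * 0.1   # makeup parameters
sample = (mean + coeffs @ basis).reshape(H, W, 4)
```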

Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs

1 code implementation • 25 Jun 2021 • Yuki Endo, Yoshihiro Kanamori

To handle the individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that, by learning multiple latent spaces, allows flexible control over each object class from local to global levels.

Image Generation, Object
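To make the class-wise idea concrete, the toy sketch below gives each semantic class its own latent space and assembles a spatial latent map class by class for the decoder; it illustrates the general mechanism only, and all sizes and names are assumed rather than taken from the paper.

```python
# Toy class-wise latent codes: each semantic class has its own latent
# space, and sampled codes are pasted into that class's region of a
# spatial latent map, which would condition a decoder.
import torch
import torch.nn as nn

N_CLASSES, Z_DIM = 10, 16

class ClasswiseEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # one small encoder head per class (shared trunk omitted for brevity)
        self.heads = nn.ModuleList(
            [nn.Linear(64, 2 * Z_DIM) for _ in range(N_CLASSES)])

    def forward(self, feats, label_map):
        # feats: (B, 64) pooled image features (shared across classes here)
        # label_map: (B, H, W) integer semantic labels
        z_map = torch.zeros(label_map.shape[0], Z_DIM, *label_map.shape[1:])
        for c, head in enumerate(self.heads):
            mu, logvar = head(feats).chunk(2, dim=1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam.
            mask = (label_map == c).unsqueeze(1).float()
            z_map += mask * z[:, :, None, None]  # paste z into class region
        return z_map  # fed to the decoder at one (or several) layers

enc = ClasswiseEncoder()
z_map = enc(torch.randn(2, 64), torch.randint(0, N_CLASSES, (2, 32, 32)))
print(z_map.shape)  # torch.Size([2, 16, 32, 32])
```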

Few-shot Semantic Image Synthesis Using StyleGAN Prior

1 code implementation • 27 Mar 2021 • Yuki Endo, Yoshihiro Kanamori

This paper tackles the challenging problem of generating photorealistic images from semantic layouts in few-shot scenarios, where annotated training pairs are hardly available because pixel-wise annotation is quite costly.

Image Generation
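A common recipe for exploiting a pretrained GAN prior in this setting, sketched below under assumptions rather than as the paper's exact training scheme, is to freeze the StyleGAN generator and train only a lightweight encoder that maps a semantic layout to latent codes, so a handful of annotated pairs can suffice.

```python
# Sketch of the frozen-prior recipe (assumed, simplified): only this
# encoder is trained; the pretrained StyleGAN generator stays fixed.
import torch
import torch.nn as nn

class LayoutEncoder(nn.Module):
    def __init__(self, n_classes=10, w_dim=512, n_layers=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_layers * w_dim))
        self.n_layers, self.w_dim = n_layers, w_dim

    def forward(self, onehot_layout):
        # onehot_layout: (B, n_classes, H, W) one-hot semantic map
        w = self.net(onehot_layout)
        return w.view(-1, self.n_layers, self.w_dim)  # W+ codes

# Training-loop sketch (G, perceptual_loss, few_shot_pairs are hypothetical):
# for layout, photo in few_shot_pairs:
#     loss = perceptual_loss(G.synthesis(encoder(layout)), photo)
```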

Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis

1 code implementation • 16 Oct 2019 • Yuki Endo, Yoshihiro Kanamori, Shigeru Kuriyama

Automatic generation of a high-quality video from a single image remains a challenging task despite the recent advances in deep generative models.

Self-Supervised Learning, Video Prediction

Relighting Humans: Occlusion-Aware Inverse Rendering for Full-Body Human Images

no code implementations • 7 Aug 2019 • Yoshihiro Kanamori, Yuki Endo

Based on supervised learning with convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine spherical harmonics (SH) coefficients per pixel.

Image Generation, Inverse Rendering
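The rendering model implied by the abstract admits a short worked example: per-pixel shading is the dot product of the 9-dimensional light transport vector with 9 SH illumination coefficients, and the relit image is albedo times shading. Array names and shapes below are assumptions.

```python
# Worked example of second-order SH relighting with a per-pixel light
# transport map: shading = <transport, light>, relit = albedo * shading.
import numpy as np

H, W = 256, 256
albedo = np.random.rand(H, W, 3)       # inferred albedo map (stand-in)
transport = np.random.rand(H, W, 9)    # inferred light transport map
light_sh = np.random.randn(9, 3)       # target illumination: 9 SH coeffs/RGB

shading = np.einsum("hwk,kc->hwc", transport, light_sh)  # (H, W, 3)
relit = np.clip(albedo * shading, 0.0, 1.0)
```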
