Search Results for author: Yuki Endo

Found 9 papers, 6 papers with code

Makeup Prior Models for 3D Facial Makeup Estimation and Applications

no code implementations • 26 Mar 2024 • Xingchao Yang, Takafumi Taketomi, Yuki Endo, Yoshihiro Kanamori

Although there is a trade-off between the two proposed makeup prior models, both are applicable to 3D facial makeup estimation and related applications.

Face Reconstruction

DiffBody: Diffusion-based Pose and Shape Editing of Human Images

1 code implementation • 5 Jan 2024 • Yuta Okuyama, Yuki Endo, Yoshihiro Kanamori

Because this initial textured body model has artifacts due to occlusion and inaccurate body shape, the rendered image undergoes diffusion-based refinement, in which too much noise destroys body structure and identity, whereas too little noise does not remove the artifacts.
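
The excerpt describes finding an intermediate noise level that removes rendering artifacts without erasing the person. A minimal sketch of that idea in SDEdit-style image-to-image form follows; the checkpoint, prompt, and strength value are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch: refine a coarse rendered body image by adding a moderate amount of
# noise and denoising it back (SDEdit-style img2img). The `strength` value
# trades off artifact removal against preserving body structure and identity.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # illustrative model
).to("cuda")

coarse_render = Image.open("rendered_body.png").convert("RGB")  # hypothetical input

# strength ~ how far along the forward (noising) process we start:
#   too high -> body structure and identity are destroyed,
#   too low  -> rendering artifacts survive the refinement.
refined = pipe(
    prompt="a photo of a person",  # illustrative prompt
    image=coarse_render,
    strength=0.4,                  # moderate noise level (assumption)
    guidance_scale=7.5,
).images[0]
refined.save("refined_body.png")
```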

Self-Supervised Learning

Masked-Attention Diffusion Guidance for Spatially Controlling Text-to-Image Generation

1 code implementation • 11 Aug 2023 • Yuki Endo

To address this issue, we propose masked-attention guidance, which can generate images more faithful to semantic masks by indirectly controlling the attention of each word to each pixel through manipulation of the noise images fed to diffusion models.
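
A schematic of this kind of attention guidance, in which the noisy latent (not the model) is nudged so that each word's cross-attention stays inside its semantic mask; `denoise_step` and `cross_attention_maps` are hypothetical stand-ins for a diffusion sampler's step function and attention hooks:

```python
# Schematic: at each sampling step, take a gradient step on the latent itself
# so that cross-attention mass for each word moves toward its semantic mask.
# The model weights are untouched; only the "noise image" is manipulated.
import torch

def guided_step(latent, t, masks, token_ids, denoise_step, cross_attention_maps,
                guidance_scale=0.5):
    latent = latent.detach().requires_grad_(True)
    attn = cross_attention_maps(latent, t)      # (num_tokens, H, W), hypothetical
    loss = 0.0
    for tok, mask in zip(token_ids, masks):     # mask: (H, W) in {0, 1}
        a = attn[tok]
        # penalize attention for this word that falls outside its mask
        loss = loss + ((1.0 - mask) * a).sum() / (a.sum() + 1e-8)
    grad, = torch.autograd.grad(loss, latent)
    latent = (latent - guidance_scale * grad).detach()  # manipulate the noise image
    return denoise_step(latent, t)                      # then take the usual step
```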

Text-Guided Image Editing

StyleHumanCLIP: Text-guided Garment Manipulation for StyleGAN-Human

no code implementations • 26 May 2023 • Takato Yoshikawa, Yuki Endo, Yoshihiro Kanamori

We propose a framework for text-guided full-body human image synthesis via an attention-based latent code mapper, which enables more disentangled control of StyleGAN than existing mappers.
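
As a rough illustration, an attention-based mapper can be read as each layer-wise StyleGAN latent querying the text features and receiving a per-layer offset; the dimensions and residual design below are assumptions, not the paper's architecture:

```python
# Sketch of an attention-based latent code mapper: each latent in StyleGAN's
# W+ space attends to text features (e.g., from a CLIP text encoder) and is
# shifted by the attended result, giving layer-wise (disentangled) edits.
import torch
import torch.nn as nn

class LatentCodeMapper(nn.Module):
    def __init__(self, w_dim=512, text_dim=512, heads=8):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, w_dim)
        self.attn = nn.MultiheadAttention(w_dim, heads, batch_first=True)
        self.out = nn.Linear(w_dim, w_dim)

    def forward(self, w_plus, text_feats):
        # w_plus: (B, 18, 512) layer-wise latents; text_feats: (B, T, text_dim)
        kv = self.text_proj(text_feats)
        delta, _ = self.attn(query=w_plus, key=kv, value=kv)
        return w_plus + self.out(delta)   # edited latents for the generator
```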

Image Generation

User-Controllable Latent Transformer for StyleGAN Image Layout Editing

1 code implementation • 26 Aug 2022 • Yuki Endo

In our framework, the user annotates a StyleGAN image with locations they want to move or to keep fixed, and specifies a movement direction by mouse dragging.
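
One plausible way to encode such interactions, sketched below: each user click becomes a token carrying a 2D location and a desired 2D motion (zero motion meaning "keep this point fixed"), and a transformer maps the StyleGAN latents, conditioned on those tokens, to edited latents. The token format and sizes are illustrative assumptions:

```python
# Sketch: user annotations as (x, y, dx, dy) tokens alongside W+ latents;
# a transformer encoder mixes them and the first slots are read back as the
# edited layer-wise latents.
import torch
import torch.nn as nn

class UserControlTransformer(nn.Module):
    def __init__(self, w_dim=512, num_ws=18, heads=8, layers=4):
        super().__init__()
        self.point_embed = nn.Linear(4, w_dim)   # (x, y, dx, dy) -> token
        block = nn.TransformerEncoderLayer(w_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.num_ws = num_ws

    def forward(self, w_plus, points):
        # w_plus: (B, 18, 512); points: (B, N, 4) with rows (x, y, dx, dy),
        # where (dx, dy) = (0, 0) marks a location the user wants kept still.
        tokens = torch.cat([w_plus, self.point_embed(points)], dim=1)
        out = self.encoder(tokens)
        return out[:, : self.num_ws]             # edited layer-wise latents
```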

Optical Flow Estimation

Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs

1 code implementation • 25 Jun 2021 • Yuki Endo, Yoshihiro Kanamori

To handle the individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that allows flexible control over each object class, from local to global levels, by learning multiple latent spaces.
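
A minimal sketch of the class-wise routing this implies: each class owns a latent code, and the semantic mask broadcasts each pixel's class code into a style map that can condition the decoder at several layers. Module sizes are illustrative, and this is not the paper's exact architecture:

```python
# Sketch: route one latent code per object class through the semantic mask,
# so each class's style can be resampled independently of the others.
import torch

def classwise_style_map(sem_mask_onehot, class_codes):
    # sem_mask_onehot: (B, C, H, W); class_codes: (B, C, D), one latent per class.
    # Returns a (B, D, H, W) style map where every pixel carries the latent
    # code of its own class.
    return torch.einsum("bchw,bcd->bdhw", sem_mask_onehot, class_codes)

B, C, D, H, W = 1, 5, 64, 32, 32
mask = torch.zeros(B, C, H, W)
mask[:, 0] = 1.0                          # trivial mask: all pixels are class 0
codes = torch.randn(B, C, D)              # e.g., sampled from per-class VAE priors
style = classwise_style_map(mask, codes)  # feed at multiple decoder layers
print(style.shape)                        # torch.Size([1, 64, 32, 32])
```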

Image Generation • Object

Few-shot Semantic Image Synthesis Using StyleGAN Prior

1 code implementation • 27 Mar 2021 • Yuki Endo, Yoshihiro Kanamori

This paper tackles the challenging problem of generating photorealistic images from semantic layouts in few-shot scenarios, where annotated training pairs are hardly available because pixel-wise annotation is quite costly.

Image Generation

Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis

1 code implementation • 16 Oct 2019 • Yuki Endo, Yoshihiro Kanamori, Shigeru Kuriyama

Automatic generation of a high-quality video from a single image remains a challenging task despite the recent advances in deep generative models.

Self-Supervised Learning • Video Prediction

Relighting Humans: Occlusion-Aware Inverse Rendering for Full-Body Human Images

no code implementations • 7 Aug 2019 • Yoshihiro Kanamori, Yuki Endo

Based on supervised learning using convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine spherical harmonics (SH) coefficients per pixel.
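
With a per-pixel transport vector, relighting reduces to a dot product per pixel: shading = transport · light, and image = albedo × shading. A worked example under the stated nine-coefficient (second-order SH) setup, with illustrative array shapes:

```python
# Worked example: occlusion-aware relighting from the three inferred maps.
# Shading at each pixel is the dot product of its 9 SH transport coefficients
# with the 9 SH illumination coefficients (one set per RGB channel).
import numpy as np

H, W = 256, 256
albedo = np.random.rand(H, W, 3)      # inferred albedo map (placeholder values)
transport = np.random.rand(H, W, 9)   # inferred light transport map (SH, occlusion-aware)
light = np.random.rand(9, 3)          # SH illumination coefficients per RGB channel

shading = np.einsum("hwk,kc->hwc", transport, light)  # (H, W, 3) per-pixel shading
relit = albedo * shading                              # relit full-body image
print(relit.shape)                                    # (256, 256, 3)
```

Swapping in a different `light` vector relights the same person under new illumination without re-running the network.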

Image Generation • Inverse Rendering
