Search Results for author: Zili Yi

Found 13 papers, 6 papers with code

OSTAF: A One-Shot Tuning Method for Improved Attribute-Focused T2I Personalization

no code implementations 17 Mar 2024 Ye Wang, Zili Yi, Rui Ma

Personalized text-to-image (T2I) models not only produce lifelike and varied visuals but also allow users to tailor the images to fit their personal taste.

Attribute

SemanticHuman-HD: High-Resolution Semantic Disentangled 3D Human Generation

no code implementations 15 Mar 2024 Peng Zheng, Tao Liu, Zili Yi, Rui Ma

Notably, SemanticHuman-HD is also the first method to achieve 3D-aware image synthesis at $1024^2$ resolution, benefiting from our proposed 3D-aware super-resolution module.

3D-Aware Image Synthesis Disentanglement +1

ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing

no code implementations 31 Jan 2023 Bingchuan Li, Tianxiang Ma, Peng Zhang, Miao Hua, Wei Liu, Qian He, Zili Yi

Specifically, in Phase I, a W-space-oriented StyleGAN inversion network is trained and used to perform image inversion and editing, which assures the editability but sacrifices reconstruction quality.

Image Generation
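
The two-phase design hinted at above starts from a plain W-space inversion stage. The snippet below is a minimal, illustrative sketch of such a Phase-I training step in PyTorch, assuming placeholder `encoder` and `generator` modules and an L2 loss only; it is not the paper's actual ReGANIE implementation (which additionally rectifies the remaining reconstruction error in a second phase).

```python
import torch
import torch.nn.functional as F

def inversion_step(encoder, generator, images, optimizer):
    """One W-space inversion step: the encoder learns to map real images into
    the W space of a frozen StyleGAN generator. Gradients flow back through
    the frozen generator, but only the encoder's weights are updated."""
    w = encoder(images)                 # (B, 512) predicted W-space latent codes
    recon = generator(w)                # reconstruction from the frozen generator
    loss = F.mse_loss(recon, images)    # real trainings add perceptual/identity terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with stand-in linear modules (the real networks are CNNs):
# enc, gen = torch.nn.Linear(64, 512), torch.nn.Linear(512, 64)
# opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
# inversion_step(enc, gen, torch.randn(8, 64), opt)
```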

XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font Generation

no code implementations CVPR 2022 Wei Liu, Fangyue Liu, Fei Ding, Qian He, Zili Yi

The cross-modality encoder is pre-trained in a self-supervised manner to allow effective capture of cross- and intra-modality correlations, which facilitates the content-style disentanglement and modeling style representations of all scales (stroke-level, component-level and character-level).

Disentanglement Font Generation
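
One simple way to picture the cross-modality encoder described above is a shared Transformer over a joint token sequence, so that self-attention spans both intra- and cross-modality correlations. The sketch below is a rough illustration under assumed modalities (glyph-image patches and stroke-type ids) and sizes; it does not reproduce XMP-Font's actual architecture or self-supervised pre-training objective.

```python
import torch
import torch.nn as nn

class CrossModalityEncoder(nn.Module):
    """Glyph-image patch tokens and stroke-sequence tokens share one Transformer,
    so self-attention covers intra- and cross-modality correlations jointly."""
    def __init__(self, d_model=256, patch_pixels=16 * 16, n_stroke_types=32,
                 n_layers=4, n_heads=8):
        super().__init__()
        self.patch_proj = nn.Linear(patch_pixels, d_model)        # flattened glyph patches -> tokens
        self.stroke_emb = nn.Embedding(n_stroke_types, d_model)   # stroke-type ids -> tokens
        self.modality_emb = nn.Embedding(2, d_model)               # marks which modality a token is
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, glyph_patches, stroke_ids):
        """glyph_patches: (B, P, patch_pixels) flattened image patches; stroke_ids: (B, S)"""
        img_tokens = self.patch_proj(glyph_patches) + self.modality_emb.weight[0]
        stroke_tokens = self.stroke_emb(stroke_ids) + self.modality_emb.weight[1]
        tokens = torch.cat([img_tokens, stroke_tokens], dim=1)     # one joint sequence
        return self.encoder(tokens)   # contextualized features for downstream style/content heads

# e.g. CrossModalityEncoder()(torch.randn(1, 64, 256), torch.randint(0, 32, (1, 12)))
```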

Region-Aware Face Swapping

no code implementations CVPR 2022 Chao Xu, Jiangning Zhang, Miao Hua, Qian He, Zili Yi, Yong Liu

This paper presents a novel Region-Aware Face Swapping (RAFSwap) network to achieve identity-consistent harmonious high-resolution face generation in a local-global manner: 1) Local Facial Region-Aware (FRA) branch augments local identity-relevant features by introducing the Transformer to effectively model misaligned cross-scale semantic interaction.

Face Generation Face Swapping +1
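
The FRA branch is described as using a Transformer to relate misaligned, identity-relevant features. A plausible minimal building block for that kind of interaction is cross-attention in which target-face region tokens query source identity tokens; the module below is an assumption-laden sketch (token counts, dimensions, and naming are illustrative, not the paper's).

```python
import torch
import torch.nn as nn

class RegionIdentityCrossAttention(nn.Module):
    """Target-face region tokens query identity features of the source face, so
    spatially misaligned regions can still exchange identity information."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, region_tokens, identity_tokens):
        """region_tokens:   (B, R, dim) one token per target facial region (eyes, nose, ...)
        identity_tokens: (B, N, dim) identity-bearing features from the source image"""
        attended, _ = self.attn(query=region_tokens, key=identity_tokens, value=identity_tokens)
        return self.norm(region_tokens + attended)   # region tokens augmented with source identity

# e.g. RegionIdentityCrossAttention()(torch.randn(1, 7, 256), torch.randn(1, 196, 256))
```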

CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP

2 code implementations 1 Mar 2022 ZiHao Wang, Wei Liu, Qian He, Xinglong Wu, Zili Yi

Once trained, the transformer can generate coherent image tokens conditioned on the text embedding that CLIP's text encoder extracts from an input text.

Text-to-Image Generation
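
The quoted sentence describes the inference path: an autoregressive transformer emits discrete image tokens conditioned on a CLIP embedding (an image embedding during language-free training, a text embedding at test time). The sketch below illustrates that conditioning and sampling loop with made-up sizes; the VQ-GAN-style tokenizer/decoder and the paper's exact transformer are not shown.

```python
import torch
import torch.nn as nn

class ClipConditionedTokenTransformer(nn.Module):
    """Autoregressive transformer over discrete image tokens, conditioned on a
    CLIP embedding that is prepended as a prefix token."""
    def __init__(self, clip_dim=512, codebook_size=1024, n_tokens=256,
                 d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        self.n_tokens = n_tokens
        self.cond_proj = nn.Linear(clip_dim, d_model)         # CLIP embedding -> prefix token
        self.tok_emb = nn.Embedding(codebook_size, d_model)   # discrete image tokens -> vectors
        self.pos_emb = nn.Parameter(torch.zeros(1, n_tokens + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, clip_emb, token_ids):
        # clip_emb: (B, clip_dim); token_ids: (B, T) tokens generated so far
        prefix = self.cond_proj(clip_emb).unsqueeze(1)             # (B, 1, d)
        x = torch.cat([prefix, self.tok_emb(token_ids)], dim=1)    # (B, T+1, d)
        x = x + self.pos_emb[:, : x.size(1)]
        L = x.size(1)                                              # causal (upper-triangular) mask
        mask = torch.triu(torch.full((L, L), float('-inf'), device=x.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)                                        # logits over codebook entries

@torch.no_grad()
def sample_tokens(model, clip_emb, temperature=1.0):
    """Sample image tokens autoregressively from a CLIP text (or image) embedding."""
    tokens = torch.zeros(clip_emb.size(0), 0, dtype=torch.long, device=clip_emb.device)
    for _ in range(model.n_tokens):
        logits = model(clip_emb, tokens)[:, -1] / temperature
        next_tok = torch.multinomial(logits.softmax(dim=-1), 1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens  # a VQ decoder (not shown) would map these tokens to pixels

# e.g. tokens = sample_tokens(ClipConditionedTokenTransformer(), torch.randn(1, 512))
```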

Spatial-Temporal Residual Aggregation for High Resolution Video Inpainting

no code implementations 5 Nov 2021 Vishnu Sanjay Ramiya Srinivasan, Rui Ma, Qiang Tang, Zili Yi, Zhan Xu

Recent learning-based inpainting algorithms have achieved compelling results for completing missing regions after removing undesired objects in videos.

Video Inpainting

FaceEraser: Removing Facial Parts for Augmented Reality

1 code implementation 22 Sep 2021 Miao Hua, Lijie Liu, Ziyang Cheng, Qian He, Bingchuan Li, Zili Yi

However, this technique does not satisfy the requirements of facial parts removal, as it is hard to obtain "ground-truth" images with real "blank" faces.

Image Inpainting

DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style Editing

1 code implementation 22 Sep 2021 Bingchuan Li, Shaofei Cai, Wei Liu, Peng Zhang, Qian He, Miao Hua, Zili Yi

To address these limitations, we design a Dynamic Style Manipulation Network (DyStyle) whose structure and parameters vary by input samples, to perform nonlinear and adaptive manipulation of latent codes for flexible and precise attribute control.

Attribute Contrastive Learning
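
The "structure and parameters vary by input samples" idea can be illustrated with per-attribute expert networks that are executed only when the corresponding attribute is requested for a sample. The following is a simplified sketch with invented attribute names and dimensions, not the DyStyle network itself.

```python
import torch
import torch.nn as nn

class DynamicStyleEditor(nn.Module):
    """Per-attribute expert MLPs predict offsets to a latent code; only experts
    for the attributes requested in a given sample are executed."""
    def __init__(self, latent_dim=512, attributes=('age', 'smile', 'yaw')):
        super().__init__()
        self.experts = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(latent_dim + 1, latent_dim),
                                nn.ReLU(),
                                nn.Linear(latent_dim, latent_dim))
            for name in attributes})

    def forward(self, w, targets):
        """w: (B, latent_dim) latent codes; targets: dict mapping an attribute
        name to a (B,) tensor of target values for that attribute."""
        delta = torch.zeros_like(w)
        for name, value in targets.items():        # the executed sub-network varies per call
            expert_in = torch.cat([w, value.unsqueeze(-1)], dim=-1)
            delta = delta + self.experts[name](expert_in)
        return w + delta                            # edited code, fed to a frozen generator

# e.g. DynamicStyleEditor()(torch.randn(1, 512), {'smile': torch.tensor([1.0])})
```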

Animating Through Warping: an Efficient Method for High-Quality Facial Expression Animation

no code implementations 1 Aug 2020 Zili Yi, Qiang Tang, Vishnu Sanjay Ramiya Srinivasan, Zhan Xu

It only requires the generator to be trained on small images and can do inference on an image of any size.

4k
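
The resolution-agnostic inference claim follows from animating by warping: an expression warp field can be predicted at a small, fixed resolution and then upsampled and rescaled before being applied to the full-resolution image. The helper below sketches that final warping step only; the flow-predicting generator itself is assumed and not shown, and is not the paper's released code.

```python
import torch
import torch.nn.functional as F

def warp_full_resolution(image_hr, flow_lr):
    """image_hr: (B, 3, H, W) full-resolution frame
    flow_lr:  (B, 2, h, w) warping field in pixels, predicted at low resolution"""
    _, _, H, W = image_hr.shape
    _, _, h, w = flow_lr.shape
    # upsample the flow field and rescale its magnitude to full-resolution pixel units
    flow = F.interpolate(flow_lr, size=(H, W), mode='bilinear', align_corners=True)
    scale = torch.tensor([W / w, H / h], dtype=flow.dtype, device=flow.device).view(1, 2, 1, 1)
    flow = flow * scale
    # build a normalized sampling grid in [-1, 1] for grid_sample
    ys, xs = torch.meshgrid(torch.arange(H, device=flow.device),
                            torch.arange(W, device=flow.device), indexing='ij')
    base = torch.stack([xs, ys], dim=0).to(flow.dtype)         # (2, H, W), x first
    coords = base.unsqueeze(0) + flow                          # (B, 2, H, W)
    grid_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)               # (B, H, W, 2)
    return F.grid_sample(image_hr, grid, mode='bilinear', align_corners=True)

# e.g. warp a 1024x1024 frame with a flow predicted at 256x256:
# out = warp_full_resolution(torch.randn(1, 3, 1024, 1024), torch.randn(1, 2, 256, 256))
```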

Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting

6 code implementations CVPR 2020 Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, Zhan Xu

Since convolutional layers of the neural network only need to operate on low-resolution inputs and outputs, the cost of memory and computing power is thus well suppressed.

2k 8k +2
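
The low-resolution-only computation works because high-frequency content is borrowed rather than synthesized: attention scores computed on the low-resolution inpainting are reused to aggregate residuals (high-res minus upsampled low-res) from known regions into the hole. The function below is a simplified sketch of that aggregation step with non-overlapping patches and an externally supplied attention map; it is not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def aggregate_residuals(hi_res, lo_res_filled, attention, patch=32):
    """hi_res:        (B, 3, H, W) original high-resolution image (hole contents invalid)
    lo_res_filled: (B, 3, h, w) low-resolution result from the inpainting network
    attention:     (B, N, N) patch-to-patch attention (rows softmaxed) computed at low
                   resolution, with N = (H/patch) * (W/patch); H and W divisible by patch."""
    _, _, H, W = hi_res.shape
    up = F.interpolate(lo_res_filled, size=(H, W), mode='bilinear', align_corners=False)
    residual = hi_res - up                               # high-frequency detail in known regions
    # unfold the residual into patches, mix them with the low-res attention, fold back
    patches = F.unfold(residual, kernel_size=patch, stride=patch)   # (B, 3*patch*patch, N)
    mixed = torch.bmm(patches, attention.transpose(1, 2))           # patch j = sum_i a[j, i] * patch_i
    filled = F.fold(mixed, output_size=(H, W), kernel_size=patch, stride=patch)
    return up + filled                                   # detail transferred into the hole

# e.g. a 512x512 image, a 128x128 inpainted low-res result, and 256 = 16*16 patches:
# out = aggregate_residuals(torch.randn(1, 3, 512, 512), torch.randn(1, 3, 128, 128),
#                           torch.softmax(torch.randn(1, 256, 256), dim=-1))
```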

BSD-GAN: Branched Generative Adversarial Network for Scale-Disentangled Representation Learning and Image Synthesis

2 code implementations 22 Mar 2018 Zili Yi, Zhiqin Chen, Hao Cai, Wendong Mao, Minglun Gong, Hao Zhang

The key feature of BSD-GAN is that it is trained in multiple branches, progressively covering both the breadth and depth of the network, as resolutions of the training images increase to reveal finer-scale features.

Generative Adversarial Network Image Generation +1
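
Branched, scale-disentangled training can be pictured as a generator whose latent code is split into per-scale chunks, each injected at a different depth, with deeper branches switched on as the training resolution grows. The class below is a loose sketch under assumed channel counts and an assumed injection scheme, not the released BSD-GAN model.

```python
import torch
import torch.nn as nn

class BranchedGenerator(nn.Module):
    """Latent code split into per-scale chunks; each chunk is injected at one depth,
    and deeper branches are enabled progressively as the training resolution grows."""
    def __init__(self, chunk_dim=32, n_branches=4, channels=64):
        super().__init__()
        self.chunk_dim = chunk_dim
        self.stem = nn.Linear(chunk_dim, channels * 4 * 4)    # coarsest chunk -> 4x4 feature map
        self.branches = nn.ModuleList()
        self.to_rgb = nn.ModuleList()
        for _ in range(n_branches):
            self.branches.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode='nearest'),
                nn.Conv2d(channels + chunk_dim, channels, 3, padding=1),
                nn.ReLU()))
            self.to_rgb.append(nn.Conv2d(channels, 3, 1))

    def forward(self, z, active_branches):
        # z: (B, chunk_dim * (n_branches + 1)); only the first `active_branches` branches run
        chunks = z.split(self.chunk_dim, dim=1)
        x = self.stem(chunks[0]).view(z.size(0), -1, 4, 4)
        for i in range(active_branches):
            cond = chunks[i + 1][:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
            x = self.branches[i](torch.cat([x, cond], dim=1))  # inject this scale's chunk
        return self.to_rgb[active_branches - 1](x)             # resolution = 4 * 2**active_branches

# e.g. an early-training call at 16x16 and a later one at 64x64 from the same latent:
# g, z = BranchedGenerator(), torch.randn(2, 32 * 5)
# small, large = g(z, active_branches=2), g(z, active_branches=4)
```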
