StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees of freedom, or an annotated collection of images for each desired manipulation...
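At its core, this line of work steers a StyleGAN latent code so that the generated image matches a text prompt under CLIP's joint image-text embedding. The toy sketch below illustrates that optimization loop under heavy simplifying assumptions: a fixed random linear map `A` stands in for the composition of StyleGAN's generator and CLIP's image encoder, and `t` stands in for the CLIP text embedding of the prompt. The loss (one minus cosine similarity, plus an L2 anchor to the starting latent) mirrors the general shape of CLIP-guided latent optimization, but the names `A`, `t`, `w0`, and the hyperparameters are illustrative inventions, not the paper's implementation.

```python
import numpy as np

# Toy stand-ins (assumptions, not the real models): a random linear map A
# plays the role of "CLIP image encoder composed with the StyleGAN generator",
# and t plays the role of the CLIP text embedding of the driving prompt.
rng = np.random.default_rng(0)
D_LATENT, D_EMB = 16, 8

A = rng.normal(size=(D_EMB, D_LATENT))   # generator + image encoder stand-in
t = rng.normal(size=D_EMB)               # "text embedding" of the prompt
t /= np.linalg.norm(t)

w0 = rng.normal(size=D_LATENT)           # initial latent (e.g. an inverted image)
w = w0.copy()

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def loss_and_grad(w, lam=0.01):
    """CLIP-style loss: 1 - cos(A w, t) plus an L2 anchor keeping w near w0."""
    u = A @ w
    nu = np.linalg.norm(u)
    # d cos / du for unit-norm t, then chain rule through the linear map A
    dcos_du = t / nu - (u @ t) * u / nu**3
    grad = -A.T @ dcos_du + 2 * lam * (w - w0)
    return (1 - cosine(u, t)) + lam * np.sum((w - w0) ** 2), grad

start = cosine(A @ w, t)
for _ in range(200):                     # plain gradient descent on the latent
    _, g = loss_and_grad(w)
    w -= 0.05 * g
end = cosine(A @ w, t)

print(f"cosine similarity: {start:.3f} -> {end:.3f}")
```

The real method differentiates a CLIP similarity loss through the actual StyleGAN generator and CLIP image encoder (typically with an optimizer such as Adam, plus identity-preservation terms); the L2 anchor here only gestures at the regularizers that keep the edited image close to the original.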



Methods used in the Paper


Leaky ReLU (Activation Functions)
R1 Regularization (Regularization)
Adaptive Instance Normalization (Normalization)
Dense Connections (Feedforward Networks)
Convolution (Convolutions)
Feedforward Network (Feedforward Networks)
StyleGAN (Generative Models)