Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly

28 Feb 2021 · Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang

Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models. To conquer this challenge, we build on the recent observation of Kalibhat et al. (2020) and Chen et al. (2021d) that one can discover independently trainable and highly sparse subnetworks (a.k.a. lottery tickets) in GANs...
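The lottery tickets referenced in the abstract are sparse subnetworks identified by magnitude pruning. Below is a minimal sketch of that idea, assuming a one-shot global magnitude-pruning criterion on a PyTorch module; the helper names (`magnitude_prune_masks`, `apply_masks`) are hypothetical, and the paper's actual procedure (following Chen et al. (2021d)) prunes iteratively rather than in one shot.

```python
# Minimal sketch: extract a sparse "lottery ticket" mask from a GAN
# generator (or discriminator) by one-shot global magnitude pruning.
# Hypothetical helpers; the paper uses iterative magnitude pruning.
import torch
import torch.nn as nn


def magnitude_prune_masks(model: nn.Module, sparsity: float = 0.9) -> dict:
    """Return {param_name: 0/1 mask} keeping the largest-magnitude weights."""
    weights = {name: p.detach().abs()
               for name, p in model.named_parameters() if p.dim() > 1}
    scores = torch.cat([w.flatten() for w in weights.values()])
    threshold = torch.quantile(scores, sparsity)  # global magnitude cutoff
    return {name: (w > threshold).float() for name, w in weights.items()}


def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights in place; re-apply after every optimizer
    step so the subnetwork stays sparse during training."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```

The ticket is then trained with the mask held fixed throughout, which is what makes it "independently trainable" as a standalone sparse network.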


Methods used in the Paper


METHOD | TYPE
Softmax | Output Functions
Dot-Product Attention | Attention Mechanisms
SAGAN Self-Attention Module | Attention Modules
Adam | Stochastic Optimization
Dense Connections | Feedforward Networks
Feedforward Network | Feedforward Networks
Non-Local Operation | Image Feature Extractors
1x1 Convolution | Convolutions
SAGAN | Generative Adversarial Networks
Batch Normalization | Normalization
Convolution | Convolutions
ReLU | Activation Functions
Residual Connection | Skip Connections
Truncation Trick | Latent Variable Sampling
Non-Local Block | Image Model Blocks
Off-Diagonal Orthogonal Regularization | Regularization
GAN Hinge Loss | Loss Functions
TTUR | Optimization
Early Stopping | Regularization
Residual Block | Skip Connection Blocks
Conditional Batch Normalization | Normalization
Linear Layer | Feedforward Networks
Projection Discriminator | Discriminators
Spectral Normalization | Normalization
BigGAN | Generative Models
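As an illustration of one of the listed components, here is a minimal sketch of the GAN hinge loss (the adversarial objective used by SAGAN and BigGAN), assuming a discriminator that outputs raw, unnormalized scores:

```python
# Minimal sketch of the GAN hinge loss, assuming raw discriminator scores.
import torch
import torch.nn.functional as F


def d_hinge_loss(real_scores: torch.Tensor,
                 fake_scores: torch.Tensor) -> torch.Tensor:
    # Discriminator: push real scores above +1 and fake scores below -1.
    # The one-sided margin stops the gradient once a sample is already
    # confidently classified.
    return F.relu(1.0 - real_scores).mean() + F.relu(1.0 + fake_scores).mean()


def g_hinge_loss(fake_scores: torch.Tensor) -> torch.Tensor:
    # Generator: raise the discriminator's score on generated samples.
    return -fake_scores.mean()
```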