Large Scale Adversarial Representation Learning

Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision...

NeurIPS 2019 · PDF · Abstract
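
The paper's model, BigBiGAN, extends the BigGAN generator with an encoder and a joint discriminator over (image, latent) pairs, trained with a GAN hinge loss (all listed under Methods below). The following is a minimal, illustrative PyTorch sketch of that bidirectional objective, not the paper's implementation: the networks are tiny stand-ins, and the unary score terms, projection discriminator, and BigGAN-scale architectures of the actual model are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, IMG_DIM = 64, 3 * 32 * 32   # toy sizes (assumption), not the paper's

class Encoder(nn.Module):
    """E: image -> inferred latent z_hat."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """G: latent z -> generated image x_hat."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class JointDiscriminator(nn.Module):
    """D: (image, latent) pair -> real/fake score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM + LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def discriminator_loss(D, x_real, z_hat, x_fake, z):
    # Hinge loss: push scores of real pairs (x, E(x)) above +1
    # and scores of generated pairs (G(z), z) below -1.
    s_real = D(x_real, z_hat.detach())
    s_fake = D(x_fake.detach(), z)
    return (F.relu(1.0 - s_real) + F.relu(1.0 + s_fake)).mean()

def encoder_generator_loss(D, x_real, z_hat, x_fake, z):
    # E and G jointly try to fool D: lower the real-pair score, raise the fake-pair score.
    return (D(x_real, z_hat) - D(x_fake, z)).mean()

E, G, D = Encoder(), Generator(), JointDiscriminator()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_eg = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=2e-4)

# One toy alternating update on random data, just to show how the pieces connect.
# (D also receives gradients in the second backward pass, but only opt_eg steps;
#  a full training loop would zero D's grads before its next update.)
x_real = torch.randn(8, IMG_DIM)
z = torch.randn(8, LATENT_DIM)

opt_d.zero_grad()
discriminator_loss(D, x_real, E(x_real), G(z), z).backward()
opt_d.step()

opt_eg.zero_grad()
encoder_generator_loss(D, x_real, E(x_real), G(z), z).backward()
opt_eg.step()
```

After training, the encoder is the component kept for representation learning: its frozen features are evaluated with a linear classifier, which is what the results below measure.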

Datasets

ImageNet

Results from the Paper


| TASK | DATASET | MODEL | METRIC NAME | METRIC VALUE | GLOBAL RANK |
|------|---------|-------|-------------|--------------|-------------|
| Self-Supervised Image Classification | ImageNet | BigBiGAN (RevNet-50 ×4, BN+CReLU) | Top 1 Accuracy | 61.3% | #49 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (RevNet-50 ×4, BN+CReLU) | Top 5 Accuracy | 81.9% | #23 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (RevNet-50 ×4, BN+CReLU) | Number of Params | 86M | #21 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (RevNet-50 ×4) | Top 1 Accuracy | 60.8% | #50 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (RevNet-50 ×4) | Top 5 Accuracy | 81.4% | #24 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (ResNet-50, BN+CReLU) | Top 1 Accuracy | 56.6% | #54 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (ResNet-50, BN+CReLU) | Top 5 Accuracy | 78.6% | #25 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (ResNet-50) | Top 1 Accuracy | 55.4% | #55 |
| Self-Supervised Image Classification | ImageNet | BigBiGAN (ResNet-50) | Top 5 Accuracy | 77.4% | #27 |
| Semi-Supervised Image Classification | ImageNet - 10% labeled data | BigBiGAN (RevNet-50 ×4, BN+CReLU) | Top 5 Accuracy | 78.8% | #37 |
| Semi-Supervised Image Classification | ImageNet - 1% labeled data | BigBiGAN (RevNet-50 ×4, BN+CReLU) | Top 5 Accuracy | 55.2% | #24 |
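
The "BN+CReLU" model names above refer to a feature post-processing choice reported in the paper: before the linear classifier is trained on the frozen encoder features, the features are passed through batch normalization and a CReLU activation (which concatenates ReLU(x) and ReLU(-x), doubling the channel count). The snippet below is a minimal sketch of that operation only; the tensor shapes and the affine=False choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def crelu(x, dim=1):
    """Concatenated ReLU: [relu(x), relu(-x)] along `dim`, doubling that dimension."""
    return torch.cat([F.relu(x), F.relu(-x)], dim=dim)

feats = torch.randn(8, 2048, 7, 7)             # e.g. ResNet-50 final conv features (illustrative)
bn = torch.nn.BatchNorm2d(2048, affine=False)  # "BN" step; affine=False is an assumption
out = crelu(bn(feats))                         # -> shape (8, 4096, 7, 7)
assert out.shape == (8, 4096, 7, 7)
```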

Methods used in the Paper


| METHOD | TYPE |
|--------|------|
| Reversible Residual Block | Skip Connection Blocks |
| Dense Connections | Feedforward Networks |
| Pointwise Convolution | Convolutions |
| CReLU | Activation Functions |
| RevNet | Convolutional Neural Networks |
| Softmax | Output Functions |
| Feedforward Network | Feedforward Networks |
| Conditional Batch Normalization | Normalization |
| TTUR | Optimization |
| GAN Hinge Loss | Loss Functions |
| Non-Local Operation | Image Feature Extractors |
| Non-Local Block | Image Model Blocks |
| Truncation Trick | Latent Variable Sampling |
| Linear Layer | Feedforward Networks |
| Softplus | Activation Functions |
| Dot-Product Attention | Attention Mechanisms |
| BigBiGAN | Generative Models |
| Projection Discriminator | Discriminators |
| Spectral Normalization | Normalization |
| Off-Diagonal Orthogonal Regularization | Regularization |
| Adam | Stochastic Optimization |
| Early Stopping | Regularization |
| SAGAN Self-Attention Module | Attention Modules |
| SAGAN | Generative Adversarial Networks |
| BigGAN | Generative Models |
| Average Pooling | Pooling Operations |
| Residual Connection | Skip Connections |
| ReLU | Activation Functions |
| 1x1 Convolution | Convolutions |
| Batch Normalization | Normalization |
| Bottleneck Residual Block | Skip Connection Blocks |
| Global Average Pooling | Pooling Operations |
| Residual Block | Skip Connection Blocks |
| Kaiming Initialization | Initialization |
| Max Pooling | Pooling Operations |
| Convolution | Convolutions |
| ResNet | Convolutional Neural Networks |
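
Of the methods listed above, the Truncation Trick (inherited from BigGAN) is compact enough to sketch directly: at sampling time, any latent component whose magnitude exceeds a threshold is redrawn from the standard normal, trading sample diversity for fidelity. The threshold and latent size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def truncated_normal(size, threshold=0.5, rng=None):
    """Sample z ~ N(0, 1), resampling any component with |z| > threshold."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(size)
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():
        z[out_of_range] = rng.standard_normal(out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

z = truncated_normal((4, 120), threshold=0.5)  # batch of 4 latents; sizes are illustrative
assert np.abs(z).max() <= 0.5
```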