Understanding invariance via feedforward inversion of discriminatively trained classifiers

A discriminatively trained neural net classifier achieves optimal performance if all information about its input other than class membership has been discarded prior to the output layer. Surprisingly, past research has discovered that some extraneous visual detail remains in the output logits...
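The idea can be illustrated with a deliberately simplified toy: treat a random linear map as a stand-in "classifier" and fit a feedforward inverter from its logits back to the inputs. This is only a hypothetical linear sketch of the inversion setup; the paper itself inverts real classifier logits with a learned deep generator (the methods list below suggests a BigGAN-style model), not least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": a random linear map from 8-D inputs to 3 class logits.
# (Hypothetical stand-in for a trained deep net.)
W = rng.normal(size=(3, 8))
X = rng.normal(size=(1000, 8))   # stand-in "images"
logits = X @ W.T                 # forward pass up to the output layer

# Feedforward inversion: fit a map from logits back to the inputs.
# Here a least-squares linear inverter; the paper trains a deep generator.
G, *_ = np.linalg.lstsq(logits, X, rcond=None)
X_hat = logits @ G

# Reconstruction cannot be perfect (3-D logits, 8-D inputs), but it
# recovers whatever component of the input the logits still preserve,
# beating a baseline that predicts only the mean input.
err = float(np.mean((X - X_hat) ** 2))
base = float(np.mean((X - X.mean(axis=0)) ** 2))
print(f"reconstruction MSE {err:.3f} vs mean-baseline MSE {base:.3f}")
```

The gap between `err` and `base` is the linear analogue of the paper's observation: the logits were never asked to preserve input detail, yet an inverter can still extract some of it.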


Methods used in the Paper


Adam: Stochastic Optimization
1x1 Convolution: Convolutions
Convolution: Convolutions
Dot-Product Attention: Attention Mechanisms
ReLU: Activation Functions
Residual Connection: Skip Connections
Dense Connections: Feedforward Networks
Feedforward Network: Feedforward Networks
Softmax: Output Functions
Batch Normalization: Normalization
GAN Hinge Loss: Loss Functions
Non-Local Operation: Image Feature Extractors
SAGAN Self-Attention Module: Attention Modules
SAGAN: Generative Adversarial Networks
TTUR: Optimization
Off-Diagonal Orthogonal Regularization: Regularization
Residual Block: Skip Connection Blocks
Non-Local Block: Image Model Blocks
Conditional Batch Normalization: Normalization
Truncation Trick: Latent Variable Sampling
Early Stopping: Regularization
Projection Discriminator: Discriminators
Spectral Normalization: Normalization
Linear Layer: Feedforward Networks
BigGAN: Generative Models