Complex Momentum for Learning in Games

We generalize gradient descent with momentum for learning in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates...
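The abstract only names the idea, so the sketch below is a guess at the update's general form rather than the paper's exact algorithm: classical momentum with a complex-valued coefficient beta, applied to the simultaneous gradient of a bilinear zero-sum game f(x, y) = x*y, with the players stepping along the real part of the complex buffer. The function names, the step size alpha, and the choice of magnitude and phase for beta are illustrative assumptions.

```python
import numpy as np

def complex_momentum_step(omega, mu, alpha, beta, grad):
    """One simultaneous update with a complex momentum buffer.

    omega: real parameters of all players, concatenated
    mu:    complex momentum buffer, same shape as omega
    alpha: real step size (assumed; the paper may also allow complex alpha)
    beta:  complex momentum coefficient, |beta| < 1 (assumed)
    """
    mu = beta * mu - grad(omega)          # complex buffer accumulates gradients
    omega = omega + alpha * np.real(mu)   # players move along Re(mu) only
    return omega, mu

# Bilinear zero-sum game f(x, y) = x * y: x minimizes, y maximizes.
# The simultaneous gradient to descend is g(x, y) = (df/dx, -df/dy) = (y, -x).
def grad(omega):
    x, y = omega
    return np.array([y, -x])

omega = np.array([1.0, 1.0])              # initial point away from the
mu = np.zeros(2, dtype=complex)           # equilibrium at the origin
alpha = 0.1                               # assumed step size
beta = 0.8 * np.exp(1j * np.pi / 8)       # assumed complex coefficient

for _ in range(200):
    omega, mu = complex_momentum_step(omega, mu, alpha, beta, grad)
```

Setting the imaginary part of beta to zero recovers ordinary (real) momentum, so the same loop can be used to compare real and complex coefficients on this game.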



Methods used in the Paper


METHOD                                    TYPE
Dot-Product Attention                     Attention Mechanisms
Dense Connections                         Feedforward Networks
Softmax                                   Output Functions
SAGAN Self-Attention Module               Attention Modules
Feedforward Network                       Feedforward Networks
Residual Connection                       Skip Connections
Non-Local Operation                       Image Feature Extractors
GAN Hinge Loss                            Loss Functions
TTUR                                      Optimization
Conditional Batch Normalization           Normalization
SAGAN                                     Generative Adversarial Networks
1x1 Convolution                           Convolutions
Truncation Trick                          Latent Variable Sampling
Early Stopping                            Regularization
Projection Discriminator                  Discriminators
Convolution                               Convolutions
ReLU                                      Activation Functions
Non-Local Block                           Image Model Blocks
Batch Normalization                       Normalization
Off-Diagonal Orthogonal Regularization    Regularization
Adam                                      Stochastic Optimization
Residual Block                            Skip Connection Blocks
Linear Layer                              Feedforward Networks
Spectral Normalization                    Normalization
BigGAN                                    Generative Models