Language Modeling with Generative Adversarial Networks

Generative Adversarial Networks (GANs) have shown promise in image generation, but they have proven hard to train for language generation. GANs were originally designed to output continuous, differentiable values, so generating discrete language tokens is challenging for them and leads to high instability during training.
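The abstract does not specify how the authors address the non-differentiability of discrete sampling, but a common workaround in this setting is the Gumbel-Softmax relaxation, which replaces a hard categorical sample with a differentiable soft distribution over the vocabulary. A minimal numpy sketch (all names here are illustrative, not from the paper):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling from a categorical distribution.

    logits: unnormalized log-probabilities over the vocabulary.
    tau:    temperature; lower values make the output closer to one-hot.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)     # soft "token" distribution

# Soft sample over a toy 3-token vocabulary.
probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

Because the output is a probability vector rather than a discrete index, gradients from the discriminator can flow back through it; as `tau` is annealed toward zero the samples approach one-hot vectors.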



Methods used in the Paper


Method                Type
Layer Normalization   Normalization
WGAN-GP Loss          Loss Functions
Leaky ReLU            Activation Functions
Batch Normalization   Normalization
WGAN-GP               Generative Adversarial Networks
Convolution           Convolutions
WGAN                  Generative Adversarial Networks
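The methods list includes the WGAN-GP loss, whose gradient penalty encourages the critic to have unit gradient norm on interpolations between real and generated samples. As a hedged numpy sketch, the example below uses a linear critic f(x) = w·x, for which the input gradient is w everywhere, so the penalty can be written in closed form without an autodiff framework (all names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)                      # linear critic: f(x) = w . x
real = rng.normal(size=(16, 8))             # "real" samples
fake = rng.normal(loc=0.5, size=(16, 8))    # "generated" samples

# Interpolate between real and fake samples, as WGAN-GP prescribes.
eps = rng.uniform(size=(16, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For a linear critic, grad_x f(x_hat) = w for every sample, so the
# gradient norm is ||w|| and the penalty is (||w|| - 1)^2.
grad_norms = np.full(len(x_hat), np.linalg.norm(w))
gp = np.mean((grad_norms - 1.0) ** 2)

lam = 10.0                                  # penalty weight from the WGAN-GP paper
critic_loss = np.mean(fake @ w) - np.mean(real @ w) + lam * gp
```

With a neural-network critic the gradient of f at `x_hat` would instead be computed by automatic differentiation; the loss structure (Wasserstein terms plus a weighted penalty) stays the same.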