Variational Smoothing in Recurrent Neural Network Language Models

We present a new theoretical perspective on data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus, such as the unigram distribution)...
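
As a rough illustration of the connection the abstract describes (not the paper's implementation), the sketch below shows unigram data noising read as sampling an input embedding from a Gaussian mixture: with probability `1 - gamma` the mixture component is centered at the true token's embedding, and with probability `gamma * unigram[w]` at the embedding of some other token `w`. All names and values here (`gamma`, `sigma`, `unigram`, `noised_embedding`, the toy vocabulary and dimensions) are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

# Hypothetical toy setup: vocabulary size V, embedding dimension d.
rng = np.random.default_rng(0)
V, d = 1000, 64
embeddings = rng.normal(size=(V, d)).astype(np.float32)
unigram = rng.dirichlet(np.ones(V))  # stand-in for corpus unigram frequencies

def noised_embedding(token_id, gamma=0.2, sigma=0.0):
    """Sample an input embedding under unigram data noising.

    With probability (1 - gamma), keep the original token's embedding;
    with probability gamma, replace it by the embedding of a token drawn
    from the unigram distribution. Viewed variationally, this draws from
    a mixture of Gaussians (with optional variance sigma**2) whose
    mixture weights are (1 - gamma) on the true token and
    gamma * unigram[w] on every other token w.
    """
    if rng.random() < gamma:
        token_id = rng.choice(V, p=unigram)
    mean = embeddings[token_id]
    if sigma > 0.0:
        return mean + sigma * rng.normal(size=d).astype(np.float32)
    return mean

x = noised_embedding(token_id=42)  # one noised input embedding, shape (d,)
```

Setting `sigma=0.0` recovers plain token-replacement noising as a degenerate (zero-variance) mixture; a positive `sigma` makes the Gaussian-mixture reading explicit.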
