We present a new theoretical perspective on data noising in recurrent neural
network language models (Xie et al., 2017). We show that each variant of data
noising is an instance of Bayesian recurrent neural networks with a particular
variational distribution (i.e., a mixture of Gaussians whose weights depend on
statistics derived from the corpus, such as the unigram distribution). We use
this insight to propose a more principled method to apply at prediction time,
as well as natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output
embedding matrices and an element-wise variational smoothing method. We
empirically verify our analysis on two benchmark language modeling datasets and
demonstrate performance improvements over existing data noising methods.