Joint Stochastic Approximation and Its Application to Learning Discrete Latent Variable Models

Despite progress in introducing auxiliary amortized inference models, learning discrete latent variable models remains challenging. In this paper, we show that the difficulty of obtaining reliable stochastic gradients for the inference model, together with the drawback of indirectly optimizing the target log-likelihood, can be gracefully addressed by a new method based on stochastic approximation (SA) theory of the Robbins-Monro type...
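To make the Robbins-Monro reference concrete, here is a minimal sketch of a classical Robbins-Monro stochastic approximation: iteratively finding the root of an expectation using only noisy observations, with step sizes satisfying the usual conditions (sum of steps diverges, sum of squared steps converges). This is an illustration of the general SA scheme the abstract invokes, not the paper's JSA algorithm; the function names are hypothetical.

```python
import random

def robbins_monro(noisy_f, theta0=0.0, steps=5000):
    """Robbins-Monro iteration: theta_{t+1} = theta_t - a_t * g_t,
    where g_t is a noisy observation of f(theta_t).
    Converges to a root of E[g] under standard step-size conditions."""
    theta = theta0
    for t in range(steps):
        a_t = 1.0 / (t + 1)  # satisfies sum a_t = inf, sum a_t^2 < inf
        theta -= a_t * noisy_f(theta)
    return theta

random.seed(0)
mu = 2.0
# Noisy evaluation of f(theta) = theta - mu, whose root is theta = mu.
noisy_f = lambda theta: theta - mu + random.gauss(0.0, 1.0)
estimate = robbins_monro(noisy_f)
```

With the 1/(t+1) schedule, each iterate is a running average of the noisy targets, so `estimate` converges to `mu` as the number of steps grows.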
