no code implementations • NeurIPS 2016 • Francisco R. Ruiz, Michalis Titsias (RC AUEB), David Blei
The reparameterization gradient has become a widely used method for obtaining Monte Carlo estimates of the gradient of the variational objective.
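The reparameterization trick rewrites a sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1), so that gradients with respect to mu and sigma can be taken through the sampling step. A minimal sketch on a toy objective (the target function f and the sample size here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # toy target: E[z^2] under N(mu, sigma^2) equals mu^2 + sigma^2 in closed form
    return z ** 2

def reparam_grad(mu, sigma, n_samples=100_000):
    # reparameterize z = mu + sigma * eps with eps ~ N(0, 1),
    # so the randomness no longer depends on the parameters
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps
    df_dz = 2.0 * z                      # derivative of f at the samples
    grad_mu = np.mean(df_dz * 1.0)       # chain rule: dz/dmu = 1
    grad_sigma = np.mean(df_dz * eps)    # chain rule: dz/dsigma = eps
    return grad_mu, grad_sigma

g_mu, g_sigma = reparam_grad(1.0, 0.5)
# closed form for comparison: d/dmu = 2*mu = 2.0, d/dsigma = 2*sigma = 1.0
```

The Monte Carlo estimates converge to the closed-form gradients as the number of samples grows, which is what makes the estimator usable inside stochastic optimization.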
no code implementations • NeurIPS 2016 • Michalis Titsias (RC AUEB)
The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems.
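The softmax maps a vector of real-valued scores (logits) to a probability distribution over categories. A minimal, numerically stable sketch (the example logits are illustrative):

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability; the output is unchanged
    # because softmax is invariant to shifting all logits by a constant
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs is positive and sums to 1; the largest logit gets the largest probability
```

In large-scale classification or language modeling the normalizing sum runs over the full vocabulary, which is exactly the cost that motivates approximate softmax schemes.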
no code implementations • NeurIPS 2015 • Michalis Titsias (RC AUEB), Miguel Lázaro-Gredilla
This algorithm divides the problem of estimating stochastic gradients over multiple variational parameters into smaller sub-tasks, so that each sub-task intelligently explores the most relevant part of the variational distribution.
no code implementations • NeurIPS 2014 • Michalis Titsias (RC AUEB), Christopher Yau
We introduce a novel sampling algorithm for Markov chain Monte Carlo-based Bayesian inference for factorial hidden Markov models.
no code implementations • NeurIPS 2013 • Michalis Titsias (RC AUEB), Miguel Lázaro-Gredilla
We introduce a novel variational method that makes it possible to approximately integrate out kernel hyperparameters, such as length-scales, in Gaussian process regression.
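The paper's method is variational; purely to illustrate what "integrating out a length-scale" means, the sketch below averages GP posterior-mean predictions over length-scales drawn from a prior instead of committing to a single point estimate. All data, the prior, and the noise level here are made up for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(x1, x2, lengthscale):
    # squared-exponential kernel on 1-D inputs
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, lengthscale, noise=0.1):
    # standard GP regression posterior mean for a fixed length-scale
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, lengthscale)
    return Ks @ np.linalg.solve(K, y_train)

# toy data: one period of a sine wave
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train)
x_test = np.array([0.25, 0.75])

# marginalize the length-scale by Monte Carlo over a log-normal prior
# (a stand-in for the paper's variational posterior over hyperparameters)
samples = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=50)
pred = np.mean(
    [gp_posterior_mean(x_train, y_train, x_test, ls) for ls in samples], axis=0
)
```

Averaging over hyperparameter uncertainty in this way typically yields better-calibrated predictions than plugging in a single optimized length-scale, which is the motivation for marginalizing rather than maximizing.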