Sequential Learning for Dirichlet Process Mixtures

The Dirichlet process (DP) mixture model provides a flexible nonparametric framework for unsupervised learning. Monte Carlo sampling methods typically incur heavy computational cost, while conventional variational inference requires careful design of the variational distribution and of the conditional expectations. In this work, we treat the DP mixture itself as the variational proposal and view the given data as samples drawn from the unknown target distribution. We propose an evidence upper bound (EUBO) to act as a surrogate loss, and fit a DP mixture to the given data by minimizing the EUBO, which is equivalent to minimizing the KL divergence between the target distribution and the DP mixture. We discuss three advantages of EUBO-based DP mixture fitting and show how to build a black-box sequential learning algorithm. For optimization we use stochastic gradient descent (SGD), leveraging automatic differentiation tools. Simulation studies demonstrate the efficiency of the proposed methods.
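To make the surrogate-loss claim concrete, the following standard derivation (not quoted from the paper) shows why minimizing the KL divergence from the target $p$ to a fitted mixture $q_\theta$, estimated from the given data, reduces to maximizing the average log-density of the mixture on those samples:

```latex
\mathrm{KL}(p \,\|\, q_\theta)
  = \mathbb{E}_{p}[\log p(x)] - \mathbb{E}_{p}[\log q_\theta(x)].
% The first term (the negative entropy of p) does not depend on \theta, so
\arg\min_\theta \mathrm{KL}(p \,\|\, q_\theta)
  = \arg\max_\theta \mathbb{E}_{p}[\log q_\theta(x)]
  \approx \arg\max_\theta \frac{1}{n}\sum_{i=1}^{n} \log q_\theta(x_i),
```

where $x_1,\dots,x_n$ are the given data. The sample average on the right is the quantity a black-box SGD fit can optimize directly.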
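As a hedged illustration of this fitting style (not the paper's exact algorithm), the sketch below fits a finite truncation of a Gaussian mixture to synthetic data by stochastic gradient ascent on the average log-density, i.e. by minimizing the sample estimate of the KL divergence up to a constant. The truncation level, fixed component scale, and hand-written gradients are all simplifying assumptions; the paper instead works with a DP mixture and automatic differentiation tools.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the unknown target: two well-separated clusters.
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])

K = 5                          # truncation level (number of components)
sigma = 1.0                    # fixed component scale, for simplicity
mu = rng.normal(0.0, 1.0, K)   # component means
logits = np.zeros(K)           # unconstrained mixing weights

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def avg_logdensity(x, mu, logits):
    """Average log q(x) under the current mixture (stable log-sum-exp)."""
    w = softmax(logits)
    comp = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2 \
           - np.log(sigma * np.sqrt(2.0 * np.pi))
    joint = comp + np.log(w)
    m = joint.max(axis=1, keepdims=True)
    return (m.squeeze(1) + np.log(np.exp(joint - m).sum(axis=1))).mean()

before = avg_logdensity(data, mu, logits)

lr = 0.05
for step in range(1000):
    x = data[rng.choice(len(data), 64)]          # minibatch -> stochastic gradient
    w = softmax(logits)
    comp = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2 \
           - np.log(sigma * np.sqrt(2.0 * np.pi))
    r = softmax(comp + np.log(w), axis=1)        # responsibilities per point
    # Hand-derived gradients of the average log-density:
    #   d/d mu_k     = E[ r_k (x - mu_k) ] / sigma^2
    #   d/d logit_k  = E[ r_k ] - w_k
    mu += lr * (r * (x[:, None] - mu[None, :])).mean(axis=0) / sigma ** 2
    logits += lr * (r.mean(axis=0) - w)

after = avg_logdensity(data, mu, logits)
```

In a DP mixture, the softmax weights over a fixed `K` would be replaced by a stick-breaking construction, and an autodiff framework would supply the gradients instead of the hand-derived expressions above.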
