no code implementations • 30 Jun 2021 • Ghassen Jerfel, Serena Wang, Clara Fannjiang, Katherine A. Heller, Yian Ma, Michael I. Jordan
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference, constructing an importance sampling (IS) proposal distribution through the minimization of a forward KL (FKL) divergence.
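As a rough illustration of the idea (a minimal toy, not the paper's method): within a Gaussian family, minimizing the forward KL(p‖q) reduces to matching q's moments to p's, and those moments can be estimated with self-normalized importance weights under the current proposal. The target density and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy skewed, unnormalized target: p(x) ∝ exp(-x^2/2) * sigmoid(3x).
def log_p(x):
    return -0.5 * x**2 - np.logaddexp(0.0, -3.0 * x)

def log_q(x, mu, var):
    return -0.5 * (x - mu)**2 / var - 0.5 * np.log(2 * np.pi * var)

# Moment matching (forward-KL minimization within the Gaussian family),
# with moments estimated by self-normalized importance weights.
mu, var = 0.0, 4.0
for _ in range(50):
    x = rng.normal(mu, np.sqrt(var), size=5000)
    lw = log_p(x) - log_q(x, mu, var)
    w = np.exp(lw - lw.max())
    w /= w.sum()
    mu = np.sum(w * x)
    var = np.sum(w * (x - mu)**2)

# Use the fitted q as an IS proposal, e.g. to estimate the normalizing
# constant Z = ∫ p(x) dx; here Z = sqrt(2π)/2 exactly, since
# sigmoid(3x) + sigmoid(-3x) = 1 and the Gaussian factor is symmetric.
x = rng.normal(mu, np.sqrt(var), size=20000)
Z_hat = np.mean(np.exp(log_p(x) - log_q(x, mu, var)))
```

Because the target is right-skewed, the fitted proposal mean ends up positive, and the IS estimate of Z lands near √(2π)/2 ≈ 1.2533.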
no code implementations • NeurIPS 2017 • Qi Wei, Kai Fan, Lawrence Carin, Katherine A. Heller
For matrix inversion in the second sub-problem, we learn a convolutional neural network to approximate the matrix inversion, i.e., the inverse mapping is learned by feeding the input through the learned forward network.
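A minimal stand-in for the idea of learning an approximate matrix inverse (the paper trains a convolutional network on images; everything below — the tiny fully connected net, the 2×2 SPD setup — is an illustrative toy, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate well-conditioned SPD matrices and their exact inverses.
def make_spd(n):
    A = rng.normal(size=(n, 2, 2))
    return A @ A.transpose(0, 2, 1) + 2.0 * np.eye(2)

M = make_spd(2000)
X = M.reshape(-1, 4)                      # inputs: flattened matrices
Y = np.linalg.inv(M).reshape(-1, 4)       # targets: flattened inverses
Xs = (X - X.mean(0)) / X.std(0)           # standardize inputs

# Tiny two-layer network trained by plain gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(4, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, 4)); b2 = np.zeros(4)
lr = 0.05

def forward(Z):
    H = np.tanh(Z @ W1 + b1)
    return H, H @ W2 + b2

_, P0 = forward(Xs)
loss0 = np.mean((P0 - Y) ** 2)            # loss before training

for step in range(3000):
    H, P = forward(Xs)
    G = 2.0 * (P - Y) / len(Xs)           # d(MSE)/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1.0 - H**2)        # backprop through tanh
    gW1 = Xs.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, P = forward(Xs)
loss = np.mean((P - Y) ** 2)
```

After training, the learned network maps a flattened matrix to an approximation of its inverse in a single forward pass, which is the appeal over an iterative solve inside each sub-problem.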
no code implementations • 2 Dec 2016 • Elizabeth C Lorenzi, Zhifei Sun, Erich Huang, Ricardo Henao, Katherine A. Heller
We aim to create a framework for transfer learning using latent factor models to learn the dependence structure between a larger source dataset and a target dataset.
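A hedged sketch of the general latent-factor transfer idea (a generic illustration, not the paper's dependence model): estimate factor loadings on the large source dataset, then reuse them to represent a small target dataset that shares the same latent structure. All dimensions and names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared latent structure: both datasets are generated from the same loadings.
d, k = 8, 2
L_true = rng.normal(size=(d, k))
source = rng.normal(size=(5000, k)) @ L_true.T + 0.1 * rng.normal(size=(5000, d))
target = rng.normal(size=(30, k)) @ L_true.T + 0.1 * rng.normal(size=(30, d))

# Loadings from the source: top-k right singular vectors of centered data.
Sc = source - source.mean(0)
_, _, Vt = np.linalg.svd(Sc, full_matrices=False)
L_hat = Vt[:k].T                          # (d, k) shared loading estimate

# Transfer: represent the small target set in the source's factor space.
Tc = target - target.mean(0)
scores = Tc @ L_hat                       # target factor scores
T_hat = scores @ L_hat.T                  # low-rank reconstruction
resid = np.mean((Tc - T_hat) ** 2)
```

With only 30 target rows, the source-estimated loadings recover most of the target's variation, which a factor model fit to the target alone could not do reliably.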
no code implementations • NeurIPS 2015 • Kai Fan, Ziteng Wang, Jeff Beck, James Kwok, Katherine A. Heller
We propose a second-order (Hessian or Hessian-free) based optimization method for variational inference inspired by Gaussian backpropagation, and argue that quasi-Newton optimization can be developed as well.
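A 1-D toy of the underlying identities (an illustrative sketch under assumed notation, not the paper's algorithm): for q = N(μ, s²), Gaussian backpropagation gives ∂/∂μ E_q[f] = E_q[f′] and ∂/∂s² E_q[f] = ½E_q[f″], so with a polynomial log-density the expectations are closed-form and a damped Newton-style fixed-point iteration maximizes the ELBO.

```python
# Toy 1-D unnormalized log-posterior and its Gaussian expectations:
#   f(x) = -x^4/4 - x^2/2 + 2x
# For q = N(mu, s2), using E[x^2] = mu^2 + s2 and E[x^3] = mu^3 + 3*mu*s2:
def E_fprime(mu, s2):
    return -(mu**3 + 3 * mu * s2) - mu + 2.0

def E_fdprime(mu, s2):
    return -3.0 * (mu**2 + s2) - 1.0

# ELBO stationarity gives E_q[f'] = 0 and s2 = -1 / E_q[f''].
# Damped second-order (Newton-style) fixed-point updates:
mu, s2 = 0.0, 1.0
for _ in range(500):
    g = E_fprime(mu, s2)
    h = E_fdprime(mu, s2)               # strictly negative, so -1/h > 0
    mu = mu - 0.5 * g / h               # damped Newton step on the mean
    s2 = 0.5 * s2 + 0.5 * (-1.0 / h)    # damped move toward the variance fixed point
```

The curvature E_q[f″] plays the role of the Hessian here; in higher dimensions the paper's Hessian-free variants avoid forming it explicitly.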
2 code implementations • NeurIPS 2015 • Xiangyu Wang, Fangjian Guo, Katherine A. Heller, David B. Dunson
The new algorithm applies random partition trees to combine the subset posterior draws; it is distribution-free, easy to resample from, and able to adapt to multiple scales.
no code implementations • NeurIPS 2012 • Charles Blundell, Jeff Beck, Katherine A. Heller
We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data.
no code implementations • NeurIPS 2012 • Jeff Beck, Alexandre Pouget, Katherine A. Heller
This ability requires a neural code that represents probability distributions and neural circuits that are capable of implementing the operations of probabilistic inference.
no code implementations • NeurIPS 2011 • Joshua T. Abbott, Katherine A. Heller, Zoubin Ghahramani, Thomas L. Griffiths
How do people determine which elements of a set are most representative of that set?
no code implementations • NeurIPS 2009 • Adam Sanborn, Nick Chater, Katherine A. Heller
Specifically, we present a rational model that does not assume dimensions, but learns the same type of dimensional generalizations that people display.
no code implementations • NeurIPS 2008 • Shakir Mohamed, Zoubin Ghahramani, Katherine A. Heller
Principal Components Analysis (PCA) has become established as one of the key tools for dimensionality reduction when dealing with real-valued data.
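As background, classical PCA on real-valued data can be sketched via an SVD of the centered data matrix (a generic illustration of standard PCA, not the paper's model; the synthetic data and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic real-valued data with one dominant direction of variation.
n, d = 500, 5
z = rng.normal(size=(n, 1))
W = rng.normal(size=(1, d))
X = z @ W + 0.1 * rng.normal(size=(n, d))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_var = S**2 / (n - 1)            # variance along each component
ratios = explained_var / explained_var.sum()

# Project onto the first principal component and reconstruct.
k = 1
scores = Xc @ Vt[:k].T
X_hat = scores @ Vt[:k] + X.mean(axis=0)
```

The component variances sum to the total variance of the centered data, and the rank-1 reconstruction beats the mean-only baseline by construction, since the first principal direction captures the maximum variance.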