1 code implementation • 20 Mar 2024 • Charles C. Margossian, Loucas Pillaud-Vivien, Lawrence K. Saul
Our analysis covers the KL divergence, the Rényi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$.
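As an illustration of the score-based comparison mentioned above, the sketch below computes a Fisher-type divergence, $\mathbb{E}_{x\sim q}[(\nabla\log p(x) - \nabla\log q(x))^2]$, for a pair of univariate Gaussians. This particular definition is an assumption for illustration; the paper's exact divergence may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scores of two univariate Gaussians: p = N(0, 1), q = N(mu, s^2).
def score_p(x):            # d/dx log p(x)
    return -x

def score_q(x, mu, s):     # d/dx log q(x)
    return -(x - mu) / s**2

# A Fisher-type score-based divergence (illustrative choice):
#   D(p, q) = E_{x ~ q}[ (score_p(x) - score_q(x))^2 ]
mu, s = 0.5, 2.0
x = rng.normal(mu, s, size=200_000)
mc = np.mean((score_p(x) - score_q(x, mu, s))**2)

# Closed form for this Gaussian pair: mu^2 + (1/s - s)^2,
# obtained by writing x = mu + s*z with z ~ N(0, 1).
exact = mu**2 + (1 / s - s)**2
print(mc, exact)  # the two values agree up to Monte Carlo error
```

Note that the divergence vanishes exactly when $q = p$ (i.e. $\mu = 0$, $s = 1$), since the two score functions then coincide everywhere.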
no code implementations • 22 Feb 2024 • Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul
We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.
1 code implementation • 17 Feb 2023 • Charles C. Margossian, Lawrence K. Saul
We study various manifestations of this trade-off, notably one where, as the dimension of the problem grows, the per-component entropy gap between $p$ and $q$ becomes vanishingly small even though $q$ underestimates every componentwise variance by a constant multiplicative factor.
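The trade-off described above can be reproduced numerically in a simple setting: take a Gaussian target with unit variances and constant correlation $\rho$, and fit the KL-optimal factorized (mean-field) Gaussian, whose precisions equal the diagonal of the target precision matrix. The equicorrelated target is an assumed example for illustration, not necessarily the paper's construction.

```python
import numpy as np

def mean_field_gap(d, rho=0.5):
    # Target p = N(0, Sigma): unit variances, constant correlation rho.
    Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
    # The KL(q || p)-optimal factorized Gaussian q takes precisions from
    # the diagonal of Sigma^{-1}, so its variances 1/(Sigma^{-1})_ii
    # underestimate the true (unit) variances.
    q_var = 1.0 / np.diag(np.linalg.inv(Sigma))
    # Per-component entropy gap between p and q (shared constants cancel).
    gap = 0.5 * (np.linalg.slogdet(Sigma)[1] - np.sum(np.log(q_var))) / d
    return gap, q_var[0]

for d in (2, 10, 100, 1000):
    gap, v = mean_field_gap(d)
    print(d, round(gap, 4), round(v, 4))
# As d grows, the per-component entropy gap shrinks toward zero
# while q's variance stays near 1 - rho, a constant underestimate.
```

With $\rho = 0.5$, the per-component gap falls by more than an order of magnitude between $d = 2$ and $d = 1000$, while each variance of $q$ stays pinned near $1 - \rho = 0.5$.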
no code implementations • NeurIPS 2021 • Lawrence K. Saul
The model is also over-parameterized in the sense that different pairs of affine transformations can describe classifiers with the same decision boundary and confidence scores.
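A familiar instance of this kind of over-parameterization (not necessarily the paper's specific model, which the snippet above does not fully describe) arises in a two-layer ReLU network: rescaling the first affine map by $c > 0$ and the second by $1/c$ yields a different pair of affine transformations that computes exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0)

# Two-layer network f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def f(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

# Because relu(c*z) = c*relu(z) for c > 0, scaling the first affine
# map by c and the second by 1/c leaves the outputs (hence the
# decision boundary and confidence scores) unchanged.
c = 3.7
x = rng.normal(size=3)
y1 = f(x, W1, b1, W2, b2)
y2 = f(x, c * W1, c * b1, W2 / c, b2)
print(np.allclose(y1, y2))  # True
```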
no code implementations • 11 Sep 2014 • Do-kyum Kim, Geoffrey M. Voelker, Lawrence K. Saul
We study the problem of topic modeling in corpora whose documents are organized in a multi-level hierarchy.
no code implementations • NeurIPS 2012 • Matthew Der, Lawrence K. Saul
We describe a latent variable model for supervised dimensionality reduction and distance metric learning.
no code implementations • NeurIPS 2011 • Vijay Mahadevan, Chi W. Wong, Jose C. Pereira, Tom Liu, Nuno Vasconcelos, Lawrence K. Saul
To perform this visualization, we augment MCU with an additional step for metric learning in the high dimensional voxel space.
no code implementations • NeurIPS 2010 • Diane Hu, Laurens van der Maaten, Youngmin Cho, Sorin Lerner, Lawrence K. Saul
When software developers modify one or more files in a large code base, they must also identify and update other related files.
no code implementations • NeurIPS 2009 • Youngmin Cho, Lawrence K. Saul
We introduce a new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets.
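The degree-1 member of this family is the arc-cosine kernel $k(x, y) = \frac{1}{\pi}\,\lVert x\rVert\,\lVert y\rVert\,(\sin\theta + (\pi - \theta)\cos\theta)$, where $\theta$ is the angle between $x$ and $y$; it corresponds to an infinitely wide layer of rectified linear units. The sketch below (an illustrative implementation, not the authors' code) computes the Gram matrix and checks positive semidefiniteness empirically.

```python
import numpy as np

def arccos_kernel1(X):
    """Degree-1 arc-cosine kernel:
    k(x, y) = (1/pi) * |x| * |y| * (sin t + (pi - t) * cos t),
    with t the angle between x and y. Note k(x, x) = |x|^2 (t = 0)."""
    norms = np.linalg.norm(X, axis=1)
    cos_t = np.clip((X @ X.T) / np.outer(norms, norms), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (1 / np.pi) * np.outer(norms, norms) * (np.sin(t) + (np.pi - t) * np.cos(t))

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 5))
K = arccos_kernel1(X)

# Positive definiteness: every eigenvalue of the Gram matrix should be
# nonnegative (up to floating-point error) for any choice of inputs.
eigs = np.linalg.eigvalsh(K)
print(eigs.min() >= -1e-8)  # True
```

Unlike the Gaussian kernel, this kernel is not shift-invariant: it depends on the norms of its inputs as well as the angle between them, mirroring the geometry induced by a layer of threshold units.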