Search Results for author: Lawrence K. Saul

Found 9 papers, 2 papers with code

An Ordering of Divergences for Variational Inference with Factorized Gaussian Approximations

1 code implementation · 20 Mar 2024 · Charles C. Margossian, Loucas Pillaud-Vivien, Lawrence K. Saul

Our analysis covers the KL divergence, the Rényi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$.

Variational Inference
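The score-based divergence quoted in this abstract can be illustrated with a small Monte Carlo sketch. This is an assumption on my part: I use the Fisher-type quantity $E_q[\|\nabla\log p - \nabla\log q\|^2]$ between a correlated Gaussian target $p$ and a factorized Gaussian $q$; the paper's exact definition and weighting may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: zero-mean correlated Gaussian; approximation q: factorized Gaussian.
Sigma_p = np.array([[1.0, 0.8], [0.8, 1.0]])
P = np.linalg.inv(Sigma_p)            # precision matrix of p
var_q = np.array([0.6, 0.6])          # diagonal variances of q

def score_p(x):
    # gradient of log p at x for a zero-mean Gaussian: -x @ precision
    return -x @ P

def score_q(x):
    # gradient of log q at x for a zero-mean diagonal Gaussian
    return -x / var_q

# Monte Carlo estimate of E_q[ ||score_p(X) - score_q(X)||^2 ]
x = rng.standard_normal((100_000, 2)) * np.sqrt(var_q)
d = np.mean(np.sum((score_p(x) - score_q(x)) ** 2, axis=1))
print(f"score-based divergence estimate: {d:.3f}")
```

The divergence is zero exactly when the two score functions agree everywhere, i.e. when $q = p$ up to normalization.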

Batch and match: black-box variational inference with a score-based divergence

no code implementations · 22 Feb 2024 · Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul

We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.

Variational Inference

The Shrinkage-Delinkage Trade-off: An Analysis of Factorized Gaussian Approximations for Variational Inference

1 code implementation · 17 Feb 2023 · Charles C. Margossian, Lawrence K. Saul

We study various manifestations of this trade-off, notably one where, as the dimension of the problem grows, the per-component entropy gap between $p$ and $q$ becomes vanishingly small even though $q$ underestimates every componentwise variance by a constant multiplicative factor.

Variational Inference
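The behavior described in this abstract is easy to reproduce numerically. A minimal sketch, assuming an equicorrelated Gaussian target and the standard KL(q‖p) optimum for a factorized Gaussian (each variance of q equals the reciprocal of the corresponding diagonal entry of the precision matrix); the paper analyzes this trade-off in much more generality:

```python
import numpy as np

def factorized_fit_variances(Sigma):
    """Variances of the factorized Gaussian q minimizing KL(q || p)
    for p = N(0, Sigma): the optimum is v_i = 1 / (Sigma^{-1})_{ii}."""
    return 1.0 / np.diag(np.linalg.inv(Sigma))

rho = 0.5
gaps, ratios = [], []
for n in [2, 10, 100, 1000]:
    # equicorrelated target: unit marginal variances, pairwise correlation rho
    Sigma = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
    v = factorized_fit_variances(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    # per-component entropy gap between p and q (constant terms cancel)
    gap = 0.5 * logdet / n - 0.5 * np.mean(np.log(v))
    gaps.append(gap)
    ratios.append(v[0])  # q's variance vs. the true marginal variance of 1
    print(f"n={n:5d}  q variance={v[0]:.3f}  per-component entropy gap={gap:.4f}")
```

As the dimension grows, the per-component entropy gap shrinks toward zero while every variance of q stays pinned near $1-\rho$, a constant-factor underestimate of the true marginal variance — the vanishing-gap phenomenon the abstract describes.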

An online passive-aggressive algorithm for difference-of-squares classification

no code implementations · NeurIPS 2021 · Lawrence K. Saul

The model is also over-parameterized in the sense that different pairs of affine transformations can describe classifiers with the same decision boundary and confidence scores.

Classification · Object Recognition

Topic Modeling of Hierarchical Corpora

no code implementations · 11 Sep 2014 · Do-kyum Kim, Geoffrey M. Voelker, Lawrence K. Saul

We study the problem of topic modeling in corpora whose documents are organized in a multi-level hierarchy.

Computer Security

Kernel Methods for Deep Learning

no code implementations · NeurIPS 2009 · Youngmin Cho, Lawrence K. Saul

We introduce a new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets.
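The order-1 member of this kernel family has a known closed form, equal to the expected product of ReLU features under Gaussian random weights. The sketch below compares the closed form to a wide finite random ReLU layer (the helper name `arccos_kernel1` is mine):

```python
import numpy as np

def arccos_kernel1(x, y):
    """Order-1 arc-cosine kernel (Cho & Saul, NeurIPS 2009):
    k(x, y) = (1/pi) * |x| * |y| * (sin t + (pi - t) * cos t),
    where t is the angle between x and y.  It equals
    2 * E[relu(w.x) * relu(w.y)] for w ~ N(0, I)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(t) + (np.pi - t) * cos_t)

# Check against a wide random ReLU layer (the infinite-width limit
# recovers the kernel exactly)
rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
W = rng.standard_normal((200_000, 3))          # random hidden-layer weights
mc = 2 * np.mean(np.maximum(W @ x, 0) * np.maximum(W @ y, 0))
exact = arccos_kernel1(x, y)
print(f"closed form {exact:.3f} vs Monte Carlo {mc:.3f}")
```

Because the kernel is an expectation of inner products of (nonnegative) features, any Gram matrix built from it is positive semidefinite, so it can be dropped into standard kernel machines such as SVMs.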
