Search Results for author: Jen Ning Lim

Found 7 papers, 5 papers with code

Momentum Particle Maximum Likelihood

no code implementations · 12 Dec 2023 · Jen Ning Lim, Juan Kuntz, Samuel Power, Adam M. Johansen

Maximum likelihood estimation (MLE) of latent variable models is often recast as an optimization problem over the extended space of parameters and probability distributions.
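For context, a hedged sketch of the standard recast objective (the free-energy formulation going back to Neal and Hinton, 1998; the exact objective used in this paper may differ): for a model $p_\theta(x, y)$ with latent $x$ and data $y$, one minimizes over both the parameters $\theta$ and a distribution $q$ the functional

$$ F(q, \theta) = \mathbb{E}_{x \sim q}\left[\log q(x) - \log p_\theta(x, y)\right], \qquad \min_{q} F(q, \theta) = -\log p_\theta(y), $$

so that joint minimization of $F$ over $(\theta, q)$ recovers the maximum likelihood estimate of $\theta$.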

Energy Discrepancies: A Score-Independent Loss for Energy-Based Models

1 code implementation · NeurIPS 2023 · Tobias Schröder, Zijing Ou, Jen Ning Lim, Yingzhen Li, Sebastian J. Vollmer, Andrew B. Duncan

Energy-based models are a simple yet powerful class of probabilistic models, but their widespread adoption has been limited by the computational burden of training them.
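As background (the standard definition, not specific to this paper's loss): an energy-based model assigns probabilities via an energy function $E_\theta$,

$$ p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)}, \qquad Z(\theta) = \int \exp(-E_\theta(x))\, dx, $$

and the computational burden arises because the normalizing constant $Z(\theta)$ is typically intractable, so common training losses resort to expensive sampling or score estimation.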

Particle algorithms for maximum likelihood training of latent variable models

1 code implementation · 27 Apr 2022 · Juan Kuntz, Jen Ning Lim, Adam M. Johansen

Neal and Hinton (1998) recast maximum likelihood estimation of any given latent variable model as the minimization of a free energy functional $F$, and the EM algorithm as coordinate descent applied to $F$.
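The coordinate-descent view can be illustrated on a toy two-component Gaussian mixture (an illustrative sketch, not the paper's particle algorithm; component weights and variances are held fixed for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1-D data: two Gaussian clusters with means -2 and 2, unit variance.
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

mu = np.array([-0.5, 0.5])  # initial component means
for _ in range(50):
    # E-step: set q(z) = p_theta(z | x), which minimizes F over q for fixed theta.
    logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
    q = np.exp(logp - logp.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    # M-step: minimize F over theta for fixed q (responsibility-weighted means).
    mu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)
```

Each half-step decreases $F$, so alternating them is exactly coordinate descent; here the means converge to roughly $-2$ and $2$.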

Energy-Based Models for Functional Data using Path Measure Tilting

1 code implementation · 4 Feb 2022 · Jen Ning Lim, Sebastian Vollmer, Lorenz Wolf, Andrew Duncan

Their ability to incorporate domain-specific choices and constraints into the model's structure through composition makes EBMs an appealing candidate for applications in physics, biology, computer vision, and various other fields.

Kernel Stein Tests for Multiple Model Comparison

3 code implementations · NeurIPS 2019 · Jen Ning Lim, Makoto Yamada, Bernhard Schölkopf, Wittawat Jitkrittum

The first test, building on the post-selection inference framework, provably controls the number of best models that are wrongly declared worse (the false positive rate).

More Powerful Selective Kernel Tests for Feature Selection

1 code implementation · 14 Oct 2019 · Jen Ning Lim, Makoto Yamada, Wittawat Jitkrittum, Yoshikazu Terada, Shigeyuki Matsui, Hidetoshi Shimodaira

One approach to addressing this is to condition on the selection procedure, accounting for how the data were used to generate the hypotheses and preventing that information from being reused after selection.

Tasks: Feature Selection, Selection Bias
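The selection bias being corrected can be seen in a small illustrative simulation (not the paper's method): picking the feature most correlated with the response and then naively testing it on the same data inflates the false positive rate far above the nominal 5% level.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, reps = 100, 20, 500
rejections = 0
for _ in range(reps):
    # Global null: no feature is actually associated with y.
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)
    corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
    j = np.argmax(np.abs(corr))   # selection step: best-looking feature
    z = np.sqrt(n) * corr[j]      # naive z-test that ignores the selection
    rejections += abs(z) > 1.96   # nominal two-sided 5% test
rate = rejections / reps
print(f"false positive rate: {rate:.2f}")  # well above the nominal 0.05
```

Conditioning on the event "feature $j$ was selected" restores validity; with $d = 20$ independent null features, the naive procedure rejects in roughly $1 - 0.95^{20} \approx 64\%$ of repetitions rather than 5%.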
