no code implementations • 12 Dec 2023 • Jen Ning Lim, Juan Kuntz, Samuel Power, Adam M. Johansen
Maximum likelihood estimation (MLE) of latent variable models is often recast as an optimization problem over the extended space of parameters and probability distributions.
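In a standard formulation of this idea (a sketch in generic notation; the paper's particular objective may differ in its details), one minimizes a free energy $F$ jointly over a parameter $\theta$ and a distribution $q$:

$$F(\theta, q) = \mathbb{E}_{q}\big[\log q(z) - \log p_\theta(x, z)\big] = -\log p_\theta(x) + \mathrm{KL}\big(q \,\|\, p_\theta(z \mid x)\big),$$

so that minimizing $F$ over $q$ alone recovers the negative log-likelihood, and minimizing jointly over $(\theta, q)$ recovers the maximum likelihood estimate.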
1 code implementation • NeurIPS 2023 • Tobias Schröder, Zijing Ou, Jen Ning Lim, Yingzhen Li, Sebastian J. Vollmer, Andrew B. Duncan
Energy-based models are a simple yet powerful class of probabilistic models, but their widespread adoption has been limited by the computational burden of training them.
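As background (the standard EBM setup, not specific to this paper), an energy function $E_\theta$ defines the density only up to a normalizing constant:

$$p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)}, \qquad Z(\theta) = \int \exp(-E_\theta(x))\,\mathrm{d}x,$$

and the computational burden arises because $Z(\theta)$ is intractable: the log-likelihood gradient $\nabla_\theta \log p_\theta(x) = -\nabla_\theta E_\theta(x) + \mathbb{E}_{p_\theta}[\nabla_\theta E_\theta(X)]$ involves an expectation under the model itself, typically estimated with MCMC.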
1 code implementation • 27 Apr 2022 • Juan Kuntz, Jen Ning Lim, Adam M. Johansen
Neal and Hinton (1998) recast maximum likelihood estimation of any given latent variable model as the minimization of a free energy functional $F$, and the EM algorithm as coordinate descent applied to $F$.
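Concretely, writing $F(\theta, q) = \mathbb{E}_{q}[\log q(z) - \log p_\theta(x, z)]$ for the free energy, coordinate descent on $F$ recovers the familiar E- and M-steps (a sketch of the Neal–Hinton view, in generic notation):

$$q^{(k+1)} = \arg\min_{q} F(\theta^{(k)}, q) = p_{\theta^{(k)}}(z \mid x), \qquad \theta^{(k+1)} = \arg\min_{\theta} F(\theta, q^{(k+1)}).$$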
no code implementations • 2 Mar 2022 • Oscar Giles, Kasra Hosseini, Grigorios Mingas, Oliver Strickson, Louise Bowler, Camila Rangel Smith, Harrison Wilde, Jen Ning Lim, Bilal Mateen, Kasun Amarasinghe, Rayid Ghani, Alison Heppenstall, Nik Lomax, Nick Malleson, Martin O'Reilly, Sebastian Vollmer
Synthetic datasets are often presented as a silver-bullet solution to the problem of privacy-preserving data publishing.
1 code implementation • 4 Feb 2022 • Jen Ning Lim, Sebastian Vollmer, Lorenz Wolf, Andrew Duncan
Their ability to incorporate domain-specific choices and constraints into the structure of the model through composition makes EBMs an appealing candidate for applications in physics, biology, computer vision, and various other fields.
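To illustrate composition (a minimal sketch with hypothetical energy terms, not the paper's model): energies add, so the corresponding unnormalized densities multiply, which is how structural constraints can be folded into an EBM.

```python
import numpy as np

# Sketch: composing an EBM from a learned term and a constraint term,
# so that p(x) ∝ exp(-(E_data(x) + E_constraint(x))).

def e_data(x):
    # Hypothetical "learned" energy; here just a quadratic well.
    return 0.5 * np.sum(x ** 2)

def e_constraint(x):
    # Hypothetical domain constraint: penalize negative coordinates.
    return np.sum(np.maximum(-x, 0.0) ** 2)

def e_total(x):
    # Composition: adding energies multiplies the densities (up to Z).
    return e_data(x) + e_constraint(x)

x = np.array([0.5, -0.2])
print(e_total(x))  # unnormalized negative log-density at x
```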
3 code implementations • NeurIPS 2019 • Jen Ning Lim, Makoto Yamada, Bernhard Schölkopf, Wittawat Jitkrittum
The first test, building on the post-selection inference framework, provably controls the number of best models that are wrongly declared worse (the false positive rate).
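In generic form (a sketch; the paper's statistics and selection events are specific to kernel-based model comparison), post-selection inference replaces the naive p-value with one conditioned on the event that the data selected this particular comparison:

$$p_{\text{selective}} = \mathbb{P}_{H_0}\big(T \ge t_{\text{obs}} \mid \text{selection event}\big),$$

which keeps the test valid at its nominal level despite the data-driven choice of candidate models.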
1 code implementation • 14 Oct 2019 • Jen Ning Lim, Makoto Yamada, Wittawat Jitkrittum, Yoshikazu Terada, Shigeyuki Matsui, Hidetoshi Shimodaira
One approach to addressing this is to condition on the selection procedure, accounting for how the data were used to generate the hypotheses and preventing that information from being used again after selection.
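A small simulation (illustrative only, not the paper's procedure) shows why this is needed: selecting the largest of several test statistics and then reusing the same data with a naive p-value inflates the false positive rate far above the nominal level.

```python
import numpy as np
from scipy import stats

# Sketch: all null hypotheses are true, yet naively testing the
# data-selected maximum z-statistic rejects far too often.
rng = np.random.default_rng(0)
n_trials, n_candidates, alpha = 10_000, 5, 0.05

rejections = 0
for _ in range(n_trials):
    z = rng.standard_normal(n_candidates)  # candidate statistics, all null
    z_max = z.max()                        # data-driven selection step
    p_naive = stats.norm.sf(z_max)         # ignores the selection
    rejections += p_naive < alpha

print(f"naive false positive rate: {rejections / n_trials:.3f}")
```

With five candidates and every null true, the naive rejection rate is roughly $1 - 0.95^5 \approx 0.23$ rather than the nominal $0.05$; conditioning on the selection event is what corrects this.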