no code implementations • 24 Nov 2021 • Nathanael Ackerman, Julian Asilis, Jieqi Di, Cameron Freer, Jean-Baptiste Tristan
We introduce definitions of computable PAC learning for binary classification over computable metric spaces.
no code implementations • ICLR Workshop EBM 2021 • Hao Wu, Babak Esmaeili, Michael Wick, Jean-Baptiste Tristan, Jan-Willem van de Meent
In this paper, we propose conjugate energy-based models (CEBMs), a new class of energy-based models that define a joint density over data and latent variables.
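A joint energy-based density over data $x$ and latent variables $z$ generally takes the following form (a generic EBM sketch; the paper's conjugate parameterization may differ in how $E_\theta$ is structured):

```latex
p_\theta(x, z) = \frac{\exp\!\left(-E_\theta(x, z)\right)}{Z(\theta)},
\qquad
Z(\theta) = \int \exp\!\left(-E_\theta(x, z)\right) \, dx \, dz
```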
no code implementations • 22 Oct 2020 • Michael L. Wick, Kate Silverstein, Jean-Baptiste Tristan, Adam Pocock, Mark Johnson
Indeed, self-supervised language models trained on "positive" examples of English text generalize in desirable ways to many natural language tasks.
no code implementations • 14 Jul 2020 • Jean-Baptiste Tristan, Joseph Tassarotti, Koundinya Vajjha, Michael L. Wick, Anindya Banerjee
Proof assistants can be used to formally verify machine learning systems by constructing machine checked proofs of correctness that rule out such bugs.
no code implementations • NeurIPS 2019 • Michael Wick, Swetasudha Panda, Jean-Baptiste Tristan
The prevailing wisdom is that a model's fairness and its accuracy are in tension with one another.
no code implementations • 11 Nov 2019 • Alican Bozkurt, Babak Esmaeili, Jean-Baptiste Tristan, Dana H. Brooks, Jennifer G. Dy, Jan-Willem van de Meent
Variational autoencoders optimize an objective that combines a reconstruction loss (the distortion) and a KL term (the rate).
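In the rate-distortion view, the negative evidence lower bound decomposes exactly into these two terms:

```latex
-\mathrm{ELBO}(x)
= \underbrace{-\,\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{distortion}}
\;+\;
\underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{rate}}
```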
no code implementations • 1 Nov 2019 • Joseph Tassarotti, Koundinya Vajjha, Anindya Banerjee, Jean-Baptiste Tristan
We present a formal proof in Lean of probably approximately correct (PAC) learnability of the concept class of decision stumps.
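The flavor of the verified statement, for the single-threshold core of a stump (an informal LaTeX sketch; the Lean development's exact formulation, which also handles choosing the coordinate and orientation, may differ): for concepts $h_t(x) = \mathbb{1}[x \le t]$ in the realizable setting, the ERM learner that returns the tightest consistent threshold $\hat{h}_S$ satisfies

```latex
m \ge \frac{1}{\varepsilon} \ln \frac{1}{\delta}
\;\Longrightarrow\;
\Pr_{S \sim D^m}\!\left[\, \mathrm{err}_D(\hat{h}_S) > \varepsilon \,\right] \le \delta
```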
no code implementations • AKBC 2019 • Michael Wick, Swetasudha Panda, Joseph Tassarotti, Jean-Baptiste Tristan
In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands.
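One simple homomorphic scheme with this property hashes each element independently and XOR-combines the results; because XOR is its own inverse, the hash of a disjoint union or a set difference follows directly from the operand hashes. This is an illustrative sketch, not necessarily the construction used in the paper:

```python
# Minimal homomorphic set hash: XOR of per-element hashes.
# (Illustrative sketch; the paper's representation may differ.)
import hashlib

def elem_hash(x):
    """64-bit hash of a single element."""
    digest = hashlib.sha256(repr(x).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def set_hash(s):
    """XOR-combine element hashes; the empty set hashes to 0."""
    h = 0
    for x in s:
        h ^= elem_hash(x)
    return h

a, b = {1, 2, 3}, {4, 5}
# Homomorphism: the hash of a disjoint union is the XOR of the hashes.
assert set_hash(a | b) == set_hash(a) ^ set_hash(b)
# Difference: XOR-ing removed elements back out recovers the smaller set's hash.
assert set_hash(a | b) ^ set_hash(b) == set_hash(a)
```

Note that XOR-combining is order-independent, which is exactly what makes the set (rather than sequence) structure hashable incrementally.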
no code implementations • 2 Oct 2018 • Joseph Tassarotti, Jean-Baptiste Tristan, Michael Wick
We examine a related problem in which the parameters of a Bayesian model are very large and expensive to store in memory, and propose more compact representations of parameter values that can be used during inference.
no code implementations • 26 Jul 2017 • Jay Yoon Lee, Sanket Vaibhav Mehta, Michael Wick, Jean-Baptiste Tristan, Jaime Carbonell
Practitioners apply neural networks to increasingly complex problems in natural language processing, such as syntactic parsing and semantic role labeling, which have rich output structures.
no code implementations • NeurIPS 2014 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam C. Pocock, Stephen Green, Guy L. Steele
We show that the compiler can generate data-parallel inference code scalable to thousands of GPU cores by making use of the conditional independence relationships in the Bayesian network.
no code implementations • 12 Dec 2013 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam Pocock, Stephen J. Green, Guy L. Steele Jr
In this paper, we present a probabilistic programming language and compiler for Bayesian networks designed to make effective use of data-parallel architectures such as GPUs.
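The kind of data-parallelism this enables can be sketched as follows (a NumPy stand-in for generated GPU code; the model and names are illustrative, not the paper's): in many Bayesian networks the latent assignments are conditionally independent given the global parameters, so one Gibbs step over all of them is a single batched update.

```python
# Sketch: because mixture assignments z_i are conditionally independent
# given the component parameters, a Gibbs sweep over all z_i is one
# vectorized computation — the pattern a compiler can map to GPU cores.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_assignments(x, means, var, weights):
    """Resample every assignment z_i in parallel for a 1-D Gaussian mixture."""
    # Unnormalized log-posterior log p(z_i = k | x_i), shape (n, K).
    log_lik = -0.5 * (x[:, None] - means[None, :]) ** 2 / var
    log_post = np.log(weights)[None, :] + log_lik
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # Vectorized categorical sampling via the inverse-CDF trick.
    u = rng.random((x.shape[0], 1))
    z = (post.cumsum(axis=1) < u).sum(axis=1)
    return np.minimum(z, post.shape[1] - 1)  # guard against float round-off

x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
z = gibbs_assignments(x, means=np.array([-3.0, 3.0]), var=1.0,
                      weights=np.array([0.5, 0.5]))
print(z.shape)  # (1000,)
```

The key point is that the loop over data points never appears in the code: conditional independence turns it into array operations that scale across cores.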