no code implementations • 20 Sep 2023 • Daniel Kharitonov, Ryan Turner
Generative AI workflows rely heavily on data-centric tasks, such as filtering samples by annotation fields, vector distances, or scores produced by custom classifiers.
1 code implementation • 20 Apr 2021 • Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, Isabelle Guyon
The challenge was based on the tuning (validation-set) performance of standard machine learning models on real datasets.
2 code implementations • NeurIPS 2019 • David Eriksson, Michael Pearce, Jacob R. Gardner, Ryan Turner, Matthias Poloczek
This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems.
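The key mechanic in such local approaches is a trust region that expands after repeated successes and shrinks after repeated failures. A minimal sketch of that bookkeeping, with the paper's GP surrogate replaced by uniform candidate sampling inside the region to keep it short (all parameter names and tolerances here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def trust_region_optimize(f, x0, n_iters=100, length=0.8,
                          length_min=1e-3, length_max=1.6,
                          succ_tol=3, fail_tol=3, n_cand=50):
    """Minimal trust-region minimization loop (illustrative sketch).

    The surrogate model is replaced by uniform candidate sampling
    inside the trust region; only the expand/shrink logic remains.
    """
    x_best = np.asarray(x0, float)
    f_best = f(x_best)
    n_succ = n_fail = 0
    while length > length_min and n_iters > 0:
        n_iters -= 1
        # Sample candidates inside the current trust region.
        cand = x_best + length * rng.uniform(-0.5, 0.5, (n_cand, x_best.size))
        f_cand = np.array([f(c) for c in cand])
        i = f_cand.argmin()
        if f_cand[i] < f_best:    # success: move and count toward expansion
            x_best, f_best = cand[i], f_cand[i]
            n_succ, n_fail = n_succ + 1, 0
        else:                     # failure: count toward shrinking
            n_succ, n_fail = 0, n_fail + 1
        if n_succ == succ_tol:    # expand the trust region
            length, n_succ = min(2 * length, length_max), 0
        elif n_fail == fail_tol:  # shrink the trust region
            length, n_fail = length / 2, 0
    return x_best, f_best

# Toy usage: minimize a 5-D quadratic starting from the all-ones point.
x, fx = trust_region_optimize(lambda v: float(np.sum(v ** 2)),
                              x0=np.ones(5))
```

Restarting the loop whenever the region collapses to `length_min` is what lets such methods keep exploring globally while optimizing locally.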
4 code implementations • 28 Nov 2018 • Ryan Turner, Jane Hung, Eric Frank, Yunus Saatci, Jason Yosinski
We introduce the Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs.
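The core idea is that a calibrated discriminator D(x) implies a density ratio r(x) = D(x) / (1 - D(x)), which an independence Metropolis-Hastings chain can use to accept or reject fresh generator draws. A self-contained sketch with a toy Gaussian generator and an exact-ratio discriminator standing in for trained networks (both stand-ins are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_gan_sample(generator, discriminator, n_steps=200):
    """Select one improved sample via Metropolis-Hastings.

    `generator()` draws an independent sample x; `discriminator(x)`
    returns a calibrated probability that x is real. For an
    independence proposal from the generator, the MH acceptance
    probability reduces to min(1, r(x') / r(x)) with r = D / (1 - D).
    """
    def ratio(x):
        d = discriminator(x)
        return d / (1.0 - d)

    x = generator()
    for _ in range(n_steps):
        x_prop = generator()  # independence proposal from the generator
        if rng.random() < min(1.0, ratio(x_prop) / ratio(x)):
            x = x_prop        # accept the proposed sample
    return x

def gauss_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Toy demo: "real" data is N(0, 1), the generator draws from N(1, 1),
# and the discriminator is the exact Bayes-optimal classifier.
def toy_generator():
    return rng.normal(1.0, 1.0)

def toy_discriminator(x):
    p_real, p_fake = gauss_pdf(x, 0.0), gauss_pdf(x, 1.0)
    return p_real / (p_real + p_fake)

samples = np.array([mh_gan_sample(toy_generator, toy_discriminator)
                    for _ in range(500)])
# MH-corrected samples concentrate near the real mean 0, not the
# generator mean 1.
```

With a perfect discriminator the corrected samples are exact draws from the data distribution; with an imperfect one, the chain still shifts mass toward it.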
no code implementations • 16 Dec 2017 • Ryan Turner, Brady Neal
We present a new data-driven benchmark system to evaluate the performance of new MCMC samplers.
no code implementations • ICLR 2018 • David Krueger, Chin-wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, Aaron Courville
We study Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.
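At a high level, a hypernetwork maps noise to the weights of a primary network, so drawing many noise samples yields a distribution over predictions. A deliberately tiny sketch of that test-time usage, with an untrained affine map standing in for the invertible hypernetwork and a linear model as the primary network (both are assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernet(eps, scale, shift):
    """Hypothetical hypernetwork: an invertible element-wise affine
    map from noise eps to the primary model's weight vector."""
    return scale * eps + shift  # invertible whenever scale != 0

# Assumed (not learned) hypernetwork parameters, for illustration only.
scale = np.array([0.1, 0.1])
shift = np.array([0.5, -0.2])

x = np.array([1.0, 2.0])
preds = []
for _ in range(1000):
    w = hypernet(rng.standard_normal(2), scale, shift)  # sample weights
    preds.append(w @ x)  # primary model: a linear predictor
mean_pred = float(np.mean(preds))
std_pred = float(np.std(preds))
# Averaging over weight samples gives the predictive mean; the spread
# of predictions reflects weight uncertainty.
```

In the actual framework the affine map is replaced by a trained invertible network, so the implied weight distribution can be far richer than a diagonal Gaussian.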
no code implementations • 30 Jun 2016 • Ryan Turner
We propose a general model explanation system (MES) for "explaining" the output of black-box classifiers.
no code implementations • 20 Mar 2012 • Marc Peter Deisenroth, Ryan Turner, Marco F. Huber, Uwe D. Hanebeck, Carl Edward Rasmussen
We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models.
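The building block such GP-based filters rely on is the GP posterior predictive, which supplies both a mean and an uncertainty for the learned transition and measurement functions. A minimal sketch of that predictive computation for 1-D inputs with an RBF kernel (kernel choice and hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean and variance at x_test via Cholesky solves."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v ** 2, axis=0)
    return mean, var

# Toy usage: learn a transition function y = sin(x) from 20 samples,
# then query the predictive distribution at a new state.
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train)
m, v = gp_predict(x_train, y_train, np.array([2.5]))
# Predictive mean is close to sin(2.5), with small variance at a
# well-covered input.
```

A filter built on this replaces the fixed transition and measurement functions of a Kalman-style recursion with these GP predictives, propagating their variance alongside the state estimate.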