no code implementations • 6 Jul 2024 • Yehu Chen, Muchen Xi, Jacob Montgomery, Joshua Jackson, Roman Garnett
In a third study, we show that IPGP also identifies unique clusters of personality taxonomies in real-world data, demonstrating its potential to advance individualized approaches to psychological diagnosis and treatment.
no code implementations • 23 May 2024 • Quan Nguyen, Anindya Sarkar, Roman Garnett
Active search formalizes a specialized active learning setting where the goal is to collect members of a rare, valuable class.
1 code implementation • NeurIPS 2023 • Kaiwen Wu, Kyurae Kim, Roman Garnett, Jacob R. Gardner
A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
1 code implementation • 28 Nov 2022 • Anindya Sarkar, Michael Lanier, Scott Alfeld, Jiarui Feng, Roman Garnett, Nathan Jacobs, Yevgeniy Vorobeychik
Many problems can be viewed as forms of geospatial search aided by aerial imagery, with examples ranging from detecting poaching activity to human trafficking.
1 code implementation • 21 Oct 2022 • Quan Nguyen, Kaiwen Wu, Jacob R. Gardner, Roman Garnett
Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space.
1 code implementation • 9 Aug 2022 • Sunwoo Ha, Shayan Monadjemi, Roman Garnett, Alvitta Ottley
Our paper seeks to fill this gap by comparing and ranking eight user modeling algorithms based on their performance across four diverse user study datasets.
no code implementations • 8 Feb 2022 • Quan Nguyen, Roman Garnett
Active search is a setting in adaptive experimental design where we aim to uncover members of rare, valuable class(es) subject to a budget constraint.
1 code implementation • 11 Jun 2021 • Quan Nguyen, Arghavan Modiri, Roman Garnett
Active search is a learning paradigm where we seek to identify as many members of a rare, valuable class as possible given a labeling budget.
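As a rough illustration of the setting (not this paper's policy), a one-step greedy active-search loop simply queries whichever unlabeled point the current model judges most likely to be positive. The `nn_prob` model below is a hypothetical 1-nearest-neighbour stand-in for a real probability model:

```python
def greedy_active_search(points, oracle, prob_model, budget):
    """One-step greedy active search: always query the unlabeled point
    judged most likely to belong to the target class."""
    labeled = {}                      # index -> observed label (1 = target)
    unlabeled = set(range(len(points)))
    found = 0
    for _ in range(budget):
        # score every candidate under the current probability model
        best = max(unlabeled,
                   key=lambda i: prob_model(points[i], labeled, points))
        y = oracle(best)              # spend one unit of labeling budget
        labeled[best] = y
        unlabeled.remove(best)
        found += y
    return found, labeled

def nn_prob(x, labeled, points):
    """Hypothetical 1-NN probability estimate: trust the label of the
    nearest already-labeled point, softened away from 0 and 1."""
    if not labeled:
        return 0.5
    nearest = min(labeled, key=lambda i: abs(points[i] - x))
    return 0.9 * labeled[nearest] + 0.05

# toy 1-D pool where positives cluster near x = 0
points = [-5.0, -0.5, 0.2, 3.0, 7.0, 0.1, -0.3, 9.0]
oracle = lambda i: int(abs(points[i]) < 1)
found, labeled = greedy_active_search(points, oracle, nn_prob, budget=4)
```

Nonmyopic policies such as those studied in this line of work improve on this greedy baseline by accounting for how each query informs future ones.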
1 code implementation • NeurIPS 2020 • Shali Jiang, Daniel R. Jiang, Maximilian Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett
In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.
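A minimal sketch of the idea, assuming a toy 1-D GP with a squared-exponential kernel and approximating the two-step value by averaging over fantasy observations (a Monte Carlo simplification of the paper's nested scenario-tree formulation):

```python
import numpy as np
from math import erf, sqrt, pi, exp

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    d = np.asarray(A)[:, None] - np.asarray(B)[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(Xtr, ytr, Xte, jitter=1e-6):
    """GP posterior mean and standard deviation at test points."""
    K = rbf(Xtr, Xtr) + jitter * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def ei(mu, sd, best):
    """Expected improvement (for maximization), per candidate."""
    out = np.empty_like(mu, dtype=float)
    for i, (m, s) in enumerate(zip(mu, sd)):
        z = (m - best) / s
        out[i] = (m - best) * 0.5 * (1 + erf(z / sqrt(2))) \
                 + s * exp(-0.5 * z * z) / sqrt(2 * pi)
    return out

def two_step_score(x1, Xtr, ytr, grid, n_fantasy=16, seed=0):
    """Two-step lookahead: immediate EI at x1 plus the expected best EI
    attainable after observing a fantasy value there."""
    rng = np.random.default_rng(seed)
    mu1, sd1 = gp_posterior(Xtr, ytr, np.array([x1]))
    best = ytr.max()
    score = ei(mu1, sd1, best)[0]
    future = 0.0
    for _ in range(n_fantasy):
        y1 = rng.normal(mu1[0], sd1[0])        # fantasy observation at x1
        Xa, ya = np.append(Xtr, x1), np.append(ytr, y1)
        mu2, sd2 = gp_posterior(Xa, ya, grid)
        future += ei(mu2, sd2, max(best, y1)).max()
    return score + future / n_fantasy

Xtr, ytr = np.array([0.17, 0.63]), np.array([0.2, -0.1])
grid = np.linspace(0.0, 1.0, 21)
scores = np.array([two_step_score(x, Xtr, ytr, grid) for x in grid])
x_next = grid[int(np.argmax(scores))]
```

The paper's contribution is making this kind of nested lookahead computation efficient and differentiable for general horizons; the brute-force sketch above scales poorly beyond two steps.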
no code implementations • 17 Jun 2020 • JBrandon Duck-Mayr, Roman Garnett, Jacob M. Montgomery
This allows us to simultaneously relax assumptions about the shape of the IRFs while preserving the ability to estimate latent traits.
3 code implementations • 12 Jun 2020 • Leah Fauber, Ming-Feng Ho, Simeon Bird, Christian R. Shelton, Roman Garnett, Ishita Korde
Our technique is an extension of an earlier Gaussian process method for detecting damped Lyman-alpha absorbers (DLAs) in quasar spectra with known redshifts.
Astrophysics of Galaxies • Instrumentation and Methods for Astrophysics
1 code implementation • 24 Mar 2020 • Ming-Feng Ho, Simeon Bird, Roman Garnett
We present a revised version of our automated technique using Gaussian processes (GPs) to detect Damped Lyman-$\alpha$ absorbers (DLAs) along quasar (QSO) sightlines.
Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies • Data Analysis, Statistics and Probability
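The underlying model-comparison idea can be sketched as a toy Bayes factor between a "no absorber" model and one whose mean function includes a single Gaussian dip. The `absorber_mean` profile and noise-only covariance here are illustrative assumptions, not the paper's Voigt-profile GP model:

```python
import numpy as np

def log_evidence(y, mean, K):
    """Log marginal likelihood of y under a Gaussian N(mean, K)."""
    r = y - mean
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + len(y) * np.log(2 * np.pi))

def absorber_mean(wl, center, depth=0.6, width=0.03):
    """Hypothetical continuum with a single Gaussian absorption dip
    (a crude stand-in for a physical absorption profile)."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

# toy "spectrum": unit continuum with an absorber at 0.4 plus noise
rng = np.random.default_rng(1)
wl = np.linspace(0.0, 1.0, 80)
flux = absorber_mean(wl, 0.4) + 0.05 * rng.standard_normal(80)

K = 0.05 ** 2 * np.eye(80)   # noise-only covariance, for the sketch
log_bf = (log_evidence(flux, absorber_mean(wl, 0.4), K)
          - log_evidence(flux, np.ones(80), K))   # absorber vs. no-absorber
```

A large positive log Bayes factor favors the absorber model; the published pipeline marginalizes over absorber parameters and uses a learned GP covariance rather than the fixed toy choices above.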
1 code implementation • NeurIPS 2019 • Shali Jiang, Roman Garnett, Benjamin Moseley
We study a special paradigm of active learning, called cost-effective active search, where the goal is to find a given number of positive points from a large unlabeled pool at minimum labeling cost.
1 code implementation • ICML 2020 • Shali Jiang, Henry Chai, Javier Gonzalez, Roman Garnett
Finite-horizon sequential experimental design (SED) arises naturally in many contexts, from hyperparameter tuning in machine learning to more traditional experimental settings.
2 code implementations • NeurIPS 2019 • Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, Yixin Chen
Graph-structured data are abundant in the real world.
no code implementations • 26 Feb 2019 • Henry Chai, Jean-Francois Ton, Roman Garnett, Michael A. Osborne
We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection.
1 code implementation • Machine Learning 2019 • Marion Neumann, Roman Garnett, Christian Bauckhage, Kristian Kersting
We introduce propagation kernels, a general graph-kernel framework for efficiently measuring the similarity of structured data.
Ranked #11 on Graph Classification on NCI109
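A minimal sketch of the propagation-kernel recipe, substituting simple grid quantization for the locality-sensitive hashing used in the paper:

```python
import numpy as np

def propagation_kernel(graphs, t_max=2, bins=4):
    """Propagation-kernel sketch: spread node label distributions with
    the row-normalized adjacency, quantize the distributions into cells
    (a stand-in for the paper's locality-sensitive hashing), and sum
    dot products of the per-graph cell-count vectors over iterations."""
    feats = []
    for A, L in graphs:                       # adjacency, one-hot labels
        P = A / np.clip(A.sum(axis=1, keepdims=True), 1, None)
        D = L.astype(float)
        counts = []
        for _ in range(t_max + 1):
            c = {}
            for key in map(tuple, np.floor(D * bins).astype(int)):
                c[key] = c.get(key, 0) + 1    # count nodes per cell
            counts.append(c)
            D = P @ D                         # one propagation step
        feats.append(counts)
    n = len(graphs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for ci, cj in zip(feats[i], feats[j]):
                K[i, j] += sum(v * cj.get(k, 0) for k, v in ci.items())
    return K

A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path
L = np.array([[1, 0], [0, 1], [1, 0]])             # two node labels
K = propagation_kernel([(A1, L), (A2, L)])
```

Because each iteration contributes a dot product of count vectors, the resulting Gram matrix is positive semidefinite and can be fed directly to a kernel classifier.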
no code implementations • NeurIPS 2018 • Gustavo Malkomes, Roman Garnett
Bayesian optimization is a powerful tool for global optimization of expensive functions.
no code implementations • NeurIPS 2018 • Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley, Roman Garnett
A critical target scenario is high-throughput screening for scientific discovery, such as drug or materials discovery.
no code implementations • 21 Nov 2018 • Shali Jiang, Gustavo Malkomes, Benjamin Moseley, Roman Garnett
We also study the batch setting for the first time, where a batch of $b>1$ points can be queried at each iteration.
1 code implementation • 13 Feb 2018 • Henry Chai, Roman Garnett
We focus on quadrature with nonnegative functions, a common task in Bayesian inference.
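One common way to exploit nonnegativity, in the spirit of warped Bayesian quadrature, is to model the square root of the integrand with a GP, so the implied integrand is nonnegative by construction. The sketch below integrates the implied squared mean on a dense grid; it is an illustrative simplification, not the paper's method:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = np.asarray(A)[:, None] - np.asarray(B)[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def sqrt_bq(x_obs, f_obs, grid, jitter=1e-6):
    """Square-root trick: fit a GP to g = sqrt(f), then integrate the
    implied nonnegative model mean(g)^2 on a dense grid."""
    g = np.sqrt(f_obs)
    K = rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    mu_g = rbf(grid, x_obs) @ np.linalg.solve(K, g)   # posterior mean of g
    return np.sum(mu_g ** 2) * (grid[1] - grid[0])    # crude quadrature

# estimate the integral of f(x) = x^2 on [0, 1] (true value 1/3)
x_obs = np.linspace(0.0, 1.0, 9)
estimate = sqrt_bq(x_obs, x_obs ** 2, np.linspace(0.0, 1.0, 201))
```

The squared GP mean cannot go negative between observations, which is the property this line of work builds on with more careful (moment-matched) treatments of the warping.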
no code implementations • ICML 2017 • Shali Jiang, Gustavo Malkomes, Geoff Converse, Alyssa Shofner, Benjamin Moseley, Roman Garnett
Active search is an active learning setting with the goal of identifying as many members of a given class as possible under a labeling budget.
no code implementations • 2 Dec 2016 • Yifei Ma, Roman Garnett, Jeff Schneider
Autonomous systems can be used to search for sparse signals in a large space; e.g., aerial robots can be deployed to localize threats, detect gas leaks, or respond to distress calls.
no code implementations • NeurIPS 2016 • Gustavo Malkomes, Charles Schaff, Roman Garnett
Despite the success of kernel-based nonparametric methods, kernel selection still requires considerable expertise, and is often described as a “black art.” We present a sophisticated method for automatically searching for an appropriate kernel from an infinite space of potential choices.
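The spirit of such a search, heavily simplified, is a greedy walk over sums and products of base kernels scored by GP log marginal likelihood (standing in here for the model-evidence criterion the paper uses); everything below is an illustrative sketch:

```python
import numpy as np

def loglik(X, y, kernel, noise=0.1):
    """GP log marginal likelihood under the given kernel."""
    K = kernel(X, X) + noise ** 2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + len(y) * np.log(2 * np.pi))

def se(ell):
    """Squared-exponential base kernel."""
    return lambda A, B: np.exp(
        -0.5 * ((np.asarray(A)[:, None] - np.asarray(B)[None, :]) / ell) ** 2)

def lin():
    """Linear base kernel."""
    return lambda A, B: np.asarray(A)[:, None] * np.asarray(B)[None, :]

def greedy_kernel_search(X, y, base, depth=2):
    """Greedily expand the current best kernel by adding or multiplying
    each base kernel, keeping whichever composition scores highest."""
    best_k, best_s = None, -np.inf
    frontier = list(base)
    for _ in range(depth):
        scored = [(loglik(X, y, kern), kern) for kern in frontier]
        s_new, k_new = max(scored, key=lambda t: t[0])
        if s_new <= best_s:
            break                      # no composition improved the score
        best_k, best_s = k_new, s_new
        frontier = [lambda A, B, k=k_new, b=b, op=op: op(k(A, B), b(A, B))
                    for b in base for op in (np.add, np.multiply)]
    return best_k, best_s

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 20)
y = 2 * X + 0.1 * rng.standard_normal(20)
k, s = greedy_kernel_search(X, y, [se(0.5), lin()])
```

The paper replaces this naive greedy scoring with a Bayesian search over a structured kernel grammar, but the evaluate-and-compose loop is the same basic shape.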
no code implementations • 22 Sep 2016 • Philipp Hennig, Roman Garnett
Determinantal point processes (DPPs) are an important concept in random matrix theory and combinatorics.
4 code implementations • 14 May 2016 • Roman Garnett, Shirley Ho, Simeon Bird, Jeff Schneider
We develop an automated technique for detecting damped Lyman-$\alpha$ absorbers (DLAs) along spectroscopic lines of sight to quasi-stellar objects (QSOs or quasars).
Cosmology and Nongalactic Astrophysics • Data Analysis, Statistics and Probability
no code implementations • NeurIPS 2015 • Jacob Gardner, Gustavo Malkomes, Roman Garnett, Kilian Q. Weinberger, Dennis Barbour, John P. Cunningham
Combining this with a previously published model of healthy responses, the proposed method diagnoses the presence or absence of NIHL with drastically fewer samples than existing approaches.
no code implementations • 2 Jul 2015 • Steven Reece, Roman Garnett, Michael Osborne, Stephen Roberts
This paper proposes a novel Gaussian process approach to fault removal in time-series data.
no code implementations • 16 Jan 2015 • Matt J. Kusner, Jacob R. Gardner, Roman Garnett, Kilian Q. Weinberger
The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems.
no code implementations • NeurIPS 2014 • Tom Gunter, Michael A. Osborne, Roman Garnett, Philipp Hennig, Stephen J. Roberts
We propose a novel sampling framework for inference in probabilistic models: an active learning approach that converges more quickly (in wall-clock time) than Markov chain Monte Carlo (MCMC) benchmarks.
1 code implementation • 13 Oct 2014 • Marion Neumann, Roman Garnett, Christian Bauckhage, Kristian Kersting
We introduce propagation kernels, a general graph-kernel framework for efficiently measuring the similarity of structured data.
no code implementations • NeurIPS 2013 • Yifei Ma, Roman Garnett, Jeff Schneider
For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss.
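The V-optimality idea can be sketched on a small graph: take the regularized inverse Laplacian as the GRF prior covariance and greedily query the node whose observation most reduces the summed posterior variance (the expected L2 loss). The modeling choices below (regularization constant, noise-free observations) are illustrative assumptions:

```python
import numpy as np

def v_optimal_query(A, labeled, reg=0.1):
    """Return the unlabeled node whose observation most reduces the
    total posterior variance of a GRF with covariance inv(L + reg*I)."""
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A
    C = np.linalg.inv(L + reg * np.eye(n))       # prior covariance
    if labeled:
        # condition on noise-free observations at the labeled nodes
        idx = list(labeled)
        Cll = C[np.ix_(idx, idx)]
        Cal = C[:, idx]
        C = C - Cal @ np.linalg.solve(Cll, Cal.T)
    best, best_red = None, -1.0
    for u in range(n):
        if u in labeled:
            continue
        red = (C[:, u] ** 2).sum() / C[u, u]     # total variance reduction
        if red > best_red:
            best, best_red = u, red
    return best

# path graph on 5 nodes: the most central node is queried first
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
v = v_optimal_query(A, set())
```

On this path graph the first query lands on the middle node, the intuitive "most informative" choice; the paper's Σ-optimality criterion modifies exactly this reduction term to target classification (0/1) loss instead.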
no code implementations • 24 Oct 2013 • Roman Garnett, Michael A. Osborne, Philipp Hennig
We propose an active learning method for discovering low-dimensional structure in high-dimensional Gaussian process (GP) tasks.
no code implementations • NeurIPS 2012 • Michael Osborne, Roman Garnett, Zoubin Ghahramani, David K. Duvenaud, Stephen J. Roberts, Carl E. Rasmussen
Numerical integration is a key component of many problems in scientific computing, statistical modelling, and machine learning.
no code implementations • 27 Jun 2012 • Roman Garnett, Yamuna Krishnamurthy, Xuehan Xiong, Jeff Schneider, Richard Mann
In the second, active surveying, our goal is to actively query points to ultimately predict the proportion of a given class.