no code implementations • 12 Nov 2021 • Sanyam Kapoor, Valerio Perrone
XGBoost, a scalable tree boosting algorithm, has proven effective for many prediction tasks of practical interest, especially on tabular datasets.
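As a rough illustration, here is a minimal sketch of fitting XGBoost to a small tabular dataset; the dataset and hyperparameter values are illustrative, not taken from the paper.

```python
# Minimal sketch: XGBoost on a tabular dataset (values are illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(
    n_estimators=200,   # number of boosted trees
    max_depth=4,        # depth of each tree
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```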
1 code implementation • 10 Jun 2021 • Eric Hans Lee, David Eriksson, Valerio Perrone, Matthias Seeger
Bayesian optimization (BO) is a popular method for optimizing expensive-to-evaluate black-box functions.
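For context, a minimal BO loop using scikit-optimize's `gp_minimize`; the objective, bounds, and budget below are illustrative stand-ins for an expensive black-box such as a model-training run.

```python
# Minimal sketch of Bayesian optimization with scikit-optimize
# (objective and search bounds are illustrative, not from the paper).
from skopt import gp_minimize

def expensive_black_box(x):
    # stands in for e.g. a training run returning a validation loss
    return (x[0] - 2.0) ** 2 + 0.1 * x[0]

result = gp_minimize(
    expensive_black_box,
    dimensions=[(-5.0, 5.0)],  # search bounds
    n_calls=20,                # evaluation budget
    random_state=0,
)
print("best x:", result.x, "best value:", result.fun)
```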
no code implementations • 10 Jun 2021 • David Salinas, Valerio Perrone, Olivier Cruchant, Cedric Archambeau
In three benchmarks where hardware is selected in addition to hyperparameters, we obtain runtime and cost reductions of at least 5.8x and 8.8x, respectively.
1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau
Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.
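A simplified stopping rule in the spirit of such a criterion (the function name and the `epsilon` threshold are hypothetical, and this is not the paper's exact bound): terminate once an optimistic bound on the best achievable value is within `epsilon` of the best value observed so far.

```python
import numpy as np

def should_terminate(ucb_values, best_observed, epsilon):
    # Hypothetical sketch (maximization convention): if even the most
    # optimistic upper confidence bound exceeds the best observed value
    # by less than epsilon, the remaining improvement is negligible.
    regret_bound = np.max(ucb_values) - best_observed
    return regret_bound < epsilon
```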
1 code implementation • 22 Jan 2021 • Valerio Perrone, Simon Hengchen, Marco Palma, Alessandro Vatri, Jim Q. Smith, Barbara McGillivray
In this chapter we build on GASC, a recent computational approach to semantic change based on a dynamic Bayesian mixture model.
no code implementations • 15 Dec 2020 • Valerio Perrone, Huibin Shen, Aida Zolic, Iaroslav Shcherbatyi, Amr Ahmed, Tanya Bansal, Michele Donini, Fela Winkelmolen, Rodolphe Jenatton, Jean Baptiste Faddoul, Barbara Pogorzelska, Miroslav Miladinovic, Krishnaram Kenthapadi, Matthias Seeger, Cédric Archambeau
To democratize access to machine learning systems, it is essential to automate hyperparameter tuning.
no code implementations • 15 Dec 2020 • Piali Das, Valerio Perrone, Nikita Ivkin, Tanya Bansal, Zohar Karnin, Huibin Shen, Iaroslav Shcherbatyi, Yotam Elor, Wilton Wu, Aida Zolic, Thibaut Lienart, Alex Tang, Amr Ahmed, Jean Baptiste Faddoul, Rodolphe Jenatton, Fela Winkelmolen, Philip Gautier, Leo Dirac, Andre Perunicic, Miroslav Miladinovic, Giovanni Zappella, Cédric Archambeau, Matthias Seeger, Bhaskar Dutt, Laurence Rouesnel
AutoML systems provide a black-box solution to machine learning problems by selecting the right way of processing features, choosing an algorithm and tuning the hyperparameters of the entire pipeline.
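To make the pipeline view concrete, here is a toy stand-in using scikit-learn (not the AutoML system described in the paper): feature processing, the algorithm, and its hyperparameters are tuned jointly as a single search space.

```python
# Toy sketch of tuning an entire pipeline end to end (illustrative only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", RandomForestClassifier())])
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "clf__n_estimators": [50, 100, 200],  # hyperparameters of the
        "clf__max_depth": [3, 5, None],       # pipeline's final step
    },
    n_iter=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```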
no code implementations • 23 Nov 2020 • Gauthier Guinet, Valerio Perrone, Cédric Archambeau
Bayesian optimization (BO) is a popular method to optimize expensive black-box functions.
no code implementations • AABI Symposium 2021 • Théo Galy-Fajou, Valerio Perrone, Manfred Opper
Bayesian inference is intractable for most practical problems and requires approximation schemes with several trade-offs.
no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau
Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.
no code implementations • 22 Mar 2020 • Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger
Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.
no code implementations • 15 Oct 2019 • Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger
We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.
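Sketching cMES itself requires its full information-theoretic machinery; for intuition, here is the simpler feasibility-weighted expected improvement baseline (a classical technique, plainly not the cMES acquisition of the paper).

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu, sigma, best_feasible, prob_feasible):
    # Feasibility-weighted EI (minimization convention): expected
    # improvement over the best feasible point, multiplied by the
    # probability that the constraint is satisfied at the candidate.
    z = (best_feasible - mu) / sigma
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
    return ei * prob_feasible
```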
no code implementations • ICML 2020 • David Salinas, Huibin Shen, Valerio Perrone
In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different objectives.
no code implementations • NeurIPS 2019 • Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton
Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating a variety of black-box optimization problems.
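A hedged sketch of the search-space-reduction idea (the helper and data below are hypothetical): given the best configurations found on related tasks, take a per-dimension bounding box as the new, smaller search space.

```python
import numpy as np

def bounding_box(best_configs):
    # One row per related task, one column per hyperparameter; the
    # per-dimension min/max over past optima defines a tighter box
    # within which BO then searches.
    best_configs = np.asarray(best_configs)
    return best_configs.min(axis=0), best_configs.max(axis=0)

# e.g. best (learning_rate, num_layers) from three previous tasks
low, high = bounding_box([[0.01, 2], [0.05, 3], [0.02, 4]])
```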
no code implementations • 25 Sep 2019 • David Salinas, Huibin Shen, Valerio Perrone
In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics.
no code implementations • WS 2019 • Valerio Perrone, Marco Palma, Simon Hengchen, Alessandro Vatri, Jim Q. Smith, Barbara McGillivray
Word meaning changes over time, depending on linguistic and extra-linguistic factors.
no code implementations • NeurIPS 2018 • Valerio Perrone, Rodolphe Jenatton, Matthias W. Seeger, Cedric Archambeau
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization.
1 code implementation • NeurIPS 2018 • Jeffrey Chan, Valerio Perrone, Jeffrey P. Spence, Paul A. Jenkins, Sara Mathieson, Yun S. Song
To achieve this, two inferential challenges need to be addressed: (1) population data are exchangeable, calling for methods that efficiently exploit the symmetries of the data, and (2) computing likelihoods is intractable as it requires integrating over a set of correlated, extremely high-dimensional latent variables.
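A minimal sketch of the first ingredient, a permutation-invariant (exchangeable) encoder in PyTorch; the architecture and layer sizes are illustrative, not the paper's network.

```python
import torch

class ExchangeableEncoder(torch.nn.Module):
    # Apply the same network to every individual in the sample, then
    # mean-pool, so the output is unchanged under any reordering of
    # the (exchangeable) individuals.
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.phi = torch.nn.Sequential(
            torch.nn.Linear(d_in, d_hidden), torch.nn.ReLU()
        )

    def forward(self, x):  # x: (batch, n_individuals, d_in)
        return self.phi(x).mean(dim=1)
```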
no code implementations • 8 Dec 2017 • Valerio Perrone, Rodolphe Jenatton, Matthias Seeger, Cedric Archambeau
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization.
no code implementations • 22 Nov 2016 • Valerio Perrone, Paul A. Jenkins, Dario Spano, Yee Whye Teh
We present the Wright-Fisher Indian buffet process (WF-IBP), a probabilistic model for time-dependent data assumed to have been generated by an unknown number of latent features.
no code implementations • 14 Sep 2016 • Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian J. Vollmer
Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo.
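One illustrative reading of the resulting update (the constants, momentum decay, and function names here are assumptions, not the paper's exact algorithm): the parameter step follows a relativistic "velocity" whose norm is bounded by c, making the optimizer robust to occasional large gradients.

```python
import numpy as np

def relativistic_sgd(grad, theta, n_steps, eps=1e-2, m=1.0, c=1.0, beta=0.9):
    # Sketch only. The step direction p / sqrt(|p|^2/c^2 + m^2) is the
    # gradient of the relativistic kinetic energy; its norm approaches
    # c as |p| grows, so the update speed is capped.
    p = np.zeros_like(theta)
    for _ in range(n_steps):
        p = beta * p - eps * grad(theta)               # accumulate momentum
        v = p / np.sqrt(np.sum(p ** 2) / c ** 2 + m ** 2)
        theta = theta + eps * v                        # bounded step
    return theta

# usage: minimize f(t) = t^2, whose gradient is 2t
theta = relativistic_sgd(lambda t: 2 * t, np.array([5.0]), n_steps=200)
```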