Search Results for author: Valerio Perrone

Found 18 papers, 3 papers with code

A Nonmyopic Approach to Cost-Constrained Bayesian Optimization

1 code implementation · 10 Jun 2021 · Eric Hans Lee, David Eriksson, Valerio Perrone, Matthias Seeger

Bayesian optimization (BO) is a popular method for optimizing expensive-to-evaluate black-box functions.

Hyperparameter Optimization
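Several entries in this listing build on the same Bayesian optimization loop: fit a probabilistic surrogate to the evaluations so far, maximize an acquisition function over candidates, and evaluate the objective at the chosen point. A minimal sketch of that loop with a Gaussian-process surrogate and expected improvement (the toy objective, kernel lengthscale, and grid are illustrative assumptions, not from any of these papers):

```python
import numpy as np
from math import erf

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xstar, jitter=1e-8):
    """Posterior mean and std of a zero-mean, unit-variance GP at Xstar."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum("ij,ij->j", Ks, Kinv @ Ks)  # diag of posterior cov
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimization."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sigma * phi

def black_box(x):
    # stand-in for an expensive objective, e.g. a validation loss
    return (x - 0.3) ** 2

cand = np.linspace(0.0, 1.0, 101)   # discretized search space
X = np.array([0.0, 0.5, 1.0])       # initial design
y = black_box(X)

for _ in range(10):
    mu, sigma = gp_posterior(X, y, cand)
    ei = expected_improvement(mu, sigma, y.min())
    for xi in X:                    # never re-sample an evaluated point
        ei[np.isclose(cand, xi)] = -np.inf
    x_next = cand[int(np.argmax(ei))]
    X = np.append(X, x_next)
    y = np.append(y, black_box(x_next))

best_x = X[int(np.argmin(y))]       # ends up near the optimum at 0.3
```

In practice the surrogate's hyperparameters would be learned and the acquisition optimized continuously; the grid and fixed lengthscale here only keep the sketch short.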

A multi-objective perspective on jointly tuning hardware and hyperparameters

no code implementations · 10 Jun 2021 · David Salinas, Valerio Perrone, Olivier Cruchant, Cedric Archambeau

In three benchmarks where hardware is selected in addition to hyperparameters, we obtain runtime and cost reductions of at least 5.8x and 8.8x, respectively.

AutoML · Transfer Learning
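A multi-objective view of jointly tuning hardware and hyperparameters amounts to keeping only configurations on the Pareto front of objectives such as runtime and cost. A minimal sketch of Pareto filtering for two objectives to be minimized (the configuration values below are made up for illustration):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, 2) array of objectives to minimize."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        # i is dominated if some point is <= in every objective and < in at least one
        dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        keep[i] = not dominated.any()
    return pts[keep]

# hypothetical (runtime hours, dollar cost) per candidate configuration
configs = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [2.0, 5.0], [4.0, 4.0]])
front = pareto_front(configs)  # [2, 5] and [4, 4] are dominated and dropped
```

The O(n²) scan is fine for the small candidate sets typical of tuning runs; the trade-off between the surviving points is then a user decision.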

Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

no code implementations · 16 Apr 2021 · Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

In practice, however, an improvement in the validation metric may not translate into better predictive performance on a test set, especially when tuning models trained on small datasets.

Hyperparameter Optimization

Lexical semantic change for Ancient Greek and Latin

no code implementations · 22 Jan 2021 · Valerio Perrone, Simon Hengchen, Marco Palma, Alessandro Vatri, Jim Q. Smith, Barbara McGillivray

In this chapter we build on GASC, a recent computational approach to semantic change based on a dynamic Bayesian mixture model.

Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization

no code implementations · 23 Nov 2020 · Gauthier Guinet, Valerio Perrone, Cédric Archambeau

Bayesian optimization (BO) is a popular method to optimize expensive black-box functions.

Fair Bayesian Optimization

1 code implementation · 9 Jun 2020 · Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.


Cost-aware Bayesian Optimization

no code implementations · 22 Mar 2020 · Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger

Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.

Global Optimization
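A standard baseline in the cost-aware BO literature is expected improvement per unit cost, which divides the acquisition value by a model of the evaluation cost so that cheap, promising candidates are preferred. A minimal sketch with a toy surrogate posterior (the means, variances, and costs are invented for illustration; this is the generic heuristic, not the specific method of the paper above):

```python
import numpy as np
from math import erf

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimization."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sigma * phi

# toy surrogate posterior over five candidate configurations
mu    = np.array([0.5, 0.2, 0.2, 0.8, 0.4])   # predicted objective
sigma = np.array([0.1, 0.3, 0.3, 0.2, 0.1])   # predictive std
cost  = np.array([1.0, 10.0, 1.0, 1.0, 1.0])  # predicted evaluation cost
best  = 0.3                                    # incumbent (best value so far)

ei = expected_improvement(mu, sigma, best)
plain    = int(np.argmax(ei))          # cost-blind choice
per_cost = int(np.argmax(ei / cost))   # EI per unit cost
```

Candidates 1 and 2 have identical EI, but candidate 1 is ten times more expensive, so the cost-aware rule switches to candidate 2. The known weakness of this heuristic, which motivates work like the paper above, is that dividing by cost can over-penalize expensive regions that actually contain the optimum.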

Constrained Bayesian Optimization with Max-Value Entropy Search

no code implementations · 15 Oct 2019 · Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.

Hyperparameter Optimization

A Quantile-based Approach for Hyperparameter Transfer Learning

no code implementations · ICML 2020 · David Salinas, Huibin Shen, Valerio Perrone

In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different objectives.

Hyperparameter Optimization · Neural Architecture Search · +1

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

no code implementations · NeurIPS 2019 · Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton

Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating the optimization of a variety of black-box optimization problems.

Hyperparameter Optimization · Transfer Learning

Scalable Hyperparameter Transfer Learning

no code implementations · NeurIPS 2018 · Valerio Perrone, Rodolphe Jenatton, Matthias W. Seeger, Cedric Archambeau

Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization.

Hyperparameter Optimization · Transfer Learning

A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks

1 code implementation · NeurIPS 2018 · Jeffrey Chan, Valerio Perrone, Jeffrey P. Spence, Paul A. Jenkins, Sara Mathieson, Yun S. Song

To achieve this, two inferential challenges need to be addressed: (1) population data are exchangeable, calling for methods that efficiently exploit the symmetries of the data, and (2) computing likelihoods is intractable as it requires integrating over a set of correlated, extremely high-dimensional latent variables.

Poisson Random Fields for Dynamic Feature Models

no code implementations · 22 Nov 2016 · Valerio Perrone, Paul A. Jenkins, Dario Spano, Yee Whye Teh

We present the Wright-Fisher Indian buffet process (WF-IBP), a probabilistic model for time-dependent data assumed to have been generated by an unknown number of latent features.

Relativistic Monte Carlo

no code implementations · 14 Sep 2016 · Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian J. Vollmer

Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo.
