Search Results for author: Valerio Perrone

Found 21 papers, 4 papers with code

A Simple and Fast Baseline for Tuning Large XGBoost Models

no code implementations • 12 Nov 2021 • Sanyam Kapoor, Valerio Perrone

XGBoost, a scalable tree boosting algorithm, has proven effective for many prediction tasks of practical interest, especially using tabular datasets.

Hyperparameter Optimization
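
The entry above is about tuning large XGBoost models. As an illustrative point of reference (not the paper's specific baseline), the sketch below runs a plain random search over a few common XGBoost hyperparameters and scores each configuration on a held-out validation split; the dataset, ranges, and budget are all assumptions.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data standing in for a tabular prediction task
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)
best_acc, best_params = -np.inf, None
for _ in range(20):  # small random-search budget (assumption)
    params = {
        "n_estimators": 200,
        "max_depth": int(rng.integers(3, 11)),
        "learning_rate": float(10 ** rng.uniform(-3, -0.5)),
        "subsample": float(rng.uniform(0.5, 1.0)),
        "colsample_bytree": float(rng.uniform(0.5, 1.0)),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_acc, best_params = acc, params

print(f"best validation accuracy: {best_acc:.3f} with {best_params}")
```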

A multi-objective perspective on jointly tuning hardware and hyperparameters

no code implementations • 10 Jun 2021 • David Salinas, Valerio Perrone, Olivier Cruchant, Cedric Archambeau

In three benchmarks where hardware is selected in addition to hyperparameters, we obtain runtime and cost reductions of at least 5.8x and 8.8x, respectively.

AutoML • Transfer Learning

Automatic Termination for Hyperparameter Optimization

1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.

Bayesian Optimization • Hyperparameter Optimization
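
The paper's termination criterion is regret-based; as a much simpler stand-in for the general idea, the sketch below stops an HPO run once the incumbent validation loss has stopped improving by more than an estimated noise level. Function name, arguments, and defaults are illustrative, not the paper's method.

```python
def should_terminate(best_so_far, noise_level, patience=10):
    """best_so_far: best validation losses observed so far (lower is better).

    Stop once the incumbent has improved by less than `noise_level`
    over the last `patience` evaluations.
    """
    if len(best_so_far) <= patience:
        return False
    recent_gain = best_so_far[-patience - 1] - best_so_far[-1]
    return recent_gain <= noise_level
```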

Lexical semantic change for Ancient Greek and Latin

1 code implementation • 22 Jan 2021 • Valerio Perrone, Simon Hengchen, Marco Palma, Alessandro Vatri, Jim Q. Smith, Barbara McGillivray

In this chapter we build on GASC, a recent computational approach to semantic change based on a dynamic Bayesian mixture model.

Fair Bayesian Optimization

no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.

Bayesian Optimization • Fairness

Cost-aware Bayesian Optimization

no code implementations • 22 Mar 2020 • Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger

Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.

Bayesian Optimization
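
For intuition on the cost-aware setting described above, the sketch below implements a standard cost-aware heuristic, expected improvement per unit of predicted cost, rather than the paper's specific method. Inputs are assumed to be GP posterior means and standard deviations at candidate points.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """EI for minimization, given GP posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_unit_cost(mu_f, sigma_f, best_f, predicted_cost):
    """Favor candidates that promise improvement at low evaluation cost."""
    return expected_improvement(mu_f, sigma_f, best_f) / np.maximum(predicted_cost, 1e-12)
```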

Constrained Bayesian Optimization with Max-Value Entropy Search

no code implementations • 15 Oct 2019 • Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.

Bayesian Optimization • Hyperparameter Optimization
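
cMES itself is information-theoretic and more involved; as a simpler, standard reference point for constrained BO, the sketch below weights expected improvement by the modeled probability that a constraint c(x) <= 0 is satisfied. All names are illustrative and this is not the paper's acquisition function.

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_feasible_f, mu_c, sigma_c):
    """EI on the objective times the probability that the constraint c(x) <= 0 holds."""
    sigma_f = np.maximum(sigma_f, 1e-12)
    z = (best_feasible_f - mu_f) / sigma_f
    ei = (best_feasible_f - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)
    prob_feasible = norm.cdf(-mu_c / np.maximum(sigma_c, 1e-12))
    return ei * prob_feasible
```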

A Quantile-based Approach for Hyperparameter Transfer Learning

no code implementations • ICML 2020 • David Salinas, Huibin Shen, Valerio Perrone

In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different objectives.

Bayesian Optimization • Hyperparameter Optimization +3
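
The core quantile idea can be sketched as a Gaussian copula transform: per-task objective values are mapped to empirical quantiles and then through the inverse normal CDF, so observations from different datasets and metrics end up on a comparable scale. How the transformed values are subsequently modeled is omitted here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(y):
    """Map raw objective values from one task/metric to a standard normal scale."""
    y = np.asarray(y, dtype=float)
    quantiles = rankdata(y) / (len(y) + 1)  # empirical CDF values in (0, 1)
    return norm.ppf(quantiles)              # inverse normal CDF (Gaussian copula)
```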

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

no code implementations • NeurIPS 2019 • Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton

Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating the optimization of a variety of black-box optimization problems.

Bayesian Optimization • Hyperparameter Optimization +1
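
One way to picture the learned search space: collect the best configuration found on each related task and restrict the new search to the smallest axis-aligned box containing them. The sketch below is illustrative and omits the paper's refinements.

```python
import numpy as np

def learn_search_space(best_configs):
    """best_configs: (n_tasks, n_hyperparameters) array of per-task optima."""
    best_configs = np.asarray(best_configs, dtype=float)
    return best_configs.min(axis=0), best_configs.max(axis=0)

# Example: per-task optima for (learning_rate, max_depth)
lower, upper = learn_search_space([[0.01, 4], [0.05, 6], [0.02, 5]])
# BO on the new task would then search inside the box [lower, upper]
```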

A Copula approach for hyperparameter transfer learning

no code implementations • 25 Sep 2019 • David Salinas, Huibin Shen, Valerio Perrone

In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics.

Bayesian Optimization • Thompson Sampling +1

Scalable Hyperparameter Transfer Learning

no code implementations • NeurIPS 2018 • Valerio Perrone, Rodolphe Jenatton, Matthias W. Seeger, Cedric Archambeau

Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization.

Bayesian Optimization • Hyperparameter Optimization +2

A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks

1 code implementation • NeurIPS 2018 • Jeffrey Chan, Valerio Perrone, Jeffrey P. Spence, Paul A. Jenkins, Sara Mathieson, Yun S. Song

To achieve this, two inferential challenges need to be addressed: (1) population data are exchangeable, calling for methods that efficiently exploit the symmetries of the data, and (2) computing likelihoods is intractable as it requires integrating over a set of correlated, extremely high-dimensional latent variables.
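
To illustrate challenge (1), the sketch below (in PyTorch, an assumption) is a permutation-invariant network: each individual's features pass through a shared embedding and are aggregated by mean pooling, so the output does not depend on the ordering of individuals in the sample. Layer sizes are arbitrary and this is only a simplified stand-in for the paper's exchangeable architecture.

```python
import torch
import torch.nn as nn

class ExchangeableNet(nn.Module):
    def __init__(self, n_features, hidden=64, n_outputs=1):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_outputs))

    def forward(self, x):
        # x: (batch, n_individuals, n_features); mean-pool over individuals
        h = self.embed(x).mean(dim=1)
        return self.head(h)

net = ExchangeableNet(n_features=20)
out = net(torch.randn(8, 100, 20))  # invariant to the ordering of the 100 individuals
```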

Poisson Random Fields for Dynamic Feature Models

no code implementations • 22 Nov 2016 • Valerio Perrone, Paul A. Jenkins, Dario Spano, Yee Whye Teh

We present the Wright-Fisher Indian buffet process (WF-IBP), a probabilistic model for time-dependent data assumed to have been generated by an unknown number of latent features.

Relativistic Monte Carlo

no code implementations • 14 Sep 2016 • Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian J. Vollmer

Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo.
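
A deterministic sketch of the relativistic update (omitting the stochastic-gradient noise and friction terms of the full sampler): momentum accumulates gradients as usual, but the parameter step uses the relativistic velocity p / (m * sqrt(||p||^2 / (m^2 c^2) + 1)), whose norm is bounded by the "speed of light" c. Function name, hyperparameter names, and defaults are illustrative.

```python
import numpy as np

def relativistic_step(x, p, grad, eps=1e-2, m=1.0, c=1.0):
    """One Euler step of the (deterministic) relativistic dynamics."""
    p = p - eps * grad                                      # momentum update
    gamma = np.sqrt(np.dot(p, p) / (m ** 2 * c ** 2) + 1.0)
    x = x + eps * p / (m * gamma)                           # step norm bounded by eps * c
    return x, p
```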
