Search Results for author: Cédric Archambeau

Found 14 papers, 2 papers with code

Multi-objective Asynchronous Successive Halving

1 code implementation • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau

Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.

Fairness • Hyperparameter Optimization • +2
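The successive-halving idea the paper builds on can be sketched as follows. This is a minimal synchronous, single-objective version, not the paper's multi-objective asynchronous algorithm, and the toy objective is hypothetical:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Evaluate all configs on a small budget, keep the top 1/eta
    fraction, and repeat with eta times more budget."""
    budget = min_budget
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: evaluate(c, budget), reverse=True)
        configs = ranked[: max(1, len(ranked) // eta)]
        budget *= eta
    return configs[0]

def toy_accuracy(lr, budget):
    # Hypothetical objective: peaks at lr = 0.1, improves slightly with budget.
    return 1.0 - (lr - 0.1) ** 2 + 0.001 * budget

best = successive_halving([0.001, 0.01, 0.1, 0.5, 1.0], toy_accuracy)  # 0.1
```

The asynchronous variant studied in the paper promotes configurations as soon as enough results are in, rather than waiting for a full rung to finish.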

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations • 8 Jun 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

Hyperparameter Transfer Learning with Adaptive Complexity

no code implementations • 25 Feb 2021 • Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau

Bayesian optimization (BO) is a sample-efficient approach to automatically tune the hyperparameters of machine learning models.

Decision Making • Transfer Learning
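The basic BO loop that these papers extend (with transfer learning, cost-awareness, or fairness constraints) can be sketched like this. It is illustrative only: a scikit-learn GP surrogate with a lower-confidence-bound acquisition, and a hypothetical black-box objective:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def blackbox(x):                      # hypothetical expensive objective,
    return np.sin(3 * x) + 0.5 * x    # e.g. a validation loss to minimize

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(3, 1))    # a few random initial evaluations
y = blackbox(X).ravel()
grid = np.linspace(0, 2, 200).reshape(-1, 1)  # candidate points

for _ in range(10):
    # Fit the GP surrogate to all observations so far.
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    # Lower-confidence-bound acquisition: trade off mean vs. uncertainty.
    x_next = grid[np.argmin(mu - 1.96 * sigma)]
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, blackbox(x_next[0]))

x_best = X[np.argmin(y), 0]
```

Cost-aware acquisition functions, as in the Pareto-efficient paper above, additionally weigh each candidate's expected evaluation cost when choosing `x_next`.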

Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization

no code implementations • 23 Nov 2020 • Gauthier Guinet, Valerio Perrone, Cédric Archambeau

Bayesian optimization (BO) is a popular method to optimize expensive black-box functions.

Fair Bayesian Optimization

1 code implementation • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.


An interpretable latent variable model for attribute applicability in the Amazon catalogue

no code implementations • 30 Nov 2017 • Tammo Rukat, Dustin Lange, Cédric Archambeau

Learning attribute applicability of products in the Amazon catalog (e.g., predicting that a shoe should have a value for size, but not for battery-type) at scale is a challenge.

Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

no code implementations • 23 Dec 2015 • Rodolphe Jenatton, Jim Huang, Cédric Archambeau

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds.
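The long-term-constraint setting can be illustrated with a minimal primal-dual gradient sketch: a multiplier charges for accumulated violation of g(x) ≤ 0 instead of enforcing it every round. The fixed toy loss and constraint are hypothetical, and this is the basic non-adaptive scheme, not the paper's adaptive algorithm:

```python
def online_gd_long_term(T=200, eta=0.05):
    """Primal-dual online gradient descent for min f(x) s.t. g(x) <= 0
    in the long run, with toy f(x) = (x - 1)^2 and g(x) = x - 0.5."""
    x, lam, violation = 0.0, 0.0, 0.0
    for _ in range(T):
        grad_f = 2.0 * (x - 1.0)               # gradient of the round's loss
        grad_g = 1.0                           # gradient of the constraint
        x -= eta * (grad_f + lam * grad_g)     # primal descent step
        lam = max(0.0, lam + eta * (x - 0.5))  # dual ascent on the violation
        violation += max(0.0, x - 0.5)         # accumulated constraint violation
    return x, violation / T

x_final, avg_violation = online_gd_long_term()
```

Early rounds violate the constraint while the multiplier builds up; the iterate then settles near the constrained optimum x = 0.5, so the average violation vanishes as T grows.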

Online Inference for Relation Extraction with a Reduced Feature Set

no code implementations • 18 Apr 2015 • Maxim Rabinovich, Cédric Archambeau

Access to web-scale corpora is gradually bringing robust automatic knowledge base creation and extension within reach.

Relation Extraction • Variational Inference

Sparse probabilistic projections

no code implementations • NeurIPS 2008 • Cédric Archambeau, Francis R. Bach

We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases.
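The sparse-PCA special case can be illustrated with scikit-learn's `SparsePCA` (a penalized estimator, not the paper's generative model) on hypothetical toy data where two latent factors each drive a disjoint block of features:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
# Toy data: two latent factors, each driving a disjoint block of 3 features.
z = rng.normal(size=(200, 2))
W = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
X = z @ W + 0.1 * rng.normal(size=(200, 6))

spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
# The l1 penalty yields exact zeros on features a component does not use,
# unlike ordinary PCA, whose loadings are generically dense.
loadings = spca.components_
```

In the paper's probabilistic formulation, the sparsity pattern instead arises from the prior on the projection directions, which also covers sparse canonical correlation analysis.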

Variational Inference for Diffusion Processes

no code implementations • NeurIPS 2007 • Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, John S. Shawe-Taylor

Diffusion processes are a family of continuous-time continuous-state stochastic processes that are in general only partially observed.

Variational Inference
