Search Results for author: Michele Donini

Found 14 papers, 7 papers with code

Multi-objective Asynchronous Successive Halving

1 code implementation · 23 Jun 2021 · Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau

Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.

Fairness · Hyperparameter Optimization +2
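A minimal single-objective sketch of the successive-halving idea underlying this paper (the paper extends it to multiple objectives with asynchronous execution, which is not shown here); the toy objective, `eta`, and budget schedule are illustrative assumptions, not taken from the paper:

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2, rungs=3):
    """Successive halving sketch: evaluate every config at a small budget,
    keep the best 1/eta fraction, multiply the budget by eta, repeat."""
    budget = min_budget
    for _ in range(rungs):
        # Rank surviving configs at the current budget (lower score = better).
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = ranked[:max(1, len(ranked) // eta)]
        budget *= eta
    return configs[0]

# Toy objective (an assumption for illustration): validation error shrinks
# with budget, and the best learning rate sits near 0.1.
evaluate = lambda lr, budget: (lr - 0.1) ** 2 + 1.0 / budget

random.seed(0)
candidates = [random.uniform(0.001, 1.0) for _ in range(8)]
best = successive_halving(candidates, evaluate)
```

With 8 candidates and `eta=2`, three rungs whittle the pool 8 → 4 → 2 → 1 while spending most of the budget on promising configurations.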

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations · 8 Jun 2021 · Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

Fair Bayesian Optimization

1 code implementation · 9 Jun 2020 · Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with specialized fairness techniques to tune their hyperparameters.

Fairness
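To make the constrained-tuning idea concrete, here is a sketch that swaps the paper's constrained Bayesian optimization for a plain grid filter: keep only hyperparameter settings whose unfairness metric stays below a threshold, then pick the most accurate. The surrogate `accuracy`/`unfairness` functions and the threshold are invented for illustration:

```python
def fair_search(candidates, accuracy, unfairness, eps=0.05):
    """Constrained-HPO sketch: among settings whose unfairness metric
    stays below eps, return the one with the best accuracy."""
    feasible = [c for c in candidates if unfairness(c) <= eps]
    if not feasible:  # no setting satisfies the fairness constraint
        return None
    return max(feasible, key=accuracy)

# Toy surrogates (assumptions, not from the paper): stronger
# regularization trades accuracy for fairness.
accuracy = lambda reg: 0.9 - 0.3 * reg
unfairness = lambda reg: 0.2 - 0.18 * reg

grid = [i / 10 for i in range(11)]  # candidate regularization strengths
best = fair_search(grid, accuracy, unfairness)
```

The actual method replaces the exhaustive grid with a Bayesian optimizer that models the constraint, but the feasibility-then-performance selection logic is the same.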

MARTHE: Scheduling the Learning Rate Via Online Hypergradients

1 code implementation · 18 Oct 2019 · Michele Donini, Luca Franceschi, Massimiliano Pontil, Orchid Majumder, Paolo Frasconi

We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming at good generalization.

Hyperparameter Optimization
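A simplified sketch in the spirit of online hypergradient methods like MARTHE (not the paper's exact algorithm, which combines and generalizes existing hypergradient schemes): after each SGD step, nudge the learning rate by the gradient of the next-step loss with respect to the learning rate itself. The quadratic toy loss and the step size `beta` are assumptions for illustration:

```python
def adapt_lr(w, lr, beta=0.01, steps=50):
    """Online learning-rate adaptation via hypergradients on the
    toy loss f(w) = w^2 / 2, whose gradient is simply w."""
    grad = lambda w: w
    for _ in range(steps):
        g = grad(w)
        w_next = w - lr * g          # one SGD step with the current rate
        # Hypergradient: d f(w_next) / d lr = -g * grad(w_next)
        hypergrad = -g * grad(w_next)
        lr = max(1e-4, lr - beta * hypergrad)  # online rate update
        w = w_next
    return w, lr

w, lr = adapt_lr(w=1.0, lr=0.1)
```

On this convex toy problem the hypergradient is negative while the rate is below its optimum, so the schedule grows the learning rate as training progresses.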

Voting with Random Classifiers (VORACE): Theoretical and Experimental Analysis

no code implementations · 18 Sep 2019 · Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi

In many machine learning scenarios, looking for the best classifier that fits a particular dataset can be very costly in terms of time and resources.

Model Selection
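The core idea, voting over an ensemble of randomly generated classifiers, can be sketched in a few lines; the 1-D threshold classifiers below are a hypothetical profile space chosen for brevity, not the paper's construction:

```python
import random

def vorace_predict(x, classifiers):
    """Majority vote over an ensemble of randomly generated classifiers
    (an odd ensemble size avoids ties)."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Randomly generated 1-D threshold classifiers: each votes 1 if the
# feature exceeds its own random threshold in [0, 1].
random.seed(0)
classifiers = [
    (lambda t: (lambda x: int(x > t)))(random.uniform(0.0, 1.0))
    for _ in range(7)
]
```

The appeal is that no per-dataset model selection is needed: the randomly drawn members are cheap, and aggregation does the work.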

Learning Fair and Transferable Representations

no code implementations · NeurIPS 2020 · Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil

Developing learning methods which do not discriminate subgroups in the population is a central goal of algorithmic fairness.

Fairness

General Fair Empirical Risk Minimization

no code implementations · 29 Jan 2019 · Luca Oneto, Michele Donini, Massimiliano Pontil

We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes.

Fairness

Taking Advantage of Multitask Learning for Fair Classification

no code implementations · 19 Oct 2018 · Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil

In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally.

Classification · Decision Making +2

Empirical Risk Minimization under Fairness Constraints

2 code implementations · NeurIPS 2018 · Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil

The proposed fairness constraint encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.

Fairness
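That constraint, group-conditional risks that differ by at most a tolerance, is easy to check on per-example losses. The helper below is an illustrative sketch of the check only, not the paper's optimization method, and the loss values and tolerance are invented:

```python
def conditional_risks(losses, groups):
    """Average loss within each sensitive group."""
    per_group = {}
    for loss, g in zip(losses, groups):
        per_group.setdefault(g, []).append(loss)
    return {g: sum(v) / len(v) for g, v in per_group.items()}

def satisfies_fairness(losses, groups, eps=0.05):
    """Informal version of the constraint: group-conditional risks
    may differ by at most eps."""
    r = conditional_risks(losses, groups)
    return max(r.values()) - min(r.values()) <= eps

losses = [0.20, 0.24, 0.22, 0.21]  # toy per-example losses
groups = ["a", "a", "b", "b"]      # sensitive-group membership
```

In the paper this appears as a constraint inside empirical risk minimization rather than a post hoc check, but the quantity being controlled is the same.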

Forward and Reverse Gradient-Based Hyperparameter Optimization

2 code implementations · ICML 2017 · Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil

We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent.

Hyperparameter Optimization
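The forward-mode procedure can be shown on a one-dimensional toy problem: alongside each SGD step on a training loss, propagate the derivative of the weight with respect to the learning rate, then chain through the validation loss. The quadratic losses below are assumptions for illustration; a finite-difference check confirms the hypergradient:

```python
def hypergrad_forward(w0, lr, steps, a=2.0, b=1.5):
    """Forward-mode hypergradient sketch: train on (w - a)^2 / 2,
    propagate z = dw/d(lr) through every SGD step, then chain through
    the validation loss (w - b)^2 / 2."""
    w, z = w0, 0.0
    for _ in range(steps):
        g = w - a                  # training-loss gradient
        # Differentiate the update w' = w - lr * g with respect to lr:
        z = (1.0 - lr) * z - g
        w = w - lr * g
    return (w - b) * z             # chain rule through the validation loss

def hypergrad_fd(w0, lr, steps, a=2.0, b=1.5, h=1e-6):
    """Central finite-difference check of the same hypergradient."""
    def val(rate):
        w = w0
        for _ in range(steps):
            w = w - rate * (w - a)
        return 0.5 * (w - b) ** 2
    return (val(lr + h) - val(lr - h)) / (2 * h)
```

Reverse mode computes the same quantity by backpropagating through the unrolled updates instead, trading memory for the ability to handle many hyperparameters at once.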
