Search Results for author: Michele Donini

Found 22 papers, 9 papers with code

Forward and Reverse Gradient-Based Hyperparameter Optimization

2 code implementations • ICML 2017 • Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil

We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent.

Hyperparameter Optimization
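
For intuition, here is a toy sketch of the reverse-mode procedure described above: the validation loss is differentiated through an unrolled SGD run with respect to the learning rate. The quadratic losses, the 50-step horizon, and the use of PyTorch are illustrative assumptions, not the authors' code.

```python
# Toy sketch (assumptions: quadratic losses, fixed 50-step SGD unroll; not the
# paper's implementation). Reverse-mode hypergradient: differentiate the
# validation loss through the unrolled SGD trajectory w.r.t. the learning rate.
import torch

def train_loss(w):
    return ((w - 1.0) ** 2).sum()

def val_loss(w):
    return ((w + 0.5) ** 2).sum()

lr = torch.tensor(0.1, requires_grad=True)    # the hyperparameter
w = torch.zeros(3, requires_grad=True)        # initial weights
for _ in range(50):                           # unrolled SGD dynamics
    g = torch.autograd.grad(train_loss(w), w, create_graph=True)[0]
    w = w - lr * g

val_loss(w).backward()                        # reverse-mode sweep
print(lr.grad)                                # d(validation loss) / d(learning rate)
```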

Empirical Risk Minimization under Fairness Constraints

2 code implementations • NeurIPS 2018 • Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil

It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.

Fairness
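
As a rough illustration of the idea above, a constrained empirical risk minimization problem of this flavour can be written schematically as follows; the notation (loss ℓ, sensitive variable z, tolerance ε) is ours and not taken verbatim from the paper.

```latex
% Schematic form of ERM with an approximate fairness constraint: minimize the
% empirical risk while keeping the group-conditional risks within \epsilon of
% one another. Notation is illustrative, not copied from the paper.
\[
\begin{aligned}
\min_{f \in \mathcal{F}} \;\; & \hat{L}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}\ell\bigl(f(x_i),\,y_i\bigr) \\
\text{s.t.} \;\; & \bigl|\,\hat{L}_{s}(f) - \hat{L}_{s'}(f)\,\bigr| \;\le\; \epsilon
      \quad \text{for all sensitive groups } s, s', \\
\text{where} \;\; & \hat{L}_{s}(f) \;=\; \frac{1}{n_s}\sum_{i:\, z_i = s}\ell\bigl(f(x_i),\,y_i\bigr).
\end{aligned}
\]
```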

Taking Advantage of Multitask Learning for Fair Classification

no code implementations • 19 Oct 2018 • Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil

In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally.

Classification • Decision Making +2

General Fair Empirical Risk Minimization

no code implementations • 29 Jan 2019 • Luca Oneto, Michele Donini, Massimiliano Pontil

We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes.

Fairness • regression

Learning Fair and Transferable Representations

no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil

Developing learning methods which do not discriminate subgroups in the population is a central goal of algorithmic fairness.

Fairness

Voting with Random Classifiers (VORACE): Theoretical and Experimental Analysis

no code implementations • 18 Sep 2019 • Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi

In many machine learning scenarios, looking for the best classifier that fits a particular dataset can be very costly in terms of time and resources.

Model Selection
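
A minimal sketch of the voting idea suggested by the title above: instead of searching for a single best model, aggregate randomly configured base classifiers by majority vote. The classifier families, hyperparameter ranges, and scikit-learn usage are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a majority-vote ensemble built from randomly
# configured base classifiers, evaluated with cross-validation.
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

random.seed(0)

def random_classifier(i):
    # Sample a classifier type and its hyperparameters at random (ranges are arbitrary).
    choice = random.choice(["tree", "forest", "knn"])
    if choice == "tree":
        return (f"clf{i}", DecisionTreeClassifier(max_depth=random.randint(2, 10)))
    if choice == "forest":
        return (f"clf{i}", RandomForestClassifier(n_estimators=random.randint(10, 100)))
    return (f"clf{i}", KNeighborsClassifier(n_neighbors=random.randint(1, 15)))

X, y = make_classification(n_samples=500, random_state=0)
ensemble = VotingClassifier(estimators=[random_classifier(i) for i in range(7)],
                            voting="hard")   # majority vote over the random pool
print(cross_val_score(ensemble, X, y, cv=5).mean())
```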

MARTHE: Scheduling the Learning Rate Via Online Hypergradients

1 code implementation • 18 Oct 2019 • Michele Donini, Luca Franceschi, Massimiliano Pontil, Orchid Majumder, Paolo Frasconi

We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming at good generalization.

Hyperparameter Optimization • Scheduling
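
For flavour, the snippet below shows a generic online hypergradient-style learning-rate update on a toy quadratic, where the alignment of successive gradients drives the adaptation. MARTHE itself is more elaborate, so treat the update rule and constants as illustrative assumptions rather than the paper's algorithm.

```python
# Generic online hypergradient-style learning-rate adaptation on a toy
# quadratic (not MARTHE itself): increase the learning rate when successive
# gradients align, decrease it when they oppose each other.
import numpy as np

def grad(w):                      # gradient of a toy quadratic loss
    return 2.0 * (w - 1.0)

w = np.zeros(3)
lr, beta = 0.05, 0.001            # learning rate and hyper-learning rate (arbitrary)
prev_g = np.zeros_like(w)

for t in range(100):
    g = grad(w)
    # Online hypergradient signal: dot product of successive gradients.
    lr = max(lr + beta * float(g @ prev_g), 1e-6)
    w = w - lr * g
    prev_g = g

print(lr, w)
```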

Fair Bayesian Optimization

no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.

Bayesian Optimization • Fairness

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

Multi-objective Asynchronous Successive Halving

2 code implementations • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau

Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.

Fairness • Hyperparameter Optimization +3
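
As a rough, simplified illustration of the technique named in the title, the sketch below runs a synchronous successive-halving loop that keeps the non-dominated (Pareto-optimal) configurations across two objectives at each rung. The paper's method is asynchronous; the toy evaluate(), budgets, and size cap here are assumptions for illustration only.

```python
# Simplified, synchronous sketch of multi-objective successive halving.
# evaluate() is a stand-in for training a configuration under a budget and
# returning two objectives to minimize (e.g., validation error, fairness gap).
import random

random.seed(0)

def evaluate(config, budget):
    random.seed((hash(config) + budget) % 2**32)
    return (random.random() / budget, random.random())

def dominated(a, b):
    # True if objective vector a is dominated by b (minimization).
    return all(bj <= aj for aj, bj in zip(a, b)) and any(bj < aj for aj, bj in zip(a, b))

def pareto_front(scored):
    return [c for c, s in scored
            if not any(dominated(s, s2) for _, s2 in scored)]

configs = [("lr", round(random.uniform(1e-4, 1e-1), 5)) for _ in range(27)]
budget, eta = 1, 3
while len(configs) > 1:
    scored = [(c, evaluate(c, budget)) for c in configs]
    survivors = pareto_front(scored)                   # keep non-dominated configs
    configs = survivors[: max(1, len(configs) // eta)] # cap so the pool still shrinks
    budget *= eta                                      # promote survivors to a larger budget

print(configs)
```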

Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

no code implementations • 26 Nov 2021 • David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi

With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial.

BIG-bench Machine Learning

More Than Words: Towards Better Quality Interpretations of Text Classifiers

no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi

The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.

Feature Importance • Sentence

Efficient fair PCA for fair representation learning

1 code implementation • 26 Feb 2023 • Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar

We revisit the problem of fair principal component analysis (PCA), where the goal is to learn the best low-rank linear approximation of the data that obfuscates demographic information.

Representation Learning
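
One simple recipe in the spirit of the goal above (a hedged sketch, not necessarily the paper's algorithm): remove the feature direction most correlated with the sensitive attribute, then apply standard PCA. The synthetic data and binary sensitive attribute z are assumptions.

```python
# Illustrative sketch of a simple "fair PCA"-style pipeline: project out the
# direction most correlated with the sensitive attribute, then take the top-k
# principal components of what remains.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                    # synthetic features
z = rng.integers(0, 2, size=200)                  # synthetic binary sensitive attribute

Xc = X - X.mean(axis=0)
zc = (z - z.mean()).astype(float)

# Direction in feature space most correlated with z.
d = Xc.T @ zc
d /= np.linalg.norm(d)

# Remove that direction so the representation carries less demographic
# information, then compute the top-k principal components via SVD.
X_fair = Xc - np.outer(Xc @ d, d)
k = 3
_, _, Vt = np.linalg.svd(X_fair, full_matrices=False)
X_embedded = X_fair @ Vt[:k].T                    # low-rank "fair" representation
print(X_embedded.shape)                           # (200, 3)
```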

Explaining Probabilistic Models with Distributional Values

no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

We argue that often there is a critical mismatch between what one wishes to explain (e.g., the output of a classifier) and what current methods such as SHAP explain (e.g., the scalar probability of a class).
