2 code implementations • ICML 2017 • Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil
We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent.
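As a minimal sketch of the reverse-mode variant, one can differentiate the validation error through an unrolled inner optimization with an autodiff framework. Everything below (the toy ridge objective, `train_loss`, `unrolled_val_loss`, the hyperparameter `lam`) is illustrative, not the authors' code:

```python
import jax
import jax.numpy as jnp

def train_loss(w, lam, X, y):
    # inner (training) objective: ridge-regularized squared error
    return jnp.mean((X @ w - y) ** 2) + lam * jnp.sum(w ** 2)

def unrolled_val_loss(lam, X, y, Xv, yv, steps=50, lr=0.1):
    # run a fixed number of gradient steps on the inner problem,
    # keeping the computation's dependence on the hyperparameter lam
    w = jnp.zeros(X.shape[1])
    g = jax.grad(train_loss)
    for _ in range(steps):
        w = w - lr * g(w, lam, X, y)
    # outer objective: validation error of the trained weights
    return jnp.mean((Xv @ w - yv) ** 2)

# reverse-mode differentiation through the unrolled dynamics yields the
# gradient of the validation error with respect to lam (the hypergradient)
hypergrad = jax.grad(unrolled_val_loss)

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
X, Xv = jax.random.normal(k1, (32, 5)), jax.random.normal(k2, (16, 5))
y, yv = X @ jnp.ones(5), Xv @ jnp.ones(5)
print(hypergrad(0.1, X, y, Xv, yv))
```

The forward-mode procedure instead propagates the derivative of the weights with respect to the hyperparameter alongside the iterates, trading memory for compute; `jax.jacfwd` applied to the same function realizes it here.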
1 code implementation • 18 Dec 2017 • Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil
We consider a class of nested optimization problems involving inner and outer objectives.
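Written in a standard bilevel form (with generic symbols: λ for the outer variables, w for the inner ones, E and L_λ for the outer and inner objectives; this is the usual formulation of such problems, not necessarily the paper's exact notation):

```latex
\min_{\lambda \in \Lambda} \; E(w_\lambda)
\qquad \text{subject to} \qquad
w_\lambda \in \operatorname*{arg\,min}_{w} \, L_\lambda(w)
```

Hyperparameter optimization and learning-to-learn both instantiate this template, differing in what λ parametrizes.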
2 code implementations • NeurIPS 2018 • Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil
It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.
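Concretely, with a loss ℓ, predictor f, and sensitive variable s, a constraint of this kind can be written as asking the group-conditional risks to be within a tolerance ε of each other (notation generic, not necessarily the paper's):

```latex
\bigl| \, \mathbb{E}\left[ \ell(f(x), y) \mid s = a \right]
       - \mathbb{E}\left[ \ell(f(x), y) \mid s = b \right] \bigr| \le \epsilon
\qquad \text{for all groups } a, b .
```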
no code implementations • 19 Oct 2018 • Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil
In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally.
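A minimal sketch of the general idea, assuming a linear model and a simple squared group-risk-gap penalty (the estimator here is illustrative, not the paper's method): the sensitive feature s enters only the training objective, never the model's inputs.

```python
import jax
import jax.numpy as jnp

def predict(w, X):
    # the functional form of the model never sees the sensitive feature s
    return X @ w

def objective(w, X, y, s, alpha=1.0):
    losses = (predict(w, X) - y) ** 2
    # per-group average risks; s is used only here, at training time
    risk0 = jnp.sum(jnp.where(s == 0, losses, 0.0)) / jnp.sum(s == 0)
    risk1 = jnp.sum(jnp.where(s == 1, losses, 0.0)) / jnp.sum(s == 1)
    # penalize unequal group risks on top of the usual empirical risk
    return jnp.mean(losses) + alpha * (risk0 - risk1) ** 2

grad_fn = jax.grad(objective)  # train w with any gradient method
```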
no code implementations • 29 Jan 2019 • Luca Oneto, Michele Donini, Massimiliano Pontil
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes.
no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil
Developing learning methods which do not discriminate subgroups in the population is a central goal of algorithmic fairness.
no code implementations • 18 Sep 2019 • Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi
In many machine learning scenarios, looking for the best classifier that fits a particular dataset can be very costly in terms of time and resources.
no code implementations • 25 Sep 2019 • Michele Donini, Luca Franceschi, Orchid Majumder, Massimiliano Pontil, Paolo Frasconi
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization.
1 code implementation • 18 Oct 2019 • Michele Donini, Luca Franceschi, Massimiliano Pontil, Orchid Majumder, Paolo Frasconi
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming at good generalization.
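A minimal sketch of the setting, assuming the entire schedule (one learning rate per step) is treated as a hyperparameter vector and differentiated through the unrolled SGD updates; this shows the problem formulation, not the authors' online algorithm:

```python
import jax
import jax.numpy as jnp

def val_after_training(etas, w0, X, y, Xv, yv):
    grad_train = jax.grad(lambda w: jnp.mean((X @ w - y) ** 2))
    w = w0
    for t in range(len(etas)):            # one learning rate per step
        w = w - etas[t] * grad_train(w)
    return jnp.mean((Xv @ w - yv) ** 2)   # validation error

# gradient of the validation error with respect to the whole schedule
schedule_grad = jax.grad(val_after_training)
```

Gradient steps using `schedule_grad` then adapt each per-step rate toward better generalization.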
no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau
Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.
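A minimal sketch of the constrained-search idea using plain random search (the paper works with Bayesian optimization rather than random search; `train_and_eval` is a hypothetical user-supplied function returning an accuracy and a fairness violation for a configuration):

```python
import random

def fair_random_search(train_and_eval, n_trials=50, eps=0.05):
    # keep the most accurate configuration whose fairness violation
    # stays below the threshold eps
    best = None
    for _ in range(n_trials):
        cfg = {"lr": 10 ** random.uniform(-4, -1),
               "l2": 10 ** random.uniform(-6, -2)}
        accuracy, violation = train_and_eval(cfg)
        if violation <= eps and (best is None or accuracy > best[0]):
            best = (accuracy, cfg)
    return best
```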
no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Giulia Luise, Carlo Ciliberto, Andreas Maurer, Massimiliano Pontil
One way to reach this goal is by modifying the data representation in order to meet certain fairness constraints.
no code implementations • 15 Dec 2020 • Valerio Perrone, Huibin Shen, Aida Zolic, Iaroslav Shcherbatyi, Amr Ahmed, Tanya Bansal, Michele Donini, Fela Winkelmolen, Rodolphe Jenatton, Jean Baptiste Faddoul, Barbara Pogorzelska, Miroslav Miladinovic, Krishnaram Kenthapadi, Matthias Seeger, Cédric Archambeau
To democratize access to machine learning systems, it is essential to automate the tuning.
no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi
With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.
2 code implementations • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau
Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.
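When several objectives are tuned jointly (say, validation error and a fairness violation, both to be minimized), the multi-objective view returns the set of non-dominated configurations rather than a single best one. A minimal sketch of that filtering step (an illustration only, not the paper's scheduling algorithm):

```python
def pareto_front(points):
    """points: list of (objective_1, objective_2) pairs, both minimized."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# e.g., (validation error, fairness violation) per configuration
print(pareto_front([(0.10, 0.30), (0.12, 0.10), (0.15, 0.40), (0.11, 0.05)]))
# -> [(0.1, 0.3), (0.11, 0.05)]
```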
1 code implementation • 7 Sep 2021 • Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi
We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions.
no code implementations • 26 Nov 2021 • David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi
With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial.
no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi
The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.
no code implementations • 21 Mar 2022 • Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, François-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cédric Archambeau
In this work we propose a model-agnostic algorithm that generates counterfactual ensemble explanations for time series anomaly detection models.
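As a minimal sketch of one way to produce a single counterfactual for a differentiable anomaly scorer: perturb the flagged window as little as possible until its score falls below the detection threshold. `score_fn`, `threshold`, and the penalty weight are illustrative stand-ins, and this gradient-based search differs from the paper's algorithm, which is model-agnostic and produces ensembles of counterfactuals.

```python
import jax
import jax.numpy as jnp

def counterfactual(score_fn, x, threshold, lam=0.1, lr=0.05, steps=200):
    def loss(xc):
        # trade off the anomaly score against proximity to the original window
        return score_fn(xc) + lam * jnp.sum((xc - x) ** 2)
    step = jax.grad(loss)
    xc = x
    for _ in range(steps):
        xc = xc - lr * step(xc)
        if score_fn(xc) < threshold:      # no longer flagged as anomalous
            break
    return xc
```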
1 code implementation • 8 Feb 2023 • Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, Andrew Gordon Wilson, Cédric Archambeau
We present Fortuna, an open-source library for uncertainty quantification in deep learning.
1 code implementation • 26 Feb 2023 • Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar
We revisit the problem of fair principal component analysis (PCA), where the goal is to learn the best low-rank linear approximation of the data that obfuscates demographic information.
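A minimal sketch of a simple baseline in this spirit for two groups: project out the direction along which the group means differ, then run ordinary PCA on what remains (an illustration of the goal, not the paper's exact algorithms):

```python
import numpy as np

def fair_pca(X, s, k):
    """X: (n, d) data; s: (n,) binary group labels; k: target rank."""
    X = X - X.mean(axis=0)
    # the direction separating the group means carries demographic signal
    d = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
    d = d / np.linalg.norm(d)
    X_deb = X - np.outer(X @ d, d)        # remove that direction
    # standard PCA on the debiased data
    _, _, Vt = np.linalg.svd(X_deb, full_matrices=False)
    return Vt[:k]                          # top-k principal directions
```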
1 code implementation • 23 Oct 2023 • Pola Schwöbel, Jacek Golebiowski, Michele Donini, Cédric Archambeau, Danish Pruthi
Large language models (LLMs) encode vast amounts of world knowledge.
no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
We argue that often there is a critical mismatch between what one wishes to explain (e.g., the output of a classifier) and what current methods such as SHAP explain (e.g., the scalar probability of a class).