no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli
In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.
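The problem referenced here appears to be prediction regressions introduced by model updates. As a hedged illustration (all names hypothetical, not the paper's method), the sketch below measures "negative flips" in adversarial robustness between an old and an updated model, i.e., inputs that withstood an attack before the update but not after:

```python
import numpy as np

def robustness_negative_flips(old_robust, new_robust):
    """Fraction of samples that withstood the attack under the old
    model but no longer do under the updated one (a 'negative flip').

    old_robust, new_robust: boolean arrays, True when the model's
    prediction is unchanged under the adversarial perturbation.
    """
    old_robust = np.asarray(old_robust, dtype=bool)
    new_robust = np.asarray(new_robust, dtype=bool)
    return np.mean(old_robust & ~new_robust)

# Toy usage: per-sample robustness flags for two model versions.
old = np.array([True, True, False, True])
new = np.array([True, False, False, False])
print(robustness_negative_flips(old, new))  # 0.5
```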
no code implementations • 31 Dec 2020 • Luca Oneto, Silvia Chiappa
Machine learning based systems are reaching society at large and in many aspects of everyday life.
no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Giulia Luise, Carlo Ciliberto, Andreas Maurer, Massimiliano Pontil
One way to reach this goal is by modifying the data representation in order to meet certain fairness constraints.
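One very simple instance of modifying the representation (a toy stand-in for illustration, not the method proposed in the paper) is to shift each sensitive group's features to the pooled mean, so that group membership is no longer encoded in first moments:

```python
import numpy as np

def group_mean_center(X, s):
    """Shift each sensitive group's features to the pooled mean,
    so group membership is no longer encoded in first moments.
    A crude stand-in for learning a fair representation."""
    X = np.asarray(X, dtype=float).copy()
    pooled = X.mean(axis=0)
    for g in np.unique(s):
        mask = (s == g)
        X[mask] += pooled - X[mask].mean(axis=0)
    return X

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
s = np.array([0, 0, 1, 1])
Z = group_mean_center(X, s)
print(Z[s == 0].mean(axis=0), Z[s == 1].mean(axis=0))  # equal group means
```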
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of learning an optimal regression function subject to a fairness constraint.
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
It demands that the distribution of the predicted output be independent of the sensitive attribute.
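This criterion is demographic parity for regression. A minimal sketch of how one might quantify its violation, using the two-sample Kolmogorov-Smirnov distance between group-wise prediction distributions (an illustrative check, not the estimator from the paper):

```python
import numpy as np
from scipy.stats import ks_2samp

def demographic_parity_gap(pred, s):
    """Kolmogorov-Smirnov distance between the prediction
    distributions of the two sensitive groups; 0 means the
    predicted output is distributed independently of s."""
    a, b = pred[s == 0], pred[s == 1]
    return ks_2samp(a, b).statistic

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
pred = rng.normal(loc=0.3 * s, scale=1.0)  # group-dependent predictions
print(demographic_parity_gap(pred, s))
```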
no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil
Developing learning methods which do not discriminate subgroups in the population is a central goal of algorithmic fairness.
1 code implementation • NeurIPS 2019 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of fair binary classification using the notion of Equal Opportunity.
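Equal Opportunity requires the true-positive rate to be the same across sensitive groups; a minimal sketch of the corresponding gap (illustrative only, binary groups assumed):

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, s):
    """Absolute difference in true-positive rates between the
    two sensitive groups (0 means Equal Opportunity holds)."""
    tpr = []
    for g in (0, 1):
        pos = (s == g) & (y_true == 1)
        tpr.append(np.mean(y_pred[pos] == 1))
    return abs(tpr[0] - tpr[1])

y_true = np.array([1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1])
s      = np.array([0, 0, 1, 1, 0, 1])
print(equal_opportunity_gap(y_true, y_pred, s))  # |0.5 - 1.0| = 0.5
```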
no code implementations • 29 Jan 2019 • Luca Oneto, Michele Donini, Massimiliano Pontil
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes.
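With a continuous sensitive attribute, group-wise rates are unavailable; one common relaxation (used here purely for illustration, not necessarily the paper's formulation) penalizes the empirical covariance between predictions and the sensitive attribute:

```python
import numpy as np

def fair_ridge(X, y, s, lam=0.1, mu=10.0, lr=0.01, steps=2000):
    """Linear regression with a squared-covariance penalty between
    predictions and a continuous sensitive attribute s.
    Objective: ||Xw - y||^2/n + lam*||w||^2 + mu*cov(Xw, s)^2."""
    n, d = X.shape
    sc = s - s.mean()
    w = np.zeros(d)
    for _ in range(steps):
        pred = X @ w
        cov = sc @ pred / n
        grad = (2 / n) * X.T @ (pred - y) + 2 * lam * w \
             + 2 * mu * cov * (X.T @ sc) / n
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
s = X[:, 0] + 0.1 * rng.normal(size=200)   # sensitive, correlated with X[:,0]
y = X @ np.array([1.0, 0.5, -0.5]) + 0.1 * rng.normal(size=200)
w = fair_ridge(X, y, s)
print(w, np.cov(X @ w, s)[0, 1])  # covariance driven toward 0
```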
no code implementations • 19 Oct 2018 • Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil
In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally.
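A minimal sketch of the general recipe (hypothetical names, not the paper's algorithm): the classifier's input excludes the sensitive feature, which enters only through a fairness penalty at training time:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, s, mu=5.0, lr=0.1, steps=3000):
    """Logistic regression that never takes s as input; s enters
    training only through a penalty on the squared gap between
    the groups' mean predicted scores."""
    n, d = X.shape
    w = np.zeros(d)
    m0, m1 = (s == 0), (s == 1)
    for _ in range(steps):
        p = sigmoid(X @ w)
        gap = p[m0].mean() - p[m1].mean()
        dp = p * (1 - p)                       # derivative of the sigmoid
        dgap = (X[m0] * dp[m0][:, None]).mean(axis=0) \
             - (X[m1] * dp[m1][:, None]).mean(axis=0)
        w -= lr * (X.T @ (p - y) / n + 2 * mu * gap * dgap)
    return w

rng = np.random.default_rng(1)
s = rng.integers(0, 2, size=300)
X = rng.normal(size=(300, 2))
X[:, 0] += 0.5 * s                             # feature correlated with s
y = (X[:, 1] > 0).astype(float)
w = train_fair_logreg(X, y, s)                 # s never appears in X @ w
```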
2 code implementations • NeurIPS 2018 • Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil
It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.
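Loosely restated, the constraint asks that the risk conditioned on the sensitive variable be nearly equal across groups. A hedged sketch of how one might measure the gap (the hinge loss and the restriction to the positive class are assumptions of this sketch, not taken from the abstract):

```python
import numpy as np

def conditional_risk_gap(scores, y, s):
    """Difference between the groups' average hinge losses on the
    positive class; the constraint asks this to stay below eps."""
    margins = scores * (2 * y - 1)             # signed margins
    loss = np.maximum(0.0, 1.0 - margins)
    g0 = (s == 0) & (y == 1)
    g1 = (s == 1) & (y == 1)
    return abs(loss[g0].mean() - loss[g1].mean())

scores = np.array([2.0, 0.5, -1.0, 1.5, -0.2])
y      = np.array([1,   1,    0,   1,   1])
s      = np.array([0,   0,    1,   1,   1])
print(conditional_risk_gap(scores, y, s))  # |0.25 - 0.6| = 0.35
```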
no code implementations • NeurIPS 2011 • Luca Oneto, Davide Anguita, Alessandro Ghio, Sandro Ridella
We derive new generalization bounds, based on Rademacher complexity theory, for model selection and error estimation of linear (kernel) classifiers; the bounds exploit the availability of unlabeled samples.
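A key point is that the empirical Rademacher complexity of a norm-bounded linear class depends only on the inputs, not on the labels, so it can be estimated from unlabeled data. A Monte Carlo sketch (illustrative, not the bounds derived in the paper):

```python
import numpy as np

def rademacher_linear(X, B=1.0, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    of {x -> <w, x> : ||w|| <= B} on the (unlabeled) sample X:
    R_hat = (B/n) * E_sigma || sum_i sigma_i x_i ||."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        vals.append(np.linalg.norm(sigma @ X))
    return B * np.mean(vals) / n

X = np.random.default_rng(0).normal(size=(500, 10))
print(rademacher_linear(X))  # no labels needed
```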