Search Results for author: Luca Oneto

Found 11 papers, 2 papers with code

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates

no code implementations 27 Feb 2024 Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli

In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.

Adversarial Robustness, Regression
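
The abstract's claim concerns robustness regressions across model updates: inputs that an old model handles robustly can become vulnerable after the update. A minimal sketch of measuring such "robustness negative flips", assuming placeholder callables attack, old_model, and new_model (not an API from the paper):

```python
# Hypothetical sketch: count inputs robust under the old model but broken
# by the updated one. `attack(model, x, y)` is assumed to return
# adversarially perturbed inputs; all names here are illustrative.
import numpy as np

def robust_correct(model, attack, x, y):
    """True where the model still predicts y after x is attacked."""
    x_adv = attack(model, x, y)
    return model(x_adv) == y

def robustness_negative_flip_rate(old_model, new_model, attack, x, y):
    """Fraction of inputs robust under old_model but not under new_model."""
    old_ok = robust_correct(old_model, attack, x, y)
    new_ok = robust_correct(new_model, attack, x, y)
    return float(np.mean(old_ok & ~new_ok))
```

A non-zero rate means the update traded away robustness on some inputs even if aggregate robust accuracy improved.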

Fairness in Machine Learning

no code implementations 31 Dec 2020 Luca Oneto, Silvia Chiappa

Machine learning based systems are reaching society at large and in many aspects of everyday life.

BIG-bench Machine Learning, Fairness

Learning Fair and Transferable Representations

no code implementations NeurIPS 2020 Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil

Developing learning methods which do not discriminate subgroups in the population is a central goal of algorithmic fairness.

Fairness

General Fair Empirical Risk Minimization

no code implementations 29 Jan 2019 Luca Oneto, Michele Donini, Massimiliano Pontil

We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes.

Fairness, Regression
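
To illustrate the setting (the notation below is mine, not necessarily the paper's): fairness-aware regression can be phrased as empirical risk minimization under a constraint linking the predictor's output to the sensitive attribute s; for a continuous s, a bound on the empirical covariance is one natural stand-in for the usual group-wise conditions:

```latex
% Illustrative fairness-constrained ERM template (notation not the paper's).
\min_{f \in \mathcal{F}} \; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)
\quad \text{s.t.} \quad
\big| \widehat{\mathrm{Cov}}\big(f(x), s\big) \big| \le \epsilon
% For a discrete s this reduces to bounding gaps between the
% group-conditional averages of f(x).
```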

Taking Advantage of Multitask Learning for Fair Classification

no code implementations 19 Oct 2018 Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil

In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally.

Classification, Decision Making, +2
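
A minimal sketch of the recipe this abstract alludes to, under my own assumptions rather than the paper's method: the sensitive attribute s enters the training objective as a penalty but is never an input to the learned model, so prediction treats individuals identically regardless of s:

```python
# Illustrative only (not the paper's algorithm): logistic regression where
# a 0/1 sensitive attribute s shapes the loss but is not a model feature.
import numpy as np

def train_fair_logreg(X, y, s, lam=1.0, lr=0.1, epochs=500):
    """X must exclude the sensitive column; s is used only during training."""
    w = np.zeros(X.shape[1])
    # gap between group means of the inputs, reused in the penalty gradient
    dmu = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
        gap = (X[s == 1] @ w).mean() - (X[s == 0] @ w).mean()
        grad += lam * 2.0 * gap * dmu         # gradient of (score gap)^2
        w -= lr * grad
    return w  # at test time, f(x) = x @ w needs no access to s
```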

Empirical Risk Minimization under Fairness Constraints

2 code implementations NeurIPS 2018 Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil

It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.

Fairness
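
The constraint described in the abstract can be written out directly (my rendering; epsilon is a tolerance and ell a loss function):

```latex
% Group-conditional risks of f should approximately coincide.
\big|\, R(f \mid s = a) - R(f \mid s = b) \,\big| \le \epsilon
\quad \text{for all groups } a, b,
\qquad
R(f \mid s = a) := \mathbb{E}\big[\ell(f(x), y) \mid s = a\big]
```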

The Impact of Unlabeled Patterns in Rademacher Complexity Theory for Kernel Classifiers

no code implementations NeurIPS 2011 Luca Oneto, Davide Anguita, Alessandro Ghio, Sandro Ridella

We derive here new generalization bounds, based on Rademacher Complexity theory, for model selection and error estimation of linear (kernel) classifiers, which exploit the availability of unlabeled samples.

Generalization Bounds, Model Selection
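
For context, the classical Rademacher-based bound that such results build on (standard form, not the paper's refinement) reads: with probability at least 1 - delta, for every f in the class F,

```latex
L(f) \;\le\; \widehat{L}(f) \;+\; 2\,\widehat{\mathfrak{R}}_n(\mathcal{F})
  \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}},
\qquad
\widehat{\mathfrak{R}}_n(\mathcal{F})
  = \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}}
      \tfrac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i)\Big]
% The sigma_i are independent uniform +-1 signs.
```

Note that the complexity term depends only on the inputs x_i and not on the labels, which is what makes unlabeled samples usable for estimating it.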
