Search Results for author: Catuscia Palamidessi

Found 23 papers, 10 papers with code

Causal Discovery Under Local Privacy

no code implementations • 7 Nov 2023 • Rūta Binkytė, Carlos Pinzón, Szilvia Lestyán, Kangsoo Jung, Héber H. Arcolezi, Catuscia Palamidessi

Differential privacy is based on the application of controlled noise at the interface between the server, which stores and processes the data, and the data consumers.

Causal Discovery
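
For illustration, here is a minimal sketch of the noise-at-the-interface idea the snippet describes: the textbook Laplace mechanism, where the server answers a counting query with calibrated noise. The function name `laplace_count` and the toy data are our own; the paper itself studies the local model, where noise is instead applied on the client side before the data reach the server.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)
records = [12, 45, 7, 33, 29]  # raw data held by the server
print(laplace_count(records, lambda x: x > 20, epsilon=0.5, rng=rng))
```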

Online Sensitivity Optimization in Differentially Private Learning

no code implementations • 2 Oct 2023 • Filippo Galli, Catuscia Palamidessi, Tommaso Cucinotta

Training differentially private machine learning models requires constraining an individual's contribution to the optimization process.
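
For context, a minimal sketch of how that contribution is usually constrained in DP-SGD (clip each per-example gradient to a fixed norm, average, then add noise calibrated to that bound). This shows only the fixed-threshold baseline; the paper's online optimization of the sensitivity (the clipping threshold) is not reproduced here, and all names and constants below are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD-style update: clipping bounds each example's sensitivity,
    and Gaussian noise scaled to that bound hides any single contribution."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]  # stand-in per-example gradients
params = dp_sgd_step(params, grads, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=rng)
print(params)
```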

Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond

no code implementations • 1 Sep 2023 • Filippo Galli, Kangsoo Jung, Sayan Biswas, Catuscia Palamidessi, Tommaso Cucinotta

Federated learning (FL) was proposed as a stepping stone towards privacy-preserving machine learning, but it has been shown to be vulnerable to issues such as leakage of private information, lack of personalization of the model, and the possibility of producing a trained model that is fairer to some groups than to others.

Fairness, Personalized Federated Learning, +1

On the Utility Gain of Iterative Bayesian Update for Locally Differentially Private Mechanisms

1 code implementation • 15 Jul 2023 • Héber H. Arcolezi, Selene Cerna, Catuscia Palamidessi

This paper investigates the utility gain of using the Iterative Bayesian Update (IBU) for private discrete distribution estimation from data obfuscated with Locally Differentially Private (LDP) mechanisms.

Privacy Preserving
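
A minimal sketch of IBU under its standard formulation: an expectation-maximization iteration that inverts the known LDP channel. The generalized-randomized-response channel and the function `ibu` below are our illustrative choices, not the paper's code.

```python
import numpy as np

def ibu(channel, obs_freq, iters=100):
    """Iterative Bayesian Update estimate of the true distribution.

    channel[y, x] = P(report y | true value x); obs_freq[y] = empirical
    frequency of report y. Returns an estimate of P(true value = x).
    """
    k = channel.shape[1]
    q = np.full(k, 1.0 / k)             # uniform initial guess
    for _ in range(iters):
        posterior = channel * q         # (y, x): q_t(x) * M[y, x]
        posterior /= posterior.sum(axis=1, keepdims=True)
        q = obs_freq @ posterior        # E-step and M-step in one line
    return q

# Example with a 3-ary generalized randomized response channel.
eps, k = 1.0, 3
p = np.exp(eps) / (np.exp(eps) + k - 1)
channel = np.full((k, k), (1 - p) / (k - 1)) + np.eye(k) * (p - (1 - p) / (k - 1))
true_dist = np.array([0.6, 0.3, 0.1])
obs_freq = channel @ true_dist          # expected report frequencies
print(ibu(channel, obs_freq))           # should approach true_dist
```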

BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables

1 code implementation • 6 Jul 2023 • Ruta Binkyte, Daniele Gorla, Catuscia Palamidessi

We consider the problem of unfair discrimination between two groups and propose a pre-processing method to achieve fairness.

Attribute, Fairness

(Local) Differential Privacy has NO Disparate Impact on Fairness

1 code implementation • 25 Apr 2023 • Héber H. Arcolezi, Karima Makhlouf, Catuscia Palamidessi

However, as the collection of multiple sensitive attributes becomes more prevalent across various industries, collecting a single sensitive attribute under LDP may not be sufficient.

Attribute, Fairness, +1
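
As a hedged illustration of the setting: generalized randomized response (GRR) is a standard single-attribute LDP mechanism, and splitting the privacy budget across attributes is the naive multi-attribute baseline that this line of work seeks to improve on. The attribute names and budget split below are ours, not the paper's.

```python
import numpy as np

def grr_report(value, domain, epsilon, rng):
    """Generalized randomized response: report the true value with
    probability e^eps / (e^eps + k - 1), otherwise a uniform other value."""
    k = len(domain)
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return value
    others = [v for v in domain if v != value]
    return others[rng.integers(len(others))]

# Naive multi-attribute baseline: split eps_total evenly across attributes.
rng = np.random.default_rng(1)
attributes = {"age_band": ["<30", "30-60", ">60"], "smoker": ["yes", "no"]}
eps_total = 2.0
record = {"age_band": "30-60", "smoker": "no"}
report = {a: grr_report(record[a], dom, eps_total / len(attributes), rng)
          for a, dom in attributes.items()}
print(report)
```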

Survey on Fairness Notions and Related Tensions

no code implementations • 16 Sep 2022 • Guilherme Alves, Fabien Bernier, Miguel Couceiro, Karima Makhlouf, Catuscia Palamidessi, Sami Zhioua

The fairness requirements to be satisfied while learning models have created several types of tension, both among the different notions of fairness themselves and between fairness and other desirable properties such as privacy and classification accuracy.

Fairness

Causal Discovery for Fairness

no code implementations • 14 Jun 2022 • Rūta Binkytė-Sadauskienė, Karima Makhlouf, Carlos Pinzón, Sami Zhioua, Catuscia Palamidessi

Existing causal approaches to fairness in the literature assume that the causal model is available and do not address the problem of discovering it from data.

Attribute, Causal Discovery, +1

Group privacy for personalized federated learning

no code implementations • 7 Jun 2022 • Filippo Galli, Sayan Biswas, Kangsoo Jung, Tommaso Cucinotta, Catuscia Palamidessi

To protect the privacy of the clients while allowing for personalized model training that enhances the fairness and utility of the system, we propose a method that provides group privacy guarantees by exploiting some key properties of $d$-privacy, enabling personalized models under the framework of FL.

Fairness, Personalized Federated Learning
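
A one-dimensional sketch of the $d$-privacy guarantee being exploited: outputs for inputs $x, x'$ differ in distribution by at most a factor $e^{\varepsilon d(x,x')}$, which for the Euclidean metric on the reals is achieved by Laplace noise of scale $1/\varepsilon$. The paper applies this to model parameters in FL; the scalar example below is only illustrative.

```python
import numpy as np

def d_private_release(x, epsilon, rng):
    """Metric ("d-privacy") Laplace mechanism on the reals: outputs for
    inputs x, x' are indistinguishable up to a factor exp(eps * |x - x'|),
    so nearby values receive strong protection that degrades gracefully."""
    return x + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(2)
client_updates = [0.8, 0.9, 5.0]  # stand-in scalar model parameters
print([d_private_release(u, epsilon=2.0, rng=rng) for u in client_updates])
```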

Leveraging Adversarial Examples to Quantify Membership Information Leakage

1 code implementation CVPR 2022 Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida

The use of personal data for training machine learning systems comes with a privacy threat, and measuring the level of privacy of a model is one of the major challenges in machine learning today.

BIG-bench Machine Learning
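
A simplified sketch of the paper's intuition, under our own toy setup: for a linear model, the smallest adversarial perturbation that flips the prediction has a closed form (the distance to the decision boundary), and training points tend to sit farther from the boundary than fresh points, so thresholding that distance yields a membership score. On a small linear example the gap is modest; the paper demonstrates the effect on deep networks with crafted adversarial examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Small training set so the fitted boundary adapts to the members.
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(-1, 1, (8, 2)), rng.normal(1, 1, (8, 2))])
y_train = np.array([0] * 8 + [1] * 8)
clf = LogisticRegression(C=100.0).fit(X_train, y_train)

def boundary_distance(model, X):
    """For a linear model, the minimal adversarial perturbation that flips
    the prediction is the distance to the hyperplane: |w.x + b| / ||w||."""
    w, b = model.coef_[0], model.intercept_[0]
    return np.abs(X @ w + b) / np.linalg.norm(w)

# Fresh points from the same distribution play the role of non-members.
X_out = np.vstack([rng.normal(-1, 1, (8, 2)), rng.normal(1, 1, (8, 2))])
print("members    :", boundary_distance(clf, X_train).mean())
print("non-members:", boundary_distance(clf, X_out).mean())
```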

Identifiability of Causal-based Fairness Notions: A State of the Art

no code implementations • 11 Mar 2022 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi

This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.

BIG-bench Machine Learning, Causal Inference, +2

On the impossibility of non-trivial accuracy under fairness constraints

no code implementations • 14 Jul 2021 • Carlos Pinzón, Catuscia Palamidessi, Pablo Piantanida, Frank Valencia

One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy.

Fairness

DOCTOR: A Simple Method for Detecting Misclassification Errors

1 code implementation • NeurIPS 2021 • Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, Pablo Piantanida

Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are deployed as "black boxes".

Object Recognition, Sentiment Analysis
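
A sketch of what we understand to be DOCTOR's simplest discriminator, $D_\alpha$, which needs only the softmax output of the black box: compute $\hat g(x) = 1 - \sum_y \hat P(y|x)^2$ and reject the prediction when $\hat g/(1-\hat g)$ exceeds a threshold $\gamma$ tuned on held-out data. The threshold value and toy probabilities below are illustrative, and the paper's other discriminators and perturbation variants are omitted.

```python
import numpy as np

def doctor_alpha_score(softmax_probs):
    """DOCTOR-style D_alpha statistic: g_hat(x) = 1 - sum_y p(y|x)^2, i.e.
    one minus the softmax "self-collision" probability; flag a prediction
    as a likely misclassification when g_hat / (1 - g_hat) is large."""
    g = 1.0 - np.sum(softmax_probs ** 2, axis=-1)
    return g / (1.0 - g)

probs = np.array([[0.96, 0.02, 0.02],   # confident -> low score, accept
                  [0.40, 0.35, 0.25]])  # ambiguous -> high score, reject
scores = doctor_alpha_score(probs)
gamma = 1.0  # threshold tuned on held-out data in practice
print(scores, scores > gamma)
```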

Bounding Information Leakage in Machine Learning

no code implementations • 9 May 2021 • Ganesh Del Grosso, Georg Pichler, Catuscia Palamidessi, Pablo Piantanida

We present a novel formalism, generalizing membership and attribute inference attack setups previously studied in the literature and connecting them to memorization and generalization.

Attribute, BIG-bench Machine Learning, +3

Information Leakage Games: Exploring Information as a Utility Function

no code implementations • 22 Dec 2020 • Mário S. Alvim, Konstantinos Chatzikokolakis, Yusuke Kawamoto, Catuscia Palamidessi

A common goal in the areas of secure information flow and privacy is to build effective defenses against unwanted leakage of information.

Survey on Causal-based Machine Learning Fairness Notions

no code implementations • 19 Oct 2020 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi

Addressing the problem of fairness is crucial for the safe use of machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child-maltreatment screening, disease diagnosis, and loan granting.

BIG-bench Machine Learning, Fairness

Machine learning fairness notions: Bridging the gap with real-world applications

no code implementations • 30 Jun 2020 • Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi

Fairness emerged as an important requirement to guarantee that Machine Learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities.

BIG-bench Machine Learning, Fairness, +1

Estimating g-Leakage via Machine Learning

1 code implementation • 9 May 2020 • Marco Romanelli, Konstantinos Chatzikokolakis, Catuscia Palamidessi, Pablo Piantanida

A feature of our approach is that it does not require estimating the conditional probabilities, and that it is suitable for a large class of ML algorithms.

BIG-bench Machine Learning
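
A minimal sketch of this idea for the special case of Bayes vulnerability (the identity gain function in the $g$-leakage framework): the test accuracy of any classifier trained to guess the secret from the observable lower-bounds the posterior Bayes vulnerability, yielding a leakage estimate without ever modelling the conditional probabilities. The toy channel and the kNN choice below are ours.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy channel: secret in {0, 1}, observable = secret + Gaussian noise.
rng = np.random.default_rng(4)
n = 20_000
secret = rng.integers(0, 2, n)
observable = (secret + rng.normal(0, 0.8, n)).reshape(-1, 1)

# The classifier's held-out accuracy estimates (from below) the posterior
# Bayes vulnerability, i.e. the adversary's best guessing probability.
split = n // 2
clf = KNeighborsClassifier(n_neighbors=50).fit(observable[:split], secret[:split])
v_post = clf.score(observable[split:], secret[split:])
v_prior = max(np.mean(secret == 0), np.mean(secret == 1))
print("multiplicative Bayes leakage ~", v_post / v_prior)
```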

Feature selection in machine learning: Rényi min-entropy vs Shannon entropy

no code implementations • 27 Jan 2020 • Catuscia Palamidessi, Marco Romanelli

Many algorithms for feature selection in the literature have adopted the Shannon-entropy-based mutual information.

BIG-bench Machine Learning, feature selection
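
For concreteness, a sketch of the two ranking criteria on discrete data: Shannon mutual information versus Rényi min-entropy leakage (the log-ratio of posterior to prior Bayes vulnerability of the label given the feature). The toy features are ours; the paper's contribution concerns which criterion better predicts downstream classification performance.

```python
import numpy as np
from collections import Counter

def shannon_mi(feature, label):
    """Empirical I(X;Y) = sum_{x,y} p(x,y) log2 p(x,y) / (p(x) p(y))."""
    n = len(feature)
    pxy, px, py = Counter(zip(feature, label)), Counter(feature), Counter(label)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def min_entropy_leakage(feature, label):
    """Renyi min-entropy leakage of the label Y given the feature X:
    log2(sum_x max_y p(x, y)) - log2(max_y p(y))."""
    n = len(feature)
    pxy, py = Counter(zip(feature, label)), Counter(label)
    v_post = sum(max(c for (x2, y), c in pxy.items() if x2 == x)
                 for x in set(feature)) / n
    return np.log2(v_post) - np.log2(max(py.values()) / n)

rng = np.random.default_rng(5)
label = rng.integers(0, 2, 5000)
informative = label ^ (rng.random(5000) < 0.1).astype(int)  # noisy copy
noise = rng.integers(0, 2, 5000)                            # independent
for name, feat in [("informative", informative), ("noise", noise)]:
    print(name, shannon_mi(feat, label), min_entropy_leakage(feat, label))
```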

Optimal Obfuscation Mechanisms via Machine Learning

1 code implementation • 1 Apr 2019 • Marco Romanelli, Konstantinos Chatzikokolakis, Catuscia Palamidessi

The idea is to set up two nets: the generator, which tries to produce an optimal obfuscation mechanism to protect the data, and the classifier, which tries to de-obfuscate the data.

BIG-bench Machine Learning
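
A generic adversarial-training sketch of this two-net setup, in PyTorch; the architectures, loss weights, and data below are our own stand-ins, not the paper's mechanism. The classifier `C` learns to recover the secret from the obfuscated data, while the generator `G` learns additive noise that defeats `C`, subject to a distortion penalty that preserves utility.

```python
import torch
import torch.nn as nn

# Toy data: a 2-D observation correlated with a binary secret attribute.
torch.manual_seed(0)
secret = torch.randint(0, 2, (512,))
data = secret.float().unsqueeze(1) + 0.3 * torch.randn(512, 2)

G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))  # obfuscator
C = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))  # attacker
opt_g = torch.optim.Adam(G.parameters(), lr=1e-2)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
utility_weight = 0.1  # distortion penalty keeps the obfuscated data useful

for step in range(200):
    # Classifier step: learn to de-obfuscate, i.e. predict the secret.
    obf = data + G(data)
    loss_c = ce(C(obf.detach()), secret)
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

    # Generator step: maximize the classifier's loss, minimize distortion.
    obf = data + G(data)
    loss_g = -ce(C(obf), secret) + utility_weight * (obf - data).pow(2).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("attacker loss after training:", loss_c.item())
```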

F-BLEAU: Fast Black-box Leakage Estimation

1 code implementation • 4 Feb 2019 • Giovanni Cherubin, Konstantinos Chatzikokolakis, Catuscia Palamidessi

The state-of-the-art method for estimating these leakage measures is the frequentist paradigm, which approximates the system's internals by looking at the frequencies of its inputs and outputs.

Cryptography and Security
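
A sketch of that frequentist baseline for Bayes leakage: tabulate empirical input/output frequencies and read the posterior vulnerability off the joint table. (F-BLEAU's point is that nearest-neighbor estimators converge faster than this approach on large or continuous output spaces.) The parity-channel example is ours.

```python
import numpy as np
from collections import Counter

def frequentist_bayes_leakage(secrets, observables):
    """Frequentist estimate of multiplicative Bayes leakage in bits:
    posterior vulnerability V(X|Y) = sum_y max_x p(x, y), compared with
    the prior vulnerability max_x p(x)."""
    n = len(secrets)
    pxy = Counter(zip(secrets, observables))
    v_post = sum(max(c for (x, y2), c in pxy.items() if y2 == y)
                 for y in set(observables)) / n
    v_prior = max(Counter(secrets).values()) / n
    return np.log2(v_post / v_prior)

# Toy system: the observable reveals exactly the secret's parity.
rng = np.random.default_rng(6)
secret = rng.integers(0, 4, 100_000)
observable = secret % 2
print(frequentist_bayes_leakage(secret, observable))  # ~1 bit
```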

Geo-Indistinguishability: Differential Privacy for Location-Based Systems

2 code implementations • 10 Dec 2012 • Miguel E. Andrés, Nicolás E. Bordenabe, Konstantinos Chatzikokolakis, Catuscia Palamidessi

The growing popularity of location-based systems, allowing unknown/untrusted servers to easily collect huge amounts of information regarding users' location, has recently started raising serious privacy concerns.

Cryptography and Security (ACM classes: C.2.0; K.4.1)
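
A sketch of the planar Laplace mechanism this paper proposes for sanitizing locations: draw a uniform angle and a radius from the distribution whose density decays as $e^{-\varepsilon r}$, which can be sampled in closed form via the Lambert W function. The coordinates and epsilon below are illustrative (and in degrees for simplicity; real deployments typically work in meters after projecting coordinates).

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(lon, lat, epsilon, rng):
    """Geo-indistinguishable location release: the noise density decays as
    exp(-epsilon * distance), so reports from nearby locations are nearly
    indistinguishable. The radius uses the inverse CDF via Lambert W."""
    theta = rng.uniform(0, 2 * np.pi)
    p = rng.uniform(0, 1)
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / epsilon
    return lon + r * np.cos(theta), lat + r * np.sin(theta)

rng = np.random.default_rng(7)
print(planar_laplace(2.3522, 48.8566, epsilon=np.log(2), rng=rng))
```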
