Search Results for author: Héber H. Arcolezi

Found 6 papers, 4 papers with code

Causal Discovery Under Local Privacy

no code implementations • 7 Nov 2023 • Rūta Binkytė, Carlos Pinzón, Szilvia Lestyán, Kangsoo Jung, Héber H. Arcolezi, Catuscia Palamidessi

It is based on the application of controlled noise at the interface between the server that stores and processes the data, and the data consumers.
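The "controlled noise" mentioned in the snippet refers to a local-privacy obfuscation step. As a rough sketch (the function name and parameters are illustrative, not taken from the paper), the classic k-ary Generalized Randomized Response mechanism perturbs a categorical value before it leaves the data holder:

```python
import math
import random

def grr(value, domain, epsilon):
    """Generalized Randomized Response (GRR), a standard LDP mechanism:
    report the true value with probability e^eps / (e^eps + k - 1),
    otherwise report a uniformly chosen *other* value from the domain."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    # lie: pick uniformly among the k - 1 remaining values
    others = [v for v in domain if v != value]
    return random.choice(others)
```

Smaller `epsilon` means the report is closer to uniform (more privacy, less utility); larger `epsilon` means the true value is reported almost surely.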

Causal Discovery

On the Utility Gain of Iterative Bayesian Update for Locally Differentially Private Mechanisms

1 code implementation • 15 Jul 2023 • Héber H. Arcolezi, Selene Cerna, Catuscia Palamidessi

This paper investigates the utility gain of using Iterative Bayesian Update (IBU) for private discrete distribution estimation using data obfuscated with Locally Differentially Private (LDP) mechanisms.
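IBU is an expectation-maximization-style procedure: given the empirical distribution of obfuscated reports and the known obfuscation channel of the LDP mechanism, it iteratively refines an estimate of the true distribution. A minimal sketch (variable names are illustrative, not from the paper's code):

```python
import numpy as np

def ibu(obs_counts, channel, n_iter=100):
    """Iterative Bayesian Update: estimate the true discrete distribution
    from LDP-obfuscated reports.  channel[y, x] = P(report y | true x)."""
    m = obs_counts / obs_counts.sum()   # empirical distribution of reports
    k = channel.shape[1]
    p = np.full(k, 1.0 / k)             # uniform prior over true values
    for _ in range(n_iter):
        q = channel @ p                 # predicted distribution of reports
        p = p * (channel.T @ (m / q))   # Bayesian (EM) update
        p /= p.sum()                    # renormalize
    return p
```

With a noiseless (identity) channel the update converges in one step to the empirical distribution; with a GRR-style channel it "inverts" the noise statistically instead of per report.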

Privacy Preserving

(Local) Differential Privacy has NO Disparate Impact on Fairness

1 code implementation • 25 Apr 2023 • Héber H. Arcolezi, Karima Makhlouf, Catuscia Palamidessi

However, as the collection of multiple sensitive attributes becomes more prevalent across various industries, collecting a single sensitive attribute under LDP may not be sufficient.

Attribute • Fairness +1

Differentially Private Multivariate Time Series Forecasting of Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation?

1 code implementation • 1 May 2022 • Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara Al Bouna, Xiaokui Xiao

As shown in the results, differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as non-private deep learning models, with the loss in performance varying between $0.57\%$ and $2.8\%$.
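Of the two approaches compared in the title, gradient perturbation is typically realized DP-SGD-style: clip each per-example gradient, average, and add Gaussian noise before the optimizer step. A simplified sketch of that one step (names and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def noisy_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD-style gradient-perturbation step: clip each per-example
    gradient to L2 norm clip_norm, average, and add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # noise scale is proportional to the clipping bound (sensitivity)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Input perturbation, by contrast, adds the noise to the training data once up front, so the training loop itself stays unchanged.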

Decision Making • Multivariate Time Series Forecasting +1

Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning

1 code implementation • 2 Apr 2022 • Héber H. Arcolezi

The objective of this thesis is thus two-fold: O$_1$) To improve the utility and privacy in multiple frequency estimates under LDP guarantees, which is fundamental to statistical learning.

BIG-bench Machine Learning • Privacy Preserving
