Search Results for author: Giuseppe Ateniese

Found 8 papers, 6 papers with code

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

1 code implementation · 14 Nov 2021 · Dario Pasquini, Danilo Francati, Giuseppe Ateniese

Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks.

Federated Learning
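The secure aggregation the abstract refers to can be illustrated with pairwise additive masking: each pair of clients shares a mask that one adds and the other subtracts, so the masks cancel only when the server sums every update. A minimal sketch with scalar updates and toy random masks in place of a real key-agreement protocol:

```python
import random

def mask_updates(updates, seed=0):
    """Pairwise additive masking: client i adds the shared mask for each
    peer j > i and subtracts it for each peer j < i, so the masks cancel
    in the aggregate sum."""
    rng = random.Random(seed)
    n = len(updates)
    # Toy pairwise masks; in practice these are derived via key agreement.
    masks = {(i, j): rng.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = sum(masks[(i, j)] for j in range(i + 1, n)) \
          - sum(masks[(j, i)] for j in range(i))
        masked.append(u + m)
    return masked

updates = [0.5, -0.2, 0.9]
masked = mask_updates(updates)
# The server sees only masked values, yet their sum equals the true sum.
assert abs(sum(masked) - sum(updates)) < 1e-9
```

This is exactly the property the paper's attack works around: the server learns the aggregate but not the value or source of any individual update.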

Unleashing the Tiger: Inference Attacks on Split Learning

3 code implementations · 4 Dec 2020 · Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

We investigate the security of Split Learning -- a novel collaborative machine learning framework that achieves peak performance while requiring minimal resource consumption.

Federated Learning
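The split-learning setup the abstract refers to cuts a model in two: the client computes up to a cut layer and sends only the intermediate activations ("smashed data") to the server, never the raw input. A toy sketch of one forward pass, with hypothetical layer sizes:

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def dense_relu(x, w):
    # One dense layer followed by ReLU.
    return [max(sum(a * b for a, b in zip(x, col)), 0.0) for col in zip(*w)]

# Hypothetical split of a small MLP: first layer on the client,
# second layer on the server.
W_client = rand_matrix(4, 8)
W_server = rand_matrix(8, 3)

def client_forward(x):
    # Only these cut-layer activations ("smashed data") leave the client.
    return dense_relu(x, W_client)

def server_forward(smashed):
    # The server completes the forward pass from the cut layer onward.
    return dense_relu(smashed, W_server)

x = [0.1, -0.4, 0.7, 0.2]
out = server_forward(client_forward(x))
print(len(out))  # 3
```

The paper's inference attacks target precisely those smashed activations, which is why sending them in place of raw data is weaker protection than it appears.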

Interpretable Probabilistic Password Strength Meters via Deep Learning

1 code implementation · 15 Apr 2020 · Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

Probabilistic password strength meters have proven to be the most accurate tools for measuring password strength.
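The idea behind a probabilistic strength meter is to score a password by its probability under a model trained on leaked passwords: the more probable, the weaker. A minimal sketch using a character bigram model with add-alpha smoothing (the paper itself uses deep models; the tiny training set here is purely illustrative):

```python
import math
from collections import Counter

def train_bigram(passwords):
    # Count character-to-character transitions, with "^" as a start symbol.
    counts, totals = Counter(), Counter()
    for pw in passwords:
        prev = "^"
        for ch in pw:
            counts[(prev, ch)] += 1
            totals[prev] += 1
            prev = ch
    return counts, totals

def log_prob(pw, counts, totals, alpha=1.0, vocab=95):
    # Add-alpha smoothed log-probability over printable ASCII;
    # a higher value means a more guessable (weaker) password.
    lp, prev = 0.0, "^"
    for ch in pw:
        num = counts[(prev, ch)] + alpha
        den = totals[prev] + alpha * vocab
        lp += math.log(num / den)
        prev = ch
    return lp

model = train_bigram(["password", "pass1234", "letmein"])
weak = log_prob("password", *model)
strong = log_prob("zq9!Xk2#", *model)
assert weak > strong  # the common pattern is rated more probable, hence weaker
```

An interpretable meter of this kind can also attribute the score to individual characters, pointing the user at the weakest part of the password.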

What Are GANs Useful For?

no code implementations · ICLR 2018 · Pablo M. Olmos, Briland Hitaj, Paolo Gasti, Giuseppe Ateniese, Fernando Perez-Cruz

In this paper, we observe that even though GANs might not generate samples from the underlying distribution (or at least we cannot tell), they do capture some structure of the data in that high-dimensional space.

Density Estimation

PassGAN: A Deep Learning Approach for Password Guessing

3 code implementations · 1 Sep 2017 · Briland Hitaj, Paolo Gasti, Giuseppe Ateniese, Fernando Perez-Cruz

State-of-the-art password guessing tools, such as HashCat and John the Ripper, enable users to check billions of passwords per second against password hashes.

BIG-bench Machine Learning
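The hash-checking loop that tools like HashCat and John the Ripper perform can be sketched as a dictionary attack: hash each candidate password and test it against the target set. A minimal sketch with SHA-256 (real crackers support many hash formats and run this loop on GPUs; the leaked hash here is illustrative):

```python
import hashlib

def crack(target_hashes, candidates):
    # Core guessing loop: hash each candidate and check membership.
    # PassGAN's contribution is generating better candidate lists.
    found = {}
    for pw in candidates:
        h = hashlib.sha256(pw.encode()).hexdigest()
        if h in target_hashes:
            found[h] = pw
    return found

leaked = {hashlib.sha256(b"letmein").hexdigest()}
recovered = crack(leaked, ["123456", "letmein", "qwerty"])
print(sorted(recovered.values()))  # ['letmein']
```

Since membership tests against a hash set are O(1), throughput is bounded mainly by the hash function itself, which is why fast unsalted hashes fall to billions of guesses per second.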

Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning

1 code implementation · 24 Feb 2017 · Briland Hitaj, Giuseppe Ateniese, Fernando Perez-Cruz

Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper.

Privacy Preserving
