Search Results for author: Dario Pasquini

Found 9 papers, 8 papers with code

Can Decentralized Learning be more robust than Federated Learning?

no code implementations • 7 Mar 2023 • Mathilde Raynal, Dario Pasquini, Carmela Troncoso

Decentralized Learning (DL) is a peer-to-peer learning approach that allows a group of users to jointly train a machine learning model. A minimal sketch of one such training round follows this entry.

Federated Learning
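
The snippet below is a minimal, illustrative sketch of the setting described in this entry: each peer trains locally and then averages its parameters with its neighbours over a gossip topology, with no central server involved. The ring topology, learning rate, and local_update placeholder are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of one decentralized-learning round (illustrative setup):
# each peer trains locally, then averages parameters with its neighbours;
# no central server is involved.
import numpy as np

rng = np.random.default_rng(0)
n_peers, dim = 4, 5
# Ring topology (assumption): each peer exchanges with its two neighbours.
neighbours = {i: [(i - 1) % n_peers, (i + 1) % n_peers] for i in range(n_peers)}
params = [rng.normal(size=dim) for _ in range(n_peers)]

def local_update(w, lr=0.1):
    # Placeholder for a local SGD step on the peer's private data.
    fake_gradient = rng.normal(size=w.shape)
    return w - lr * fake_gradient

for _ in range(10):  # communication rounds
    params = [local_update(w) for w in params]
    # Gossip averaging: each peer mixes its model with its neighbours' models.
    params = [
        np.mean([params[i]] + [params[j] for j in neighbours[i]], axis=0)
        for i in range(n_peers)
    ]

print("peer 0 parameters after 10 rounds:", params[0])
```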

Universal Neural-Cracking-Machines: Self-Configurable Password Models from Auxiliary Data

1 code implementation • 18 Jan 2023 • Dario Pasquini, Giuseppe Ateniese, Carmela Troncoso

Specifically, the model uses deep learning to capture the correlation between the auxiliary data of a group of users (e.g., users of a web application) and their passwords.
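
As a hedged illustration of the conditioning described above (not the paper's architecture), the sketch below encodes a user's auxiliary features and uses them to initialise a character-level autoregressive password model, so next-character probabilities depend on the user's context. The vocabulary size, AUX_DIM, and the GRU decoder are hypothetical choices made for the example.

```python
# Minimal sketch (not the paper's architecture) of a password model that is
# conditioned on per-user auxiliary data: an encoder summarises the auxiliary
# features and biases a character-level autoregressive decoder.
import torch
import torch.nn as nn

VOCAB = 128     # ASCII characters (assumption for illustration)
AUX_DIM = 16    # size of the auxiliary-data feature vector (hypothetical)

class ConditionalPasswordModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.aux_encoder = nn.Sequential(nn.Linear(AUX_DIM, hidden), nn.ReLU())
        self.char_emb = nn.Embedding(VOCAB, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, chars, aux):
        # Auxiliary data sets the initial hidden state of the decoder,
        # so next-character probabilities depend on the user context.
        h0 = self.aux_encoder(aux).unsqueeze(0)      # (1, batch, hidden)
        out, _ = self.rnn(self.char_emb(chars), h0)  # (batch, len, hidden)
        return self.head(out)                        # next-character logits

model = ConditionalPasswordModel()
chars = torch.randint(0, VOCAB, (2, 8))  # two toy password prefixes
aux = torch.randn(2, AUX_DIM)            # two users' auxiliary features
print(model(chars, aux).shape)           # torch.Size([2, 8, 128])
```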

On the (In)security of Peer-to-Peer Decentralized Machine Learning

1 code implementation • 17 May 2022 • Dario Pasquini, Mathilde Raynal, Carmela Troncoso

In this work, we carry out the first in-depth privacy analysis of Decentralized Learning -- a collaborative machine learning framework aimed at addressing the main limitations of federated learning. A sketch of the exposure underlying this analysis follows this entry.

BIG-bench Machine Learning • Federated Learning • +1
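
A hedged sketch of the basic exposure that motivates such an analysis: a peer in a decentralized topology receives a neighbour's model directly, so that neighbour's individual update can be reconstructed, whereas a server behind secure aggregation only ever sees a sum over many users. The learning rate and vectors below are purely illustrative and not taken from the paper.

```python
# Minimal sketch of why decentralized learning exposes more than an aggregate:
# a peer receives its neighbour's model directly, so it can reconstruct that
# neighbour's individual update (values and learning rate are illustrative).
import numpy as np

rng = np.random.default_rng(1)
w_prev = rng.normal(size=5)               # model the neighbour held last round
neighbour_gradient = rng.normal(size=5)   # neighbour's private local gradient
w_new = w_prev - 0.1 * neighbour_gradient # model the neighbour sends this round

# The adversarial peer sees both models, so the neighbour's update is exposed;
# under secure aggregation a server would only see a sum over many users.
recovered = (w_prev - w_new) / 0.1
print(np.allclose(recovered, neighbour_gradient))  # True
```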

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

1 code implementation • 14 Nov 2021 • Dario Pasquini, Danilo Francati, Giuseppe Ateniese

Indeed, the use of secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data attribution attacks. A toy sketch of this aggregation property follows this entry.

Federated Learning
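
To make the aggregation property mentioned above concrete, here is a toy sketch of one common secure-aggregation idea, pairwise random masks that cancel in the sum: the server can recover the aggregate update but no individual contribution. This illustrates the general mechanism only; it is not the specific protocol or the attack considered in the paper.

```python
# Toy sketch of secure aggregation via pairwise masking: masks cancel in the
# sum, so the server sees only the aggregate, never an individual update.
import numpy as np

rng = np.random.default_rng(2)
n_users, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_users)]

# Each pair (i, j) with i < j agrees on a random mask m_ij;
# user i adds it to its update, user j subtracts it.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_users) for j in range(i + 1, n_users)}

def masked(i):
    out = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out

server_view = [masked(i) for i in range(n_users)]      # individually opaque
aggregate = np.sum(server_view, axis=0)                # masks cancel here
print(np.allclose(aggregate, np.sum(updates, axis=0))) # True
```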

Unleashing the Tiger: Inference Attacks on Split Learning

3 code implementations • 4 Dec 2020 • Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

We investigate the security of Split Learning -- a novel collaborative machine learning framework that enables peak performance by requiring minimal resource consumption. A minimal sketch of the split learning setup follows this entry.

Federated Learning
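
For context, here is a minimal sketch of the split learning setup studied in this paper: the client runs only the first layers on its private data and sends the intermediate ("smashed") activations to the server, which completes the forward pass. The layer sizes and the two-part network below are arbitrary illustrative choices.

```python
# Minimal sketch of split learning's division of labour (illustrative sizes):
# the client runs the first layers on its raw data and sends the intermediate
# activations; the server runs the rest of the network.
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # runs on device
server_part = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(8, 32)      # private client data never leaves the device
smashed = client_part(x)    # "smashed data" sent over the network
logits = server_part(smashed)  # server finishes the forward pass
print(logits.shape)         # torch.Size([8, 10])
# The inference attacks studied in the paper exploit this client/server exchange.
```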

Interpretable Probabilistic Password Strength Meters via Deep Learning

1 code implementation • 15 Apr 2020 • Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

Probabilistic password strength meters have proven to be the most accurate tools to measure password strength.
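
As a toy illustration of the idea (not the paper's model), a probabilistic strength meter scores a password by the negative log-probability a character model assigns to it, and the per-character terms make the score interpretable. The corpus, Laplace smoothing, and unigram model below are stand-ins for a real deep-learning estimator.

```python
# Toy probabilistic strength meter: score a password by the negative
# log-probability a character model assigns to it, keeping per-character
# terms so weak segments can be highlighted. Unigram model for illustration.
import math
from collections import Counter

leaked = ["password", "123456", "qwerty", "letmein", "dragon"]  # toy corpus
counts = Counter("".join(leaked))
total = sum(counts.values())

def char_prob(c, alpha=1.0, vocab=95):
    # Laplace-smoothed character probability (printable ASCII assumed).
    return (counts[c] + alpha) / (total + alpha * vocab)

def strength(password):
    # Per-character negative log-probabilities: higher means less guessable,
    # and each term shows how much that character contributes to the score.
    per_char = [-math.log2(char_prob(c)) for c in password]
    return sum(per_char), per_char

print(strength("password1"))  # low total: characters common in the corpus
print(strength("T7#kq!Wz"))   # higher total: characters rare in the corpus
```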

Adversarial Out-domain Examples for Generative Models

1 code implementation • 7 Mar 2019 • Dario Pasquini, Marco Mingione, Massimo Bernaschi

Deep generative models are rapidly becoming a common tool for researchers and developers.

Adversarial Attack • Image Generation
