Search Results for author: Daniele Romanini

Found 2 papers, 2 papers with code

Practical Defences Against Model Inversion Attacks for Split Neural Networks

1 code implementation · 12 Apr 2021 · Tom Titcombe, Adam J. Hall, Pavlos Papadopoulos, Daniele Romanini

We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server.
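The attack surface described above comes from the split itself: the client sends intermediate ("smashed") activations to the computational server, which can try to invert them back into the private inputs. A minimal sketch of that data flow, using a made-up two-layer split (not the paper's architecture), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split (illustrative only, not the paper's model): the client
# holds the first layer, the computational server holds the rest.
W_client = rng.standard_normal((784, 128))   # client-side weights
W_server = rng.standard_normal((128, 10))    # server-side weights

x = rng.standard_normal((4, 784))            # private client inputs
smashed = np.maximum(x @ W_client, 0.0)      # "smashed" activations sent to the server
logits = smashed @ W_server                  # server completes the forward pass

# Threat model: a malicious server observes `smashed` for every batch and can
# train an inversion network g so that g(smashed) approximates x,
# reconstructing the client's private data.
print(smashed.shape, logits.shape)
```

The key point is that the server never needs the raw inputs to run its half of the network, yet the activations it does receive can leak enough information to reconstruct them.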

Federated Learning
