1 code implementation • 20 Aug 2021 • Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data holders. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected.
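The inversion described above can be illustrated with a toy sketch: a server that knows only the client's architecture alternately optimizes a surrogate model and an input estimate so that the surrogate's output matches the intermediate activation it observed. This is a minimal numpy illustration, not the paper's implementation; the one-layer client, learning rate, and iteration count are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: the client "network" is a single linear layer W.
# The server knows the shapes (architecture) but not the weights.
d_in, d_out = 8, 4
W_client = rng.normal(size=(d_out, d_in))
x_true = rng.normal(size=(d_in,))
z = W_client @ x_true  # intermediate output the server observes

# Server side: alternately update a surrogate model W_hat and an input
# estimate x_hat to minimize ||W_hat @ x_hat - z||^2.
W_hat = rng.normal(size=(d_out, d_in))
x_hat = rng.normal(size=(d_in,))
lr = 0.01
for _ in range(2000):
    r = W_hat @ x_hat - z                 # residual
    x_hat -= lr * 2 * W_hat.T @ r         # input-update step
    r = W_hat @ x_hat - z
    W_hat -= lr * 2 * np.outer(r, x_hat)  # model-update step

final_loss = float(np.sum((W_hat @ x_hat - z) ** 2))
```

On this underdetermined toy problem the alternating steps drive the residual to near zero; recovering the exact input additionally requires the priors and regularization discussed in the paper.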
no code implementations • 21 Aug 2022 • Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz
The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks.
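The reference-point idea can be sketched as follows: each worker's update is clipped relative to the previous momentum rather than relative to zero, so a Byzantine worker can pull the aggregate at most a bounded distance from the reference. A minimal single-step sketch, with the clipping radius `tau` and worker counts chosen as illustrative assumptions:

```python
import numpy as np

def centered_clip(grads, ref, tau):
    """One centered-clipping aggregation step: clip each worker's deviation
    from the reference point ref (e.g. the previous momentum) to radius tau,
    then average the clipped deviations around ref."""
    clipped = []
    for g in grads:
        dev = g - ref
        norm = np.linalg.norm(dev)
        scale = min(1.0, tau / norm) if norm > 0 else 1.0
        clipped.append(dev * scale)
    return ref + np.mean(clipped, axis=0)

rng = np.random.default_rng(1)
ref = np.zeros(5)                                   # previous momentum
honest = [rng.normal(0.0, 0.1, 5) for _ in range(8)]
byzantine = [np.full(5, 100.0) for _ in range(2)]   # large outlier updates
agg = centered_clip(honest + byzantine, ref, tau=1.0)
```

Because every clipped deviation has norm at most `tau`, the aggregate moves at most `tau` away from the reference point regardless of how extreme the Byzantine updates are.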
1 code implementation • 16 Feb 2023 • Ege Erdogan, Unat Teksen, Mehmet Salih Celiktenyildiz, Alptekin Kupcu, A. Ercument Cicek
Split learning enables efficient and privacy-aware training of a deep neural network by splitting it so that the clients (data holders) compute the first layers and share only the intermediate output with the central, compute-heavy server.
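The client/server split can be illustrated with a toy training step: only the cut-layer activation travels client-to-server, and only its gradient travels back. A minimal numpy sketch; the layer shapes, the ReLU cut layer, the server-held label, and the learning rate are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.normal(size=(4,))             # private input, never leaves the client
y = rng.normal(size=(2,))             # label (assumed held by the server here)
W1 = rng.normal(size=(6, 4)) * 0.5    # client-side layer
W2 = rng.normal(size=(2, 6)) * 0.5    # server-side layer

# Client side: compute the cut-layer activation and send it to the server.
h_pre = W1 @ x
h = np.maximum(h_pre, 0.0)            # ReLU; h is all the server receives

# Server side: finish the forward pass, compute the loss, and return only
# the gradient with respect to the received activation.
pred = W2 @ h
loss = 0.5 * np.sum((pred - y) ** 2)
grad_h = W2.T @ (pred - y)            # sent back to the client

# Client side: backpropagate through its own layer using grad_h.
grad_W1 = np.outer(grad_h * (h_pre > 0), x)
W1 -= 0.01 * grad_W1

new_loss = 0.5 * np.sum((W2 @ np.maximum(W1 @ x, 0.0) - y) ** 2)
```

The server never sees `x` or `W1`; the attack surface analyzed in this line of work is exactly the intermediate values `h` and `grad_h` that do cross the boundary.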
no code implementations • 9 Apr 2024 • Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz
Hence, inspired by sparse neural networks, we introduce a hybrid sparse Byzantine attack composed of two parts: one sparse in nature, attacking only certain NN locations with higher sensitivity, and the other more silent but accumulating over time. Each part ideally targets a different type of defence mechanism, and together they form a strong but imperceptible attack.
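The two-part structure can be sketched abstractly: a sparse spike confined to a few high-sensitivity coordinates, plus a small per-round bias that is individually negligible but accumulates across rounds. This is a schematic numpy illustration of the composition, not the paper's attack; the dimension, coordinate choice, and magnitudes are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 100              # model (gradient) dimension
k = 5                # number of high-sensitivity coordinates (assumed known)
sensitive = rng.choice(d, size=k, replace=False)

honest_update = rng.normal(0.0, 0.1, size=d)

# Part 1: sparse component -- a spike only on the sensitive coordinates.
spike = np.zeros(d)
spike[sensitive] = 0.5

# Part 2: silent component -- a tiny per-round bias that accumulates.
drift_per_round = np.full(d, 0.005)

rounds = 20
accumulated = np.zeros(d)
for _ in range(rounds):
    byz_update = honest_update + spike + drift_per_round
    accumulated += byz_update - honest_update  # total injected perturbation
```

Each round's silent component is small relative to an honest update, so norm- or distance-based defenses see little to flag, while the total injected perturbation keeps growing round over round.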