Search Results for author: Alptekin Kupcu

Found 5 papers, 3 papers with code

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning

1 code implementation • 20 Aug 2021 • Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek

We show that an honest-but-curious split learning server, equipped only with knowledge of the client neural network's architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected.
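The inversion half of this idea can be sketched in a toy setting: the server observes only the client's intermediate output and recovers the private input by gradient descent. Everything below is a hypothetical stand-in, not the paper's implementation — the client "network" is a single tanh layer, and the model-stealing half of the attack is skipped by assuming the server already holds a functionally similar clone of the client weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy client model; the server knows only its *architecture* (one tanh layer).
W_true = rng.standard_normal((8, 4)) * 0.5   # private client weights
x_true = rng.standard_normal(4) * 0.5        # private client input
z = np.tanh(W_true @ x_true)                 # intermediate output the server sees

# Assumption: the server already stole a functionally similar clone
# (that half of the attack is omitted here), so we reuse the true weights.
W = W_true.copy()

# Invert the input: minimize ||tanh(W x_hat) - z||^2 by gradient descent.
x_hat = np.zeros(4)
lr = 0.2
for _ in range(20000):
    t = np.tanh(W @ x_hat)
    grad = W.T @ ((t - z) * (1.0 - t * t))   # chain rule through tanh
    x_hat -= lr * grad

# x_hat now approximates the private input x_true without the client noticing:
# the server only ever used quantities it legitimately receives.
```

The same optimization loop, run jointly over a clone's weights and the inputs, is the shape of the full data-oblivious attack; this sketch isolates the input-recovery step.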

SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning

1 code implementation • 20 Aug 2021 • Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek

Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data holders.

Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning

no code implementations • 21 Aug 2022 • Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz

The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks.

Federated Learning · Image Classification
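The centered-clipping aggregation rule the abstract refers to can be sketched directly: the previous momentum serves as the reference point, and each worker's deviation from it is norm-clipped before averaging. The demo below is a minimal numpy illustration with assumed values (one naive large-norm attacker among ten workers); note the paper's point is precisely that a history-aware Byzantine can still evade this defence.

```python
import numpy as np

def clip(u, tau):
    """Scale u down so its Euclidean norm is at most tau."""
    norm = np.linalg.norm(u)
    return u if norm <= tau else u * (tau / norm)

def centered_clipping(updates, v, tau, n_iters=3):
    """Centered clipping: aggregate clipped deviations from the
    reference point v (the previous iteration's momentum)."""
    for _ in range(n_iters):
        v = v + np.mean([clip(m - v, tau) for m in updates], axis=0)
    return v

rng = np.random.default_rng(0)
honest = [np.ones(5) + 0.05 * rng.standard_normal(5) for _ in range(9)]
byzantine = [50.0 * np.ones(5)]          # one naive large-norm attacker
updates = honest + byzantine

honest_mean = np.mean(honest, axis=0)
plain_mean = np.mean(updates, axis=0)    # badly skewed by the attacker
v_cc = centered_clipping(updates, v=np.zeros(5), tau=3.0)
```

Against this naive attacker the clipped aggregate stays close to the honest mean while the plain average is pulled far away; the attack in the paper instead shapes its updates using the momentum history so they survive the clipping step.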

SplitOut: Out-of-the-Box Training-Hijacking Detection in Split Learning via Outlier Detection

1 code implementation • 16 Feb 2023 • Ege Erdogan, Unat Teksen, Mehmet Salih Celiktenyildiz, Alptekin Kupcu, A. Ercument Cicek

Split learning enables efficient and privacy-aware training of a deep neural network by splitting the network so that the clients (data holders) compute the first layers and share only the intermediate output with the compute-heavy central server.

Outlier Detection
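The split-network architecture described above can be sketched in a few lines: the client runs the first layer(s) on its private data and ships only the intermediate activations to the server, which runs the rest. Layer sizes and weights below are arbitrary assumptions for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client side: first layer only; the raw input never leaves the client.
W_client = rng.standard_normal((16, 8)) * 0.1

def client_forward(x):
    # ReLU activation; this intermediate output is all that gets shared.
    return np.maximum(0.0, W_client @ x)

# Server side: the remaining, compute-heavy layers.
W_server = rng.standard_normal((4, 16)) * 0.1

def server_forward(h):
    return W_server @ h

x = rng.standard_normal(8)   # private client data
h = client_forward(x)        # only this tensor crosses the network
y = server_forward(h)        # server completes the forward pass
```

SplitOut's detection idea builds on this setup: since the client already has data and the first layers, it can locally simulate what honest training gradients from the server should look like and flag the server's actual gradients as outliers when they deviate.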

Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning

no code implementations • 9 Apr 2024 • Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz

Hence, inspired by sparse neural networks, we introduce a hybrid sparse Byzantine attack composed of two parts: one exhibits a sparse nature and attacks only certain NN locations with higher sensitivity, while the other is more silent but accumulates over time; each ideally targets a different type of defence mechanism, and together they form a strong but imperceptible attack.

Federated Learning · Network Pruning · +1
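The two-part structure the abstract describes can be illustrated schematically: an aggressive perturbation confined to the few most sensitive coordinates, plus a small dense drift that accumulates across rounds. The function below is a hypothetical toy sketch under assumed magnitudes (`-5.0` sparse scaling, `eps` drift), not the paper's attack; in particular the real sensitivity scores come from pruning-style analysis of the network.

```python
import numpy as np

def hybrid_sparse_attack(honest_mean, sensitivity, k=5, eps=0.01, state=None):
    """Toy two-part Byzantine update: a sparse aggressive part on the k
    most sensitive coordinates, plus a silent dense part that accumulates
    across rounds via `state`. All magnitudes are illustrative assumptions."""
    d = honest_mean.shape[0]
    if state is None:
        state = np.zeros(d)
    # Part 1 (sparse, aggressive): hit only the highest-sensitivity locations.
    sparse = np.zeros(d)
    top = np.argsort(-sensitivity)[:k]
    sparse[top] = -5.0 * honest_mean[top]
    # Part 2 (dense, silent): a tiny drift that compounds over rounds.
    state = state + eps * np.sign(honest_mean)
    return honest_mean + sparse + state, state

honest = np.ones(20)                  # stand-in for the honest mean update
sens = np.arange(20.0)                # stand-in sensitivity scores
attacked, s1 = hybrid_sparse_attack(honest, sens)
attacked2, s2 = hybrid_sparse_attack(honest, sens, state=s1)
```

The design intent mirrors the abstract: the sparse part is strong but touches few coordinates, while the dense part stays under per-round detection thresholds and only matters in aggregate.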
