Search Results for author: Alptekin Kupcu

Found 4 papers, 3 papers with code

Defense Mechanisms Against Training-Hijacking Attacks in Split Learning

1 code implementation • 16 Feb 2023 • Ege Erdogan, Unat Teksen, Mehmet Salih Celiktenyildiz, Alptekin Kupcu, A. Ercument Cicek

Split learning achieves this by splitting a neural network between a client and a server such that the client computes the initial set of layers, and the server computes the rest.
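Below is a minimal sketch of the split-learning partitioning described in the abstract: the client computes the initial layers and sends the intermediate ("smashed") activations to the server, which computes the rest. The layer sizes, the split point, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal split-learning sketch (assumed shapes and split point, not the paper's code).
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    # Initial layers, kept on the client; only activations leave the device.
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())

    def forward(self, x):
        return self.layers(x)

class ServerNet(nn.Module):
    # Remaining layers, computed on the server from the client's activations.
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, smashed):
        return self.layers(smashed)

client, server = ClientNet(), ServerNet()
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))

# One training step: the forward pass is split across the two parties, and
# gradients flow back through the smashed activations to the client layers.
smashed = client(x)
loss = nn.functional.cross_entropy(server(smashed), y)
loss.backward()
```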

Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning

no code implementations • 21 Aug 2022 • Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz

The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks.

Federated Learning • Image Classification
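The sketch below illustrates the centered-clipping aggregation idea mentioned in the abstract: each worker update is clipped around the previous aggregate momentum, which acts as the reference point. The clipping radius `tau`, the single clipping iteration, and the toy updates are assumptions for illustration, not the paper's implementation.

```python
# Centered-clipping aggregation sketch (illustrative parameters, not the paper's code).
import torch

def centered_clipping(updates, v, tau=10.0, iters=1):
    """Aggregate worker update tensors around the reference point v."""
    for _ in range(iters):
        clipped = []
        for u in updates:
            diff = u - v
            # Shrink the deviation from v down to radius tau if it exceeds tau.
            scale = torch.clamp(tau / (diff.norm() + 1e-12), max=1.0)
            clipped.append(diff * scale)
        v = v + torch.stack(clipped).mean(dim=0)
    return v

# Toy example: two honest updates near 1.0 and one large Byzantine outlier.
updates = [torch.ones(5), 1.1 * torch.ones(5), 100.0 * torch.ones(5)]
v_prev = torch.zeros(5)  # previous-iteration momentum used as the reference
print(centered_clipping(updates, v_prev, tau=1.0))
```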

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning

1 code implementation • 20 Aug 2021 • Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek

(1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected.
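The following is a heavily simplified sketch of the kind of data-oblivious model inversion the abstract describes: a server that knows only the client architecture jointly optimizes a surrogate client model and candidate inputs so the surrogate's outputs match the intermediate activations it received. The shapes, optimizers, step counts, and alternating schedule are all assumptions, not the authors' UnSplit code.

```python
# Model inversion sketch against split learning (assumed setup, not the paper's code).
import torch
import torch.nn as nn

def client_arch():
    # Architecture assumed known to the server; weights and data are not.
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())

true_client = client_arch()                 # unknown weights, held by the client
x_true = torch.rand(8, 1, 28, 28)           # private client data
smashed = true_client(x_true).detach()      # activations observed by the server

surrogate = client_arch()                   # server's clone of the architecture
x_guess = torch.rand(8, 1, 28, 28, requires_grad=True)
opt_model = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
opt_input = torch.optim.Adam([x_guess], lr=1e-2)

for step in range(500):
    # Alternate: fit surrogate weights on even steps, refine input guesses on
    # odd steps, both against the observed smashed activations.
    opt_model.zero_grad()
    opt_input.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x_guess), smashed)
    loss.backward()
    (opt_model if step % 2 == 0 else opt_input).step()
```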

SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning

1 code implementation • 20 Aug 2021 • Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek

Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware use of the collective data of a group of data holders.
