Search Results for author: Baturalp Buyukates

Found 9 papers, 2 papers with code

Kick Bad Guys Out! Zero-Knowledge-Proof-Based Anomaly Detection in Federated Learning

no code implementations • 6 Oct 2023 • Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr, Chaoyang He

Federated Learning (FL) systems are vulnerable to adversarial attacks, where malicious clients submit poisoned models to prevent the global model from converging, or plant backdoors to induce the global model to misclassify some samples.

Anomaly Detection · Federated Learning

Proof-of-Contribution-Based Design for Collaborative Machine Learning on Blockchain

no code implementations • 27 Feb 2023 • Baturalp Buyukates, Chaoyang He, Shanshan Han, Zhiyong Fang, Yupeng Zhang, Jieyi Long, Ali Farahanchi, Salman Avestimehr

Our goal is to design a data marketplace for such decentralized collaborative/federated learning applications that simultaneously provides i) proof-of-contribution-based reward allocation so that the trainers are compensated based on their contributions to the trained model; ii) privacy-preserving decentralized model training by avoiding any data movement from data owners; iii) robustness against malicious parties (e.g., trainers aiming to poison the model); iv) verifiability in the sense that the integrity, i.e., correctness, of all computations in the data market protocol, including contribution assessment and outlier detection, is verifiable through zero-knowledge proofs; and v) efficient and universal design.

Federated Learning · Outlier Detection +1
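As a concrete illustration of goal i) above, the sketch below splits a reward pool in proportion to per-trainer contribution scores. How the scores themselves are computed, and how the allocation would be verified with zero-knowledge proofs, is outside the scope of this sketch; all names and values are hypothetical, not taken from the paper.

```python
def allocate_rewards(contribution_scores, reward_pool):
    """Split a reward pool in proportion to each trainer's (non-negative)
    contribution score. Purely illustrative of proportional reward allocation;
    the contribution-assessment and verification steps are omitted."""
    total = sum(max(s, 0.0) for s in contribution_scores.values())
    if total == 0:
        return {t: 0.0 for t in contribution_scores}
    return {t: reward_pool * max(s, 0.0) / total
            for t, s in contribution_scores.items()}

# example with made-up scores from some contribution-assessment step
print(allocate_rewards({"trainer_a": 0.5, "trainer_b": 0.3, "trainer_c": -0.1}, 100.0))
```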

Secure Federated Clustering

no code implementations • 31 May 2022 • Songze Li, Sizai Hou, Baturalp Buyukates, Salman Avestimehr

We consider a foundational unsupervised learning task of $k$-means data clustering, in a federated learning (FL) setting consisting of a central server and many distributed clients.

Clustering · Federated Learning
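For context, here is a minimal sketch of one round of federated $k$-means in the setting described in the snippet (a central server and many distributed clients): each client assigns its local points to the nearest centroid and returns only per-cluster sums and counts, and the server aggregates these to recompute the centroids. The secure, privacy-preserving machinery that the paper is actually about is omitted, and all function names are illustrative.

```python
import numpy as np

def client_update(points, centroids):
    """One client's local step: assign points to the nearest centroid and
    return per-cluster sums and counts (no raw data leaves the client)."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    k, d = centroids.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def server_round(client_data, centroids):
    """Server aggregates the clients' sums/counts and recomputes centroids."""
    k, d = centroids.shape
    total_sums, total_counts = np.zeros((k, d)), np.zeros(k)
    for points in client_data:
        s, c = client_update(points, centroids)
        total_sums += s
        total_counts += c
    nonempty = total_counts > 0
    centroids = centroids.copy()
    centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return centroids

# toy usage: three clients, each holding points around a different center
rng = np.random.default_rng(0)
clients = [rng.normal(loc=c, size=(50, 2)) for c in ([0, 0], [5, 5], [0, 5])]
centroids = rng.normal(size=(3, 2))
for _ in range(10):
    centroids = server_round(clients, centroids)
```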

Timely Communication in Federated Learning

no code implementations • 31 Dec 2020 • Baturalp Buyukates, Sennur Ulukus

Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.

Federated Learning
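The snippet describes one step of the scheme: at each iteration, the parameter server (PS) waits for $m$ available clients and sends them the current model. Below is a minimal sketch of such an iteration; the client-availability model and the plain averaging of returned updates are assumptions of this sketch, not details taken from the paper.

```python
import random

def fl_iteration(model, client_ids, m, local_update):
    """One PS iteration following the snippet above: wait until m clients are
    available, send them the current model, and average the returned updates."""
    available = set()
    while len(available) < m:                      # wait for m available clients
        available.add(random.choice(client_ids))   # a client becomes available
    updates = [local_update(model, cid) for cid in available]   # broadcast model
    return [sum(v) / len(v) for v in zip(*updates)]              # aggregate

# toy usage: the model is a list of floats, each client nudges it toward its own value
client_ids = list(range(10))
local_update = lambda model, cid: [w + 0.1 * (cid - w) for w in model]
model = [0.0, 0.0]
for _ in range(5):
    model = fl_iteration(model, client_ids, m=4, local_update=local_update)
```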

Gradient Coding with Dynamic Clustering for Straggler Mitigation

no code implementations • 3 Nov 2020 • Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In distributed synchronous gradient descent (GD), the per-iteration completion time is bottlenecked by the slowest, straggling, workers.

Clustering
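For background on the gradient-coding idea the title refers to, the sketch below implements the classic fractional-repetition scheme: workers are split into groups, every worker in a group holds the same s+1 data partitions and returns the sum of their partial gradients, so the master recovers the full gradient as long as one worker per group responds, tolerating any s stragglers. This is a generic illustration of gradient coding, not the dynamic-clustering construction proposed in the paper; the toy least-squares setup is made up.

```python
import numpy as np

def fractional_repetition_assignment(n_workers, s):
    """Split workers into groups of size s+1; every worker in group g holds
    the same s+1 data partitions (indexed like the workers in that group)."""
    assert n_workers % (s + 1) == 0
    n_groups = n_workers // (s + 1)
    groups = [list(range(g * (s + 1), (g + 1) * (s + 1))) for g in range(n_groups)]
    assignment = {}
    for g, parts in enumerate(groups):
        for wk in range(g * (s + 1), (g + 1) * (s + 1)):
            assignment[wk] = parts
    return assignment

def coded_gradient(worker_parts, grads):
    # each worker returns the sum of the partial gradients it computed
    return sum(grads[p] for p in worker_parts)

# toy example: gradient of a least-squares loss, split into n_workers partitions
rng = np.random.default_rng(0)
n_workers, s = 6, 1                              # tolerates any s = 1 straggler
X, y = rng.normal(size=(60, 5)), rng.normal(size=60)
w = np.zeros(5)
parts = np.array_split(np.arange(60), n_workers)
grads = [X[idx].T @ (X[idx] @ w - y[idx]) for idx in parts]

assignment = fractional_repetition_assignment(n_workers, s)
# pretend worker 0 straggles; the master uses the first reply from each group
responders = [wk for wk in range(n_workers) if wk != 0]
recovered, seen_groups = np.zeros(5), set()
for wk in responders:
    g = wk // (s + 1)
    if g not in seen_groups:
        recovered += coded_gradient(assignment[wk], grads)
        seen_groups.add(g)

assert np.allclose(recovered, sum(grads))   # full gradient despite the straggler
```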

Age-Based Coded Computation for Bias Reduction in Distributed Learning

no code implementations • 2 Jun 2020 • Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus

To mitigate estimator bias, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time.
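A toy stand-in for the ordering-operator idea in the snippet: each worker processes its assigned partitions in an order that is cyclically shifted every iteration, so when the iteration deadline cuts computation short (partial recovery), a different partition is the one left unfinished each time, spreading rather than concentrating the resulting bias. Everything here (the shift rule, the budget model, the names) is an assumption for illustration, not the paper's actual encoding framework.

```python
import numpy as np

def ordering_operator(partitions, t):
    """Cyclically shift a worker's computation order at iteration t, so a
    different subset of partitions risks being left unfinished each iteration."""
    shift = t % len(partitions)
    return partitions[shift:] + partitions[:shift]

def worker_partial_gradients(partitions, grads, t, budget):
    """Compute partial gradients in the time-varying order until the per-iteration
    budget runs out; unfinished partitions are skipped (partial recovery)."""
    done = ordering_operator(partitions, t)[:budget]
    return sum(grads[p] for p in done), done

# toy example: one worker holding 4 partitions, with time to finish only 3 of them
grads = [np.full(2, float(p)) for p in range(4)]      # stand-in partial gradients
for t in range(4):
    g, done = worker_partial_gradients([0, 1, 2, 3], grads, t, budget=3)
    print(t, done, g)   # the skipped partition changes with t, spreading the bias
```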
