Search Results for author: Baturalp Buyukates

Found 11 papers, 2 papers with code

Maverick-Aware Shapley Valuation for Client Selection in Federated Learning

no code implementations • 21 May 2024 • Mengwei Yang, Ismat Jarin, Baturalp Buyukates, Salman Avestimehr, Athina Markopoulou

In this paper, we first design a Maverick-aware Shapley valuation that fairly evaluates the contribution of Mavericks.
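The paper's Maverick-aware valuation itself is not reproduced in this excerpt. As a rough, illustrative baseline only, client contributions are often Shapley-estimated by averaging marginal utility gains over random client orderings; the `utility` function and client names below are hypothetical:

```python
import random

def monte_carlo_shapley(clients, utility, rounds=200, seed=0):
    """Estimate each client's Shapley value by averaging its marginal
    contribution over random orderings of the client set."""
    rng = random.Random(seed)
    values = {c: 0.0 for c in clients}
    for _ in range(rounds):
        order = clients[:]
        rng.shuffle(order)
        coalition = []
        prev = utility(coalition)
        for c in order:
            coalition.append(c)
            cur = utility(coalition)
            values[c] += cur - prev
            prev = cur
    return {c: v / rounds for c, v in values.items()}

# Toy additive utility: each client adds a fixed amount, so the Shapley
# values recover exactly the individual contributions.
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
vals = monte_carlo_shapley(list(contrib), lambda s: sum(contrib[c] for c in s))
```

The Maverick-aware variant in the paper adjusts this kind of valuation so clients holding rare data are not undervalued; the sketch above shows only the generic estimator.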

Federated Learning

Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification

no code implementations • 6 Oct 2023 • Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr, Chaoyang He

Federated Learning (FL) systems are susceptible to adversarial attacks, where malicious clients submit poisoned models to disrupt convergence or to plant backdoors that cause the global model to misclassify some samples.

Anomaly Detection Federated Learning

FedSecurity: Benchmarking Attacks and Defenses in Federated Learning and Federated LLMs

1 code implementation • 8 Jun 2023 • Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He

This paper introduces FedSecurity, an end-to-end benchmark that serves as a supplementary component of the FedML library for simulating adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).

Benchmarking Federated Learning

Proof-of-Contribution-Based Design for Collaborative Machine Learning on Blockchain

no code implementations • 27 Feb 2023 • Baturalp Buyukates, Chaoyang He, Shanshan Han, Zhiyong Fang, Yupeng Zhang, Jieyi Long, Ali Farahanchi, Salman Avestimehr

Our goal is to design a data marketplace for such decentralized collaborative/federated learning applications that simultaneously provides: i) proof-of-contribution-based reward allocation, so that trainers are compensated according to their contributions to the trained model; ii) privacy-preserving decentralized model training that avoids any data movement away from data owners; iii) robustness against malicious parties (e.g., trainers aiming to poison the model); iv) verifiability, in the sense that the integrity, i.e., correctness, of all computations in the data-market protocol, including contribution assessment and outlier detection, is verifiable through zero-knowledge proofs; and v) an efficient and universal design.

Federated Learning Outlier Detection +1

Secure Federated Clustering

no code implementations • 31 May 2022 • Songze Li, Sizai Hou, Baturalp Buyukates, Salman Avestimehr

We consider a foundational unsupervised learning task of $k$-means data clustering, in a federated learning (FL) setting consisting of a central server and many distributed clients.
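The paper's contribution is making this clustering *secure*; that protocol is not sketched here. As a plaintext illustration of the underlying federated $k$-means round (clients send per-cluster sums and counts, the server aggregates and recomputes centroids), under assumed list-of-lists data layouts:

```python
def assign(points, centroids):
    """Index of the nearest centroid for each point (squared distance)."""
    def d2(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return [min(range(len(centroids)), key=lambda j: d2(p, centroids[j]))
            for p in points]

def local_stats(points, centroids):
    """Per-cluster coordinate sums and counts computed on one client's data."""
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, j in zip(points, assign(points, centroids)):
        counts[j] += 1
        sums[j] = [s + pi for s, pi in zip(sums[j], p)]
    return sums, counts

def server_round(client_datasets, centroids):
    """Aggregate clients' sums/counts and recompute the global centroids."""
    k, dim = len(centroids), len(centroids[0])
    tot_sums = [[0.0] * dim for _ in range(k)]
    tot_counts = [0] * k
    for data in client_datasets:
        sums, counts = local_stats(data, centroids)
        for j in range(k):
            tot_counts[j] += counts[j]
            tot_sums[j] = [a + b for a, b in zip(tot_sums[j], sums[j])]
    new_centroids = []
    for j in range(k):
        if tot_counts[j]:
            new_centroids.append([s / tot_counts[j] for s in tot_sums[j]])
        else:
            new_centroids.append(centroids[j])  # keep an empty cluster's centroid
    return new_centroids
```

In the secure setting studied in the paper, the server would not see the individual `sums`/`counts` in the clear; this sketch only shows the aggregation the protocol protects.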

Clustering Federated Learning

Timely Communication in Federated Learning

no code implementations • 31 Dec 2020 • Baturalp Buyukates, Sennur Ulukus

Under the proposed scheme, at each iteration, the parameter server (PS) waits for the first $m$ clients to become available and sends them the current model.
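A minimal sketch of that selection step, assuming hypothetical per-client availability times (the paper models these stochastically; the values and function name below are illustrative):

```python
import heapq

def earliest_m(avail_times, m):
    """Indices of the m clients that become available first, i.e. the
    clients the PS would wait for before starting the iteration."""
    return [i for _, i in
            heapq.nsmallest(m, ((t, i) for i, t in enumerate(avail_times)))]

# Hypothetical availability times for 5 clients; with m = 2 the PS
# proceeds as soon as clients 2 and 0 have checked in.
chosen = earliest_m([1.5, 4.0, 0.7, 3.2, 2.1], 2)
```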

Federated Learning

Gradient Coding with Dynamic Clustering for Straggler Mitigation

no code implementations • 3 Nov 2020 • Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In distributed synchronous gradient descent (GD), the per-iteration completion time is determined by the slowest, straggling workers.
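The paper's dynamic clustering scheme is more elaborate than what fits here; as a static baseline, gradient coding via fractional repetition replicates each block of partial gradients across a group of workers, so the server can recover the full gradient as long as one worker per group responds. All names below are illustrative:

```python
def fractional_repetition_assign(n_workers, n_parts, reps):
    """Fractional-repetition assignment: workers are split into groups of
    size `reps`, and every worker in a group computes the same block of
    partial gradients, so any one of them suffices for that block."""
    groups = n_workers // reps
    per_group = n_parts // groups
    return {w: list(range((w // reps) * per_group, (w // reps + 1) * per_group))
            for w in range(n_workers)}

def recover(worker_sums, assign, n_parts, arrived):
    """Recover the full gradient sum from whichever workers arrived,
    taking one representative per group (straggler mitigation)."""
    covered, total = set(), 0.0
    for w in arrived:
        parts = set(assign[w])
        if not parts & covered:
            total += worker_sums[w]
            covered |= parts
    if len(covered) != n_parts:
        raise RuntimeError("too many stragglers in one group")
    return total

# Toy run: 4 workers, 4 partial gradients, replication 2; worker 1 straggles.
grads = [1.0, 2.0, 3.0, 4.0]
assign = fractional_repetition_assign(4, 4, 2)
sums = {w: sum(grads[p] for p in parts) for w, parts in assign.items()}
full = recover(sums, assign, 4, arrived=[0, 2, 3])
```

The paper's point is that fixing such clusters in advance is suboptimal when straggling behavior varies over time, motivating the dynamic reassignment it proposes.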

Clustering

Age-Based Coded Computation for Bias Reduction in Distributed Learning

no code implementations • 2 Jun 2020 • Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus

To mitigate biased estimators, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time.
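The actual ordering operator in the paper is age-based and code-aware; this minimal, assumed sketch shows only the core rotation idea: if a worker processes its partitions in a cyclically shifted order each iteration, then under partial recovery no single partition is always computed last (and hence most often dropped), so the time-averaged coverage is uniform and the bias shrinks:

```python
def rotated_order(parts, iteration):
    """Cyclically shift a worker's computation order each iteration so
    that no partition is persistently the last (most-often-dropped) one."""
    shift = iteration % len(parts)
    return parts[shift:] + parts[:shift]

# Over len(parts) iterations, every partition appears first exactly once.
orders = [rotated_order([0, 1, 2], t) for t in range(3)]
```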
