Search Results for author: Emre Ozfatura

Found 25 papers, 0 papers with code

Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning

no code implementations 9 Apr 2024 Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz

Hence, inspired by sparse neural networks, we introduce a hybrid sparse Byzantine attack composed of two parts: one is sparse in nature and attacks only certain NN locations with higher sensitivity, while the other is more silent but accumulates over time. Each part ideally targets a different type of defence mechanism, and together they form a strong but imperceptible attack.

Federated Learning, Network Pruning, +1
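
A purely illustrative NumPy sketch of the two-part structure described in the excerpt above, assuming the attacker perturbs a benign model update; the coordinate choice, magnitudes, and the use of update magnitude as a sensitivity proxy are hypothetical and are not the paper's construction.

```python
# Toy sketch: a sparse spike on a few "sensitive" coordinates plus a small
# drift that accumulates over rounds. All constants are illustrative.
import numpy as np

def hybrid_perturbation(benign_update, drift_state, k=10, spike_scale=5.0, drift_scale=0.01):
    """Return a malicious update and the updated accumulated drift."""
    d = benign_update.size
    # Part 1: sparse spike on the k largest-magnitude coordinates of the benign
    # update (a crude stand-in for "higher sensitivity" locations).
    sensitive = np.argsort(np.abs(benign_update))[-k:]
    spike = np.zeros(d)
    spike[sensitive] = -spike_scale * np.sign(benign_update[sensitive])
    # Part 2: small perturbation intended to stay below per-round detection
    # thresholds while accumulating across rounds.
    drift_state = drift_state + drift_scale * np.random.randn(d)
    return benign_update + spike + drift_state, drift_state
```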

Process-and-Forward: Deep Joint Source-Channel Coding Over Cooperative Relay Networks

no code implementations 15 Mar 2024 Chenghong Bian, Yulin Shao, Haotian Wu, Emre Ozfatura, Deniz Gunduz

In the proposed scheme, the source transmits information in blocks, and the relay updates its knowledge about the input signal after each block and generates its own signal to be conveyed to the destination.

Image Compression

Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes

no code implementations 3 Nov 2022 Emre Ozfatura, Yulin Shao, Amin Ghazanfari, Alberto Perotti, Branislav Popovic, Deniz Gunduz

Deep neural network (DNN)-assisted channel coding designs, such as low-complexity neural decoders for existing codes or end-to-end neural-network-based autoencoder designs, have recently gained interest due to their improved performance and flexibility, particularly for communication scenarios in which high-performing structured code designs do not exist.

Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning

no code implementations 21 Aug 2022 Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz

The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to neutralize Byzantine attacks better.

Federated Learning, Image Classification
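
A minimal NumPy sketch of centered clipping as an aggregation rule, assuming the previous aggregate (e.g., the momentum term mentioned above) serves as the reference point; the radius tau and the single clipping pass are illustrative choices rather than the exact configuration studied in the paper.

```python
import numpy as np

def centered_clipping(worker_updates, reference, tau=1.0):
    """Aggregate worker updates by clipping each one around the reference vector."""
    clipped = []
    for x in worker_updates:
        diff = x - reference
        norm = np.linalg.norm(diff)
        scale = min(1.0, tau / norm) if norm > 0 else 1.0
        clipped.append(reference + scale * diff)   # pull outliers back toward the reference
    return np.mean(clipped, axis=0)
```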

All you need is feedback: Communication with block attention feedback codes

no code implementations 19 Jun 2022 Emre Ozfatura, Yulin Shao, Alberto Perotti, Branislav Popovic, Deniz Gunduz

Deep learning-based channel code designs have recently gained interest as an alternative to conventional coding algorithms, particularly for channels for which existing codes do not provide effective solutions.

AttentionCode: Ultra-Reliable Feedback Codes for Short-Packet Communications

no code implementations 30 May 2022 Yulin Shao, Emre Ozfatura, Alberto Perotti, Branislav Popovic, Deniz Gunduz

The training methods can potentially be generalized to other machine learning-aided wireless communication applications.

Semi-Decentralized Federated Learning with Collaborative Relaying

no code implementations 23 May 2022 Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith

We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS).

Federated Learning
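
A minimal, hypothetical NumPy sketch of the collaborative relaying idea described in this entry: each client forwards a combination of its own update and the updates it receives from neighbors, so that updates from clients with unreliable links to the PS can still contribute; the uniform weights and simple averaging at the PS are simplifying assumptions.

```python
import numpy as np

def relayed_messages(updates, neighbors):
    """updates: dict client -> update vector; neighbors: dict client -> list of neighbor ids."""
    messages = {}
    for i, u in updates.items():
        group = [u] + [updates[j] for j in neighbors.get(i, [])]
        messages[i] = np.mean(group, axis=0)   # client i relays its neighborhood average
    return messages

def ps_aggregate(messages):
    """PS averages the relayed messages it receives."""
    return np.mean(list(messages.values()), axis=0)
```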

Federated Spatial Reuse Optimization in Next-Generation Decentralized IEEE 802.11 WLANs

no code implementations 20 Mar 2022 Francesc Wilhelmi, Jernej Hribar, Selim F. Yilmaz, Emre Ozfatura, Kerem Ozfatura, Ozlem Yildiz, Deniz Gündüz, Hao Chen, Xiaoying Ye, Lizhao You, Yulin Shao, Paolo Dini, Boris Bellalta

As wireless standards evolve, more complex functionalities are introduced to address the increasing requirements in terms of throughput, latency, security, and efficiency.

Federated Learning

Less is More: Feature Selection for Adversarial Robustness with Compressive Counter-Adversarial Attacks

no code implementations ICML Workshop AML 2021 Emre Ozfatura, Muhammad Zaid Hameed, Kerem Ozfatura, Deniz Gunduz

Hence, we propose a novel approach to identify the important features by employing counter-adversarial attacks, which highlights the consistency at the penultimate layer with respect to perturbations on input samples.

Adversarial Robustness, Feature Selection

Time-Correlated Sparsification for Communication-Efficient Federated Learning

no code implementations 21 Jan 2021 Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz

Sparse communication is often employed to reduce the communication load, where only a small subset of the model updates are communicated from the clients to the PS.

Federated Learning, Quantization
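
A minimal NumPy sketch of the sparse communication referred to above: only the top-K entries of the local update are transmitted, with the remainder kept in a local residual buffer. The error-feedback residual is a common companion technique and an assumption here, not necessarily the exact scheme of the paper.

```python
import numpy as np

def topk_sparsify(update, k):
    """Return (indices, values) of the k largest-magnitude entries of the update."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def client_step(update, residual, k):
    """Sparsify the error-corrected update and keep the unsent part as residual."""
    corrected = update + residual
    idx, vals = topk_sparsify(corrected, k)
    sent = np.zeros_like(corrected)
    sent[idx] = vals
    return (idx, vals), corrected - sent   # message to the PS, new residual
```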

FedADC: Accelerated Federated Learning with Drift Control

no code implementations 16 Dec 2020 Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz

The core of the FL strategy is the use of stochastic gradient descent (SGD) in a distributed manner.

Federated Learning
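
A minimal NumPy sketch of the distributed-SGD core mentioned above, i.e., a generic FedAvg-style round with local SGD steps and server-side averaging; it does not include the drift-control mechanism that FedADC adds on top, and the client_grads_fn callback is a hypothetical stand-in for computing a stochastic gradient on client data.

```python
import numpy as np

def federated_round(global_w, client_grads_fn, clients, lr=0.1, local_steps=1):
    """One round: clients run local SGD, the server averages their updates."""
    updates = []
    for c in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * client_grads_fn(c, w)   # local SGD step on client c's data
        updates.append(w - global_w)
    return global_w + np.mean(updates, axis=0)   # server-side averaging
```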

Distributed Sparse SGD with Majority Voting

no code implementations 12 Nov 2020 Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz

However, top-K sparsification requires additional communication load to represent the sparsity pattern, and the mismatch between the sparsity patterns of the workers prevents exploitation of efficient communication protocols.
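
A minimal NumPy sketch of voting on sparsity patterns: each worker nominates its local top-K coordinates, and the most-voted coordinates form a common sparsity pattern so that all workers can exchange values over the same index set; the exact voting rule and tie-breaking here are illustrative assumptions.

```python
import numpy as np

def majority_vote_pattern(worker_updates, k):
    """Return a shared set of k coordinates chosen by worker votes."""
    votes = np.zeros(worker_updates[0].size, dtype=int)
    for u in worker_updates:
        votes[np.argsort(np.abs(u))[-k:]] += 1   # each worker votes for its local top-K
    return np.argsort(votes)[-k:]                 # keep the k most-voted coordinates
```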

Gradient Coding with Dynamic Clustering for Straggler Mitigation

no code implementations 3 Nov 2020 Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In distributed synchronous gradient descent (GD), the main performance bottleneck for the per-iteration completion time is the slowest, straggling workers.

Clustering
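
A minimal sketch of replication-based straggler mitigation, a simple special case of gradient coding: each data partition is assigned to s+1 workers via a cyclic pattern, so the full gradient can be recovered as long as at most s workers straggle. The dynamic clustering studied in the paper is not modeled here.

```python
import numpy as np

def cyclic_assignment(n_workers, s):
    """Partition i is computed by workers i, i+1, ..., i+s (mod n_workers)."""
    return {i: [(i + j) % n_workers for j in range(s + 1)] for i in range(n_workers)}

def recover_full_gradient(partial_grads, assignment, finished_workers):
    """partial_grads[w][i]: gradient of partition i computed by worker w."""
    total = None
    for i, holders in assignment.items():
        w = next(w for w in holders if w in finished_workers)   # any non-straggler holding i
        total = partial_grads[w][i] if total is None else total + partial_grads[w][i]
    return total
```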

Communicate to Learn at the Edge

no code implementations 28 Sep 2020 Deniz Gunduz, David Burth Kurka, Mikolaj Jankowski, Mohammad Mohammadi Amiri, Emre Ozfatura, Sreejith Sreekumar

Bringing the success of modern machine learning (ML) techniques to mobile devices can enable many new services and businesses, but also poses significant technical and research challenges.

Coded Distributed Computing with Partial Recovery

no code implementations 4 Jul 2020 Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In this paper, we first introduce a novel coded matrix-vector multiplication scheme, called coded computation with partial recovery (CCPR), which benefits from the advantages of both coded and uncoded computation schemes, and reduces both the computation time and the decoding complexity by allowing a trade-off between the accuracy and the speed of computation.

Distributed Computing
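
A minimal sketch of coded matrix-vector multiplication with a single parity worker, not the CCPR scheme itself: the matrix is split into two row blocks (assumed equal-sized), a third worker computes the parity block, and the product is recoverable from any two of the three workers.

```python
import numpy as np

def encode_tasks(A, x):
    """Split A into two row blocks plus one parity block (assumes an even number of rows)."""
    A1, A2 = np.array_split(A, 2, axis=0)
    return {"w1": A1 @ x, "w2": A2 @ x, "w3": (A1 + A2) @ x}

def decode(results):
    """Reconstruct A @ x from any two worker results."""
    if "w1" in results and "w2" in results:
        return np.concatenate([results["w1"], results["w2"]])
    if "w1" in results:   # w2 straggled: recover A2 @ x from the parity block
        return np.concatenate([results["w1"], results["w3"] - results["w1"]])
    return np.concatenate([results["w3"] - results["w2"], results["w2"]])
```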

Age-Based Coded Computation for Bias Reduction in Distributed Learning

no code implementations 2 Jun 2020 Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus

To mitigate biased estimators, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time.

Straggler-aware Distributed Learning: Communication Computation Latency Trade-off

no code implementations 10 Apr 2020 Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

When gradient descent (GD) is scaled to many parallel workers for large scale machine learning problems, its per-iteration computation time is limited by the straggling workers.

Decentralized SGD with Over-the-Air Computation

no code implementations 6 Mar 2020 Emre Ozfatura, Stefano Rini, Deniz Gunduz

We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets.

Image Classification, Scheduling
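
A minimal NumPy sketch of decentralized SGD over a communication graph: each node takes a local gradient step and then mixes its model with its neighbors through a doubly stochastic mixing matrix W. The over-the-air computation and scheduling aspects of the paper are not modeled, and grads_fn is a hypothetical per-node gradient callback.

```python
import numpy as np

def dsgd_round(models, grads_fn, W, lr=0.1):
    """models: (n_nodes, dim) array; W: (n_nodes, n_nodes) doubly stochastic mixing matrix."""
    grads = np.stack([grads_fn(i, models[i]) for i in range(len(models))])
    local = models - lr * grads   # local SGD step at every node
    return W @ local              # each node averages with its neighbors' models
```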

Hierarchical Federated Learning Across Heterogeneous Cellular Networks

no code implementations 5 Sep 2019 Mehdi Salehi Heydar Abad, Emre Ozfatura, Deniz Gunduz, Ozgur Ercetin

We study collaborative machine learning (ML) across wireless devices, each with its own local dataset.

Federated Learning

Gradient Coding with Clustering and Multi-message Communication

no code implementations 5 Mar 2019 Emre Ozfatura, Deniz Gunduz, Sennur Ulukus

Gradient descent (GD) methods are commonly employed in machine learning problems to optimize the parameters of the model in an iterative fashion.

Clustering, Distributed Computing
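
A minimal sketch of the iterative GD update the excerpt refers to; in the distributed setting of the paper, the sum of partial gradients over data partitions is computed by multiple workers, which the single-machine loop below does not capture.

```python
import numpy as np

def gradient_descent(w0, grad_fn, lr=0.1, iters=100):
    """Plain full-batch gradient descent on a single machine."""
    w = w0.copy()
    for _ in range(iters):
        w -= lr * grad_fn(w)   # iterative parameter update
    return w
```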

Distributed Gradient Descent with Coded Partial Gradient Computations

no code implementations 22 Nov 2018 Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

Coded computation techniques provide robustness against straggling servers in distributed computing, with the following limitations: First, they increase decoding complexity.

Distributed Computing

Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers

no code implementations 7 Aug 2018 Emre Ozfatura, Deniz Gunduz, Sennur Ulukus

In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks.
