no code implementations • 3 Nov 2022 • Emre Ozfatura, Yulin Shao, Amin Ghazanfari, Alberto Perotti, Branislav Popovic, Deniz Gunduz
Deep neural network (DNN)-assisted channel coding designs, such as low-complexity neural decoders for existing codes or end-to-end neural-network-based auto-encoder designs, have recently gained interest due to their improved performance and flexibility, particularly for communication scenarios in which high-performing structured code designs do not exist.
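As a rough illustration of the end-to-end auto-encoder idea (a minimal sketch, not the architecture proposed in this paper; the block length, layer sizes, and AWGN channel model are assumptions):

```python
# Toy channel auto-encoder: k information bits mapped to n real channel uses over AWGN.
import torch
import torch.nn as nn

k, n, snr_db = 4, 8, 4.0
noise_std = 10 ** (-snr_db / 20)  # unit signal power assumed after normalization

encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(1000):
    bits = torch.randint(0, 2, (256, k)).float()
    x = encoder(2 * bits - 1)                    # map bits to channel symbols
    x = x / x.pow(2).mean().sqrt()               # average power normalization
    y = x + noise_std * torch.randn_like(x)      # AWGN channel
    logits = decoder(y)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, bits)
    opt.zero_grad(); loss.backward(); opt.step()
```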
no code implementations • 21 Aug 2022 • Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz
The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks.
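For intuition, a minimal sketch of centered-clipping aggregation, assuming per-worker momentum vectors, the previous aggregate as the reference point, and an illustrative clipping radius (variable names are not from the paper):

```python
import torch

def centered_clipping(momenta, v_prev, tau=10.0, iters=1):
    """One aggregation round of centered clipping (illustrative sketch).

    momenta: list of per-worker momentum vectors (some possibly Byzantine)
    v_prev:  aggregate from the previous iteration, used as the reference point
    tau:     clipping radius
    """
    v = v_prev.clone()
    for _ in range(iters):
        deltas = []
        for m in momenta:
            d = m - v
            # shrink deviations larger than tau, limiting any single worker's influence
            scale = torch.clamp(tau / (d.norm() + 1e-12), max=1.0)
            deltas.append(scale * d)
        v = v + torch.stack(deltas).mean(dim=0)
    return v
```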
no code implementations • 19 Jun 2022 • Emre Ozfatura, Yulin Shao, Alberto Perotti, Branislav Popovic, Deniz Gunduz
Deep learning based channel code designs have recently gained interest as an alternative to conventional coding algorithms, particularly for channels for which existing codes do not provide effective solutions.
no code implementations • 30 May 2022 • Yulin Shao, Emre Ozfatura, Alberto Perotti, Branislav Popovic, Deniz Gunduz
The training methods can potentially be generalized to other machine learning-aided wireless communication applications.
no code implementations • 23 May 2022 • Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith
We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS).
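A toy sketch of the relaying idea: clients whose link to the PS is down have a neighbor forward their update, and the PS averages whatever arrives. This is a simplified stand-in, not the paper's exact algorithm; the averaging rule and connectivity model are assumptions.

```python
import numpy as np

def relay_round(updates, reach_ps, neighbors):
    """Collect client updates at the PS, using neighbor relaying for blocked clients.

    updates:   dict client_id -> local model update (np.ndarray)
    reach_ps:  dict client_id -> True if the client's own link to the PS is up
    neighbors: dict client_id -> list of neighboring client ids
    """
    received = {}
    for cid, upd in updates.items():
        if reach_ps[cid]:
            received[cid] = upd                      # direct upload
        else:
            for nid in neighbors[cid]:
                if reach_ps.get(nid, False):
                    received[cid] = upd              # a connected neighbor relays it
                    break
    if not received:
        return None
    return np.mean(list(received.values()), axis=0)  # PS averages whatever arrived
```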
no code implementations • 20 Mar 2022 • Francesc Wilhelmi, Jernej Hribar, Selim F. Yilmaz, Emre Ozfatura, Kerem Ozfatura, Ozlem Yildiz, Deniz Gündüz, Hao Chen, Xiaoying Ye, Lizhao You, Yulin Shao, Paolo Dini, Boris Bellalta
As wireless standards evolve, more complex functionalities are introduced to address the increasing requirements in terms of throughput, latency, security, and efficiency.
no code implementations • 24 Feb 2022 • Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith
Intermittent connectivity of clients to the parameter server (PS) is a major bottleneck in federated edge learning frameworks.
no code implementations • 7 Dec 2021 • Emre Ozfatura, Deniz Gunduz, H. Vincent Poor
This is partly due to the communication bottleneck limiting the overall computation speed.
no code implementations • ICML Workshop AML 2021 • Emre Ozfatura, Muhammad Zaid Hameed, Kerem Ozfatura, Deniz Gunduz
Hence, we propose a novel approach to identify the important features by employing counter-adversarial attacks, which highlights the consistency of the penultimate-layer representations with respect to perturbations of the input samples.
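A possible reading of this idea, sketched below under explicit assumptions: the "counter-adversarial" perturbation is taken to be a loss-decreasing step (the opposite of an adversarial step), and features are ranked by how little their penultimate-layer activations vary across perturbed copies. The `penultimate` hook and all hyperparameters are hypothetical.

```python
import torch

def consistent_features(model, penultimate, x, y, eps=0.01, steps=5, n_copies=8):
    """Rank penultimate-layer features by their consistency under input perturbations
    (illustrative sketch; `penultimate(x)` is assumed to return the penultimate-layer
    activations of `model`)."""
    acts = []
    for _ in range(n_copies):
        x_p = (x + eps * torch.randn_like(x)).detach().requires_grad_(True)
        for _ in range(steps):
            loss = torch.nn.functional.cross_entropy(model(x_p), y)
            grad, = torch.autograd.grad(loss, x_p)
            # counter-adversarial step: move so as to decrease the loss
            x_p = (x_p - eps * grad.sign()).detach().requires_grad_(True)
        acts.append(penultimate(x_p).detach())
    acts = torch.stack(acts)                      # (n_copies, batch, features)
    consistency = -acts.std(dim=0).mean(dim=0)    # low variance => consistent feature
    return consistency.argsort(descending=True)   # most consistent features first
```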
no code implementations • 1 Mar 2021 • Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz
Distributed implementations are crucial in speeding up large scale machine learning applications.
no code implementations • 21 Jan 2021 • Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz
Sparse communication is often employed to reduce the communication load, where only a small subset of the model updates is communicated from the clients to the PS.
no code implementations • 16 Dec 2020 • Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz
The core of the FL strategy is the use of stochastic gradient descent (SGD) in a distributed manner.
no code implementations • 12 Nov 2020 • Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz
However, top-K sparsification requires additional communication load to represent the sparsity pattern, and the mismatch between the sparsity patterns of the workers prevents exploitation of efficient communication protocols.
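To make the index-overhead point concrete, a minimal top-K sparsification sketch (the encoding of the sparsity pattern as raw indices is an assumption for illustration):

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a flattened model update.

    Returns the (indices, values) pair to be communicated; the indices encode the
    sparsity pattern and are pure overhead on top of the values.  Since each worker
    typically selects a different index set, the server cannot simply add the
    compressed payloads without first aligning the patterns.
    """
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def desparsify(idx, vals, size):
    out = np.zeros(size)
    out[idx] = vals
    return out
```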
no code implementations • 3 Nov 2020 • Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz
In distributed synchronous gradient descent (GD), the main performance bottleneck for the per-iteration completion time is the slowest, straggling workers.
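As a toy illustration of why the slowest worker dominates, a short simulation assuming i.i.d. exponential computation delays (the delay model is an assumption, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_iters = 20, 10_000
delays = rng.exponential(scale=1.0, size=(n_iters, n_workers))

# synchronous GD must wait for all workers, so the iteration time is the maximum delay
print("mean single-worker time :", delays.mean())              # ~1.0
print("mean per-iteration time :", delays.max(axis=1).mean())  # ~3.6 for 20 workers
```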
no code implementations • 28 Sep 2020 • Deniz Gunduz, David Burth Kurka, Mikolaj Jankowski, Mohammad Mohammadi Amiri, Emre Ozfatura, Sreejith Sreekumar
Bringing the success of modern machine learning (ML) techniques to mobile devices can enable many new services and businesses, but also poses significant technical and research challenges.
no code implementations • 4 Jul 2020 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz
In this paper, we first introduce a novel coded matrix-vector multiplication scheme, called coded computation with partial recovery (CCPR), which benefits from the advantages of both coded and uncoded computation schemes, and reduces both the computation time and the decoding complexity by allowing a trade-off between the accuracy and the speed of computation.
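For orientation only, a toy single-parity example of coded matrix-vector multiplication (not the CCPR scheme itself): the rows of A are split into blocks, one parity block is added, and the full product is recovered as long as at most one block's result is missing; with more blocks missing, only a partial result would be available.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

blocks = np.split(A, 3)          # 3 systematic row blocks
parity = sum(blocks)             # 1 parity block (single-parity code)
tasks = blocks + [parity]        # 4 workers, each computes its block times x

results = [B @ x for B in tasks]
results[1] = None                # pretend worker 1 straggles

# decode: the missing systematic result equals the parity result minus the others
missing = 1
recovered = results[3] - results[0] - results[2]
y = np.concatenate([recovered if i == missing else results[i] for i in range(3)])
assert np.allclose(y, A @ x)
```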
no code implementations • 2 Jun 2020 • Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus
To mitigate biased estimators, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time.
no code implementations • 10 Apr 2020 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz
When gradient descent (GD) is scaled to many parallel workers for large scale machine learning problems, its per-iteration computation time is limited by the straggling workers.
no code implementations • 6 Mar 2020 • Emre Ozfatura, Stefano Rini, Deniz Gunduz
We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets.
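A minimal sketch of the generic DSGD update, assuming a doubly-stochastic mixing matrix W defined over the network graph (the wireless-specific aspects of the paper are not modeled here):

```python
import numpy as np

def dsgd_step(models, grads, W, lr):
    """One DSGD iteration: each node averages its neighbors' models (a consensus step
    weighted by the mixing matrix W) and then takes a local stochastic gradient step.

    models: (n_nodes, dim) current local models
    grads:  (n_nodes, dim) local stochastic gradients evaluated at `models`
    W:      (n_nodes, n_nodes) doubly-stochastic mixing matrix of the graph
    """
    return W @ models - lr * grads
```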
no code implementations • 5 Sep 2019 • Mehdi Salehi Heydar Abad, Emre Ozfatura, Deniz Gunduz, Ozgur Ercetin
We study collaborative machine learning (ML) across wireless devices, each with its own local dataset.
no code implementations • 5 Mar 2019 • Emre Ozfatura, Deniz Gunduz, Sennur Ulukus
Gradient descent (GD) methods are commonly employed in machine learning problems to optimize the parameters of the model in an iterative fashion.
no code implementations • 22 Nov 2018 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz
Coded computation techniques provide robustness against straggling servers in distributed computing, with the following limitations: First, they increase decoding complexity.
no code implementations • 7 Aug 2018 • Emre Ozfatura, Deniz Gunduz, Sennur Ulukus
In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks.