no code implementations • 8 Nov 2023 • Mohammad Bokaei, Saeed Razavikia, Stefano Rini, Arash Amini, Hamid Behrouzi
In this paper, we investigate the problem of recovering the frequency components of a mixture of $K$ complex sinusoids from a random subset of $N$ equally-spaced time-domain samples.
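To illustrate the setup, here is a minimal sketch that recovers two on-grid frequencies from a random half of the samples using a greedy (orthogonal-matching-pursuit-style) search over a Fourier dictionary; the sizes, frequencies, and the OMP-based recovery are assumptions for the example, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K = 2 complex sinusoids, N = 64 time samples,
# of which only M = 32 randomly chosen samples are observed.
N, K, M = 64, 2, 32
true_freqs = np.array([0.125, 0.3125])  # cycles/sample, on-grid for simplicity
n = np.arange(N)
x = sum(np.exp(2j * np.pi * f * n) for f in true_freqs)

idx = np.sort(rng.choice(N, size=M, replace=False))  # random subset of samples
y = x[idx]

# Greedy frequency recovery over a DFT grid (orthogonal matching pursuit).
grid = np.arange(N) / N                              # candidate frequencies
A = np.exp(2j * np.pi * idx[:, None] * grid[None, :]) / np.sqrt(M)

residual, support = y.copy(), []
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

est_freqs = np.sort(grid[support])
print(est_freqs)  # should recover [0.125, 0.3125]
```

With well-separated on-grid frequencies and half the samples retained, the greedy search locates both components; off-grid frequencies would require the more refined machinery studied in the paper.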
no code implementations • 23 Jan 2023 • Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen
This paper proposes the ``\emph{${\bf M}$-magnitude weighted $L_{\bf 2}$ distortion + $\bf 2$ degrees of freedom}'' (M22) algorithm, a rate-distortion-inspired approach to gradient compression for federated training of deep neural networks (DNNs).
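A magnitude-weighted $L_2$ distortion, as the M22 name suggests, can be sketched as follows; the exact exponent and weighting used by the paper are not given here, so this form is an assumption for illustration:

```python
import numpy as np

def m_weighted_l2(g, q, M=1.0):
    """Sketch of an M-magnitude-weighted L2 distortion between a gradient
    vector g and its quantized version q:
        d_M(g, q) = sum_i |g_i|**M * (g_i - q_i)**2,
    so entries with large magnitude are penalized more heavily for
    quantization error. The exponent M here is an illustrative assumption."""
    g, q = np.asarray(g, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(np.abs(g) ** M * (g - q) ** 2))

# With M = 0 this reduces to the plain (unweighted) L2 distortion.
print(m_weighted_l2([2.0], [1.0], M=1.0))  # 2.0
```

Such a distortion can then be plugged into quantizer design in place of plain squared error, steering the bit budget toward large-magnitude gradient entries.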
no code implementations • 12 Nov 2022 • Samir M. Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, Stefano Rini
Among these properties, the solution to this problem, if it exists, is shown to be a unique probability measure, often mutually absolutely continuous with the reference measure.
no code implementations • 17 May 2022 • Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini
In this paper, we study the compression of a target two-layer neural network with $N$ nodes into a compressed network with $M < N$ nodes.
no code implementations • 18 Apr 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini
Namely: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction.
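A minimal sketch of this three-step pipeline, assuming float16 as the quantization target and zlib as a stand-in lossless coder (the paper's actual choices may differ):

```python
import zlib
import numpy as np

def compress_gradient(grad, error_mem):
    """One round of the three-step pipeline: (i) quantize by casting
    float32 -> float16, (ii) losslessly compress the quantized bytes
    (zlib as a stand-in coder), (iii) keep the quantization error in
    memory so it is added back, and thus corrected, in later rounds."""
    corrected = grad + error_mem                  # (iii) error feedback
    quantized = corrected.astype(np.float16)      # (i)  float conversion
    payload = zlib.compress(quantized.tobytes())  # (ii) lossless coding
    new_error_mem = corrected - quantized.astype(np.float32)
    return payload, new_error_mem

def decompress_gradient(payload, shape):
    raw = zlib.decompress(payload)
    return np.frombuffer(raw, dtype=np.float16).astype(np.float32).reshape(shape)

g = np.array([0.1, -0.25, 3.0], dtype=np.float32)
payload, err = compress_gradient(g, np.zeros_like(g))
print(decompress_gradient(payload, g.shape))
```

The reconstructed gradient plus the retained error memory exactly equals the original, which is what makes the error-correction step unbiased over rounds.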
no code implementations • 22 Mar 2022 • Farhad Mirkarimi, Stefano Rini, Nariman Farsad
These estimators are referred to as neural mutual information estimators (NMIEs). NMIEs differ from other approaches in that they are data-driven estimators.
1 code implementation • 17 Mar 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini
In this paper, we introduce a novel algorithm, $\mathsf{CO}_3$, for communication-efficient distributed Deep Neural Network (DNN) training.
no code implementations • 21 Feb 2022 • Mohammad Bokaei, Saeed Razavikia, Arash Amini, Stefano Rini
In this paper, we study the problem of estimating the direction of arrival (DOA) using a sparsely sampled uniform linear array (ULA).
no code implementations • 9 Feb 2022 • Samir M. Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, Stefano Rini
The optimality and sensitivity of the empirical risk minimization problem with relative entropy regularization (ERM-RER) are investigated for the case in which the reference is a sigma-finite measure instead of a probability measure.
1 code implementation • 6 Feb 2022 • Sadaf Salehkalaibar, Stefano Rini
Under this assumption on the DNN gradient distribution, we propose a class of distortion measures to aid the design of quantizers for the compression of the model updates.
1 code implementation • 15 Nov 2021 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini
In this paper we argue that, for some networks of practical interest, the gradient entries can be well modelled as having a generalized normal (GenNorm) distribution.
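One way to check such a model with numpy alone is the classical moment-ratio estimator of the generalized normal shape parameter (Sharifi and Leon-Garcia style); the estimator choice and the Laplace-distributed stand-in data are assumptions for this sketch:

```python
import math
import numpy as np

def gennorm_shape(x):
    """Moment-matching estimate of the GenNorm shape beta: for GenNorm,
    E[x^2] / (E|x|)^2 = Gamma(1/b) * Gamma(3/b) / Gamma(2/b)^2, which is
    strictly decreasing in b, so it can be inverted by bisection.
    beta = 2 is Gaussian, beta = 1 is Laplace."""
    x = np.asarray(x, dtype=float)
    r = np.mean(x**2) / np.mean(np.abs(x))**2
    def rho(b):  # theoretical moment ratio for GenNorm with shape b
        return math.gamma(1/b) * math.gamma(3/b) / math.gamma(2/b)**2
    lo, hi = 0.2, 10.0
    for _ in range(80):  # bisection on the decreasing function rho
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > r else (lo, mid)
    return 0.5 * (lo + hi)

# Stand-in "gradient" sample; real DNN gradients would come from training.
rng = np.random.default_rng(0)
beta_hat = gennorm_shape(rng.laplace(size=50000))
print(beta_hat)  # close to 1 for Laplace-distributed data
```

Applied to actual gradient entries, a fitted shape well below 2 would support the GenNorm (heavier-than-Gaussian-tailed) model argued for in the paper.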
1 code implementation • 14 Nov 2021 • Farhad Mirkarimi, Stefano Rini, Nariman Farsad
Recently, several methods have been proposed for estimating the mutual information from sample data using deep neural networks, without knowing the closed-form distribution of the data.
1 code implementation • 18 Oct 2021 • Eduin E. Hernandez, Stefano Rini, Tolga M. Duman
In order to correct for the inherent bias in this approximation, the algorithm retains in memory an accumulation of the outer products that are not used in the approximation.
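The bias-correction idea can be sketched with a rank-limited approximation that keeps the discarded part in memory; the SVD-based truncation below is an illustrative stand-in, not the paper's exact outer-product scheme:

```python
import numpy as np

def low_rank_with_memory(G, memory, rank=1):
    """Approximate the (error-compensated) gradient matrix G + memory by
    its top-`rank` SVD terms, and return the discarded remainder as the
    new memory, so the approximation bias is corrected in later rounds.
    Illustrative sketch under the stated assumptions."""
    target = G + memory
    U, s, Vt = np.linalg.svd(target, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return approx, target - approx  # remainder accumulates in memory

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 3))
approx, mem = low_rank_with_memory(G, np.zeros_like(G), rank=1)
```

By construction, the transmitted approximation plus the retained memory always equals the compensated gradient, which is the property that removes the long-run bias.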
no code implementations • 1 Jun 2021 • Amir Sonee, Stefano Rini, Yu-Chih Huang
This paper investigates the role of dimensionality reduction in efficient communication and differential privacy (DP) of the local datasets at the remote users for over-the-air computation (AirComp)-based federated learning (FL) model.
no code implementations • 7 Mar 2021 • Ruiyang Song, Stefano Rini, Kuang Xu
Causal bandit is a nascent learning model where an agent sequentially experiments in a causal network of variables, in order to identify the reward-maximizing intervention.
1 code implementation • 4 Mar 2021 • Busra Tegin, Eduin. E. Hernandez, Stefano Rini, Tolga M. Duman
Large-scale machine learning and data mining methods routinely distribute computations across multiple agents to parallelize processing.
no code implementations • 15 Feb 2021 • Vamsi K. Amalladinne, Allen Hao, Stefano Rini, Jean-Francois Chamberland
Unsourced random access (URA) is a recently proposed communication paradigm attuned to machine-driven data transfers.
no code implementations • 22 Nov 2020 • Allen Hao, Stefano Rini, Vamsi Amalladinne, Asit Kumar Pradhan, Jean-Francois Chamberland
In the cluster with higher power, devices transmit using a two-layer superposition modulation.
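A two-layer superposition can be sketched as a power-weighted sum of two modulated streams; the BPSK mapping and the 80/20 power split below are assumptions for illustration:

```python
import numpy as np

def superpose(bits_hi, bits_lo, power_split=0.8):
    """Two-layer superposition modulation sketch: the high-power layer
    gets a fraction `power_split` of unit transmit power, the low-power
    layer gets the rest. Both layers use BPSK (0 -> +1, 1 -> -1) here;
    the mapping and split are illustrative assumptions."""
    s_hi = 1 - 2 * np.asarray(bits_hi, dtype=float)
    s_lo = 1 - 2 * np.asarray(bits_lo, dtype=float)
    return np.sqrt(power_split) * s_hi + np.sqrt(1 - power_split) * s_lo

print(superpose([0], [1], power_split=0.8))
```

A receiver would first decode the high-power layer treating the low-power layer as noise, subtract it, and then decode the low-power layer (successive interference cancellation).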
no code implementations • 15 Nov 2020 • Stefano Rini, Hirotsugu Hiramatsu
Time-resolved spectral techniques serve as an important analysis tool in many contexts, from physical chemistry to biomedicine.
no code implementations • 15 May 2020 • Amir Sonee, Stefano Rini
Accordingly, the objective of the clients is to minimize the training loss subject to (i) rate constraints for reliable communication over the MAC and (ii) DP constraint over the local datasets.
1 code implementation • 14 May 2020 • Ali Khajegili Mirabadi, Stefano Rini
The IR is a feature of a single image, while the MIR describes features common across two or more images. We begin by introducing the IR and the MIR, motivating these features in an information-theoretic context as the ratio of the self-information of an intensity level over the information contained in the pixels of the same intensity.
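The per-level self-information underlying these features can be computed from the image histogram; the sketch below shows that building block only (the IR's exact normalization is not reproduced here and would be an assumption):

```python
import numpy as np

def self_information_map(img):
    """Map each pixel to the self-information -log2 p(l) of its intensity
    level l, with p(l) estimated from the empirical image histogram.
    This is the histogram-based building block for IR-style features."""
    levels, inverse, counts = np.unique(
        img.ravel(), return_inverse=True, return_counts=True
    )
    p = counts / counts.sum()                 # empirical level probabilities
    return (-np.log2(p))[inverse].reshape(img.shape)

img = np.array([[0, 0], [1, 1]])
print(self_information_map(img))  # each level has p = 0.5, so 1 bit per pixel
```

Rare intensity levels map to large self-information values, which is what makes such features informative for retrieval and comparison across images.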
no code implementations • 6 Mar 2020 • Emre Ozfatura, Stefano Rini, Deniz Gunduz
We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets.
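One synchronous DSGD round can be sketched as a neighbor-mixing step followed by a local gradient step; the uniform mixing matrix and step size below are assumptions, and the wireless-channel effects studied in the paper are not modeled:

```python
import numpy as np

def dsgd_step(params, grads, W, lr=0.1):
    """One synchronous DSGD round: each node averages its neighbors'
    parameters using the doubly stochastic mixing matrix W (encoding the
    network topology), then takes a local gradient step on its own data."""
    mixed = W @ params          # consensus/averaging over neighbors
    return mixed - lr * grads   # local SGD update

# Three fully connected nodes with uniform mixing and zero gradients:
W = np.full((3, 3), 1 / 3)
p = np.array([0.0, 3.0, 6.0])
print(dsgd_step(p, np.zeros(3), W))  # all nodes move to the average, 3.0
```

With zero gradients the step is pure consensus; with heterogeneous local gradients, the mixing keeps the nodes' models close while each descends on its own dataset.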
no code implementations • 12 Jan 2020 • Mohammad-Amin Charusaie, Arash Amini, Stefano Rini
When considering discrete-domain moving-average processes with non-Gaussian excitation noise, the above results allow us to evaluate the block-average RID and DRB, as well as to determine a relationship between these parameters and other existing compressibility measures.
no code implementations • 29 Oct 2018 • Milind Rao, Stefano Rini, Andrea Goldsmith
In this paper, a distributed convex optimization algorithm, termed \emph{distributed coordinate dual averaging} (DCDA) algorithm, is proposed.