
no code implementations • 17 May 2022 • Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini

In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M<N nodes.
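One common baseline for shrinking a two-layer network from N to M hidden nodes is to merge similar neurons. The sketch below groups hidden units by a crude k-means on their input weights and sums the output weights within each group; it is only an illustrative baseline under our own assumptions, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, d = 64, 8, 10
W1 = rng.standard_normal((N, d))     # input weights of the N hidden nodes
w2 = rng.standard_normal(N)          # output weights

def net(x, W1, w2):
    # ReLU two-layer network: w2^T max(W1 x, 0)
    return w2 @ np.maximum(W1 @ x, 0.0)

# crude k-means on the rows of W1 (each row = one hidden unit)
centers = W1[rng.choice(N, M, replace=False)]
for _ in range(20):
    labels = np.argmin(((W1[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(M):
        if np.any(labels == j):
            centers[j] = W1[labels == j].mean(0)

# merged output weights: sum within each cluster
w2_small = np.array([w2[labels == j].sum() for j in range(M)])

x = rng.standard_normal(d)
full, small = net(x, W1, w2), net(x, centers, w2_small)
```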

no code implementations • 18 Apr 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we introduce $\mathsf{CO}_3$, an algorithm for communication-efficient federated Deep Neural Network (DNN) training. $\mathsf{CO}_3$ takes its name from the three processing steps applied to reduce the communication load when transmitting the local gradients from the remote users to the Parameter Server, namely: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction. We carefully design each of these steps so as to minimize the loss in distributed DNN training when the communication overhead is fixed. In particular, in the design of steps (i) and (ii), we adopt the assumption that DNN gradients follow a generalized normal distribution; this assumption is validated numerically in the paper.
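The three steps above can be sketched as follows. This is only an illustrative stand-in under our own choices (fp32-to-fp16 casting for the floating-point conversion, zlib for the lossless coder, and a simple error-feedback memory for the correction step), not the paper's exact design.

```python
import zlib
import numpy as np

def co3_compress(grad, err_memory):
    """One round of a CO3-style pipeline (illustrative sketch):
    (iii) add back the quantization error carried over from the last round,
    (i)   quantize by casting fp32 -> fp16 (a stand-in for the paper's
          floating-point conversion),
    (ii)  losslessly compress the quantized bytes with zlib."""
    corrected = grad + err_memory                 # (iii) error correction
    quantized = corrected.astype(np.float16)      # (i)  lossy float conversion
    new_err = corrected - quantized.astype(np.float32)
    payload = zlib.compress(quantized.tobytes())  # (ii) lossless coding
    return payload, new_err

def co3_decompress(payload, shape):
    raw = zlib.decompress(payload)
    return np.frombuffer(raw, dtype=np.float16).astype(np.float32).reshape(shape)

rng = np.random.default_rng(0)
g = rng.standard_normal(1024).astype(np.float32)
mem = np.zeros_like(g)
payload, mem = co3_compress(g, mem)   # payload is what would be transmitted
g_hat = co3_decompress(payload, g.shape)
```

The error memory makes the scheme unbiased over time: whatever is lost to quantization in one round is re-injected into the next round's gradient.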

no code implementations • 22 Mar 2022 • Farhad Mirkarimi, Stefano Rini

For the NMIE above, capacity estimation relies on two deep neural networks (DNNs): (i) one that generates samples from a learned distribution, and (ii) one that estimates the MI between the channel input and the channel output.
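Neural MI estimators of this kind typically optimize the Donsker-Varadhan lower bound, I(X;Y) >= E_p(x,y)[f] - log E_p(x)p(y)[e^f], over a neural critic f. As a sketch of that machinery (without training a network), the snippet below evaluates the bound for correlated Gaussians using the analytically optimal critic, the log density ratio, which a trained DNN would approximate; the setup and variable names are ours, not the paper's.

```python
import numpy as np

def dv_bound(f_joint, f_marg):
    """Donsker-Varadhan lower bound on mutual information, evaluated
    from critic scores on joint samples and on product-of-marginals
    (shuffled) samples."""
    return f_joint.mean() - np.log(np.exp(f_marg).mean())

rng = np.random.default_rng(1)
rho, n = 0.8, 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y_sh = rng.permutation(y)   # shuffling destroys the dependence

def critic(x, y):
    # log p(x,y)/(p(x)p(y)) for jointly Gaussian (x, y) with correlation
    # rho -- the optimal DV critic, which a neural critic would learn.
    return (-0.5 * np.log(1 - rho**2)
            - (x**2 - 2*rho*x*y + y**2) / (2 * (1 - rho**2))
            + (x**2 + y**2) / 2)

est = dv_bound(critic(x, y), critic(x, y_sh))
true_mi = -0.5 * np.log(1 - rho**2)   # closed form for the Gaussian case
```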

1 code implementation • 17 Mar 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we introduce a novel algorithm, $\mathsf{CO}_3$, for communication-efficient distributed Deep Neural Network (DNN) training.

no code implementations • 21 Feb 2022 • Mohammad Bokaei, Saeed Razavikia, Arash Amini, Stefano Rini

In this paper, we study the problem of estimating the direction of arrival (DOA) using a sparsely sampled uniform linear array (ULA).

no code implementations • 9 Feb 2022 • Samir M. Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, Stefano Rini

The optimality and sensitivity of the empirical risk minimization problem with relative entropy regularization (ERM-RER) are investigated for the case in which the reference is a sigma-finite measure instead of a probability measure.

1 code implementation • 6 Feb 2022 • Sadaf Salehkalaibar, Stefano Rini

Under this assumption on the DNN gradient distribution, we propose a class of distortion measures to aid the design of quantizers for the compression of the model updates.

1 code implementation • 15 Nov 2021 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we argue that, for some networks of practical interest, the gradient entries can be well modelled as having a generalized normal (GenNorm) distribution.
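This kind of claim can be checked by maximum-likelihood fitting: the generalized normal family includes the Gaussian (shape beta = 2) and the Laplace (beta = 1), so the fitted shape parameter indicates how far the entries are from normality. A minimal sketch using scipy's `gennorm` on synthetic "gradient-like" data (the data itself is ours, for illustration only):

```python
import numpy as np
from scipy import stats

# Draw heavy-ish tailed samples standing in for DNN gradient entries.
rng = np.random.default_rng(2)
entries = stats.gennorm.rvs(beta=1.2, size=20_000, random_state=rng)

# MLE fit of the generalized normal; beta_hat near 2 would indicate a
# Gaussian, near 1 a Laplace.
beta_hat, loc_hat, scale_hat = stats.gennorm.fit(entries)
```

On real gradients one would apply the same `fit` call to the flattened gradient tensor of a layer.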

1 code implementation • 14 Nov 2021 • Farhad Mirkarimi, Stefano Rini, Nariman Farsad

Recently, several methods have been proposed for estimating the mutual information from sample data using deep neural networks and without the knowing closed form distribution of the data.

1 code implementation • 18 Oct 2021 • Eduin E. Hernandez, Stefano Rini, Tolga M. Duman

In order to correct for the inherent bias in this approximation, the algorithm retains in memory an accumulation of the outer products that are not used in the approximation.
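The bias-correcting accumulation described above can be sketched with a simple error-feedback loop: approximate a matrix by its top-r outer products (truncated SVD) and keep the discarded remainder in memory, to be added back before the next approximation. This is an illustrative scheme under our own choices, not necessarily the paper's exact construction.

```python
import numpy as np

def rank_r_with_memory(mat, memory, r=2):
    """Approximate mat + memory by its top-r outer products and return
    the approximation together with the residual, which is carried in
    memory to correct the bias of future approximations."""
    target = mat + memory
    u, s, vt = np.linalg.svd(target, full_matrices=False)
    approx = (u[:, :r] * s[:r]) @ vt[:r]   # sum of r outer products
    return approx, target - approx         # residual -> error memory

rng = np.random.default_rng(3)
m = rng.standard_normal((32, 32))
mem = np.zeros_like(m)
total_sent = np.zeros_like(m)
for _ in range(50):
    approx, mem = rank_r_with_memory(m, mem)
    total_sent += approx
avg_sent = total_sent / 50   # approaches m as the memory stays bounded
```

By telescoping, the sum of the transmitted approximations plus the final memory equals the sum of the true matrices, which is exactly the bias-correction property.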

no code implementations • 1 Jun 2021 • Amir Sonee, Stefano Rini, Yu-Chih Huang

This paper investigates the role of dimensionality reduction in achieving efficient communication and differential privacy (DP) of the local datasets at the remote users for an over-the-air computation (AirComp)-based federated learning (FL) model.

no code implementations • 7 Mar 2021 • Ruiyang Song, Stefano Rini, Kuang Xu

The causal bandit is a nascent learning model in which an agent sequentially experiments in a causal network of variables in order to identify the reward-maximizing intervention.

1 code implementation • 4 Mar 2021 • Busra Tegin, Eduin. E. Hernandez, Stefano Rini, Tolga M. Duman

Large-scale machine learning and data mining methods routinely distribute computations across multiple agents to parallelize processing.

Image Classification • Distributed, Parallel, and Cluster Computing • Information Theory

no code implementations • 15 Feb 2021 • Vamsi K. Amalladinne, Allen Hao, Stefano Rini, Jean-Francois Chamberland

Unsourced random access (URA) is a recently proposed communication paradigm attuned to machine-driven data transfers.

Information Theory

no code implementations • 22 Nov 2020 • Allen Hao, Stefano Rini, Vamsi Amalladinne, Asit Kumar Pradhan, Jean-Francois Chamberland

In the cluster with higher power, devices transmit using a two-layer superposition modulation.

Information Theory

no code implementations • 15 Nov 2020 • Stefano Rini, Hirotsugu Hiramatsu

Time-resolved spectral techniques are an important analysis tool in many contexts, from physical chemistry to biomedicine.

no code implementations • 15 May 2020 • Amir Sonee, Stefano Rini

Accordingly, the objective of the clients is to minimize the training loss subject to (i) rate constraints for reliable communication over the MAC and (ii) DP constraint over the local datasets.
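The two constraints above can be sketched in a single local update: clip the gradient and add Gaussian noise for DP, then uniformly quantize to a fixed number of bits per entry to respect the rate constraint. The parameter names and the specific mechanism (clipping plus Gaussian noise) are our illustrative choices, not necessarily the paper's.

```python
import numpy as np

def dp_quantized_update(grad, clip=1.0, sigma=1.0, bits=4, rng=None):
    """Clip-and-noise for differential privacy, followed by uniform
    quantization to `bits` bits per entry for rate-limited transmission
    (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / norm)      # bound the sensitivity
    noisy = clipped + rng.normal(0.0, sigma * clip, grad.shape)  # DP noise
    levels = 2 ** bits                          # rate constraint: bits/entry
    lo, hi = noisy.min(), noisy.max()
    q = np.round((noisy - lo) / (hi - lo) * (levels - 1))
    return lo + q * (hi - lo) / (levels - 1)    # dequantized representative

rng = np.random.default_rng(4)
g = rng.standard_normal(100)
out = dp_quantized_update(g, rng=rng)
```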

1 code implementation • 14 May 2020 • Ali Khajegili Mirabadi, Stefano Rini

The IR is a feature of a single image, while the MIR describes features common across two or more images. We begin by introducing the IR and the MIR, motivating these features in an information-theoretic context as the ratio of the self-information of an intensity level to the information contained in the pixels of that intensity.

no code implementations • 6 Mar 2020 • Emre Ozfatura, Stefano Rini, Deniz Gunduz

We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets.
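The core DSGD recursion can be sketched on a toy problem: n nodes on a ring each hold a local quadratic f_i(x) = 0.5 (x - a_i)^2, and every round a node averages with its ring neighbours (the gossip/mixing step) and takes a local gradient step. The wireless impairments studied in the paper are omitted, and with a constant step size the nodes agree only approximately.

```python
import numpy as np

n, rounds, lr = 8, 300, 0.1
a = np.arange(n, dtype=float)   # local optima; the global optimum is a.mean()
x = np.zeros(n)                 # one scalar model per node

# Doubly stochastic ring mixing matrix: each node averages with itself
# (weight 1/2) and its two neighbours (weight 1/4 each).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

for _ in range(rounds):
    x = W @ x                   # consensus/mixing step
    x = x - lr * (x - a)        # local gradient step on f_i
```

Because W is doubly stochastic, the network-wide average of the iterates follows plain gradient descent on the average objective and converges to a.mean().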

no code implementations • 12 Jan 2020 • Mohammad-Amin Charusaie, Arash Amini, Stefano Rini

When considering discrete-domain moving-average processes with non-Gaussian excitation noise, the above results allow us to evaluate the block-average RID and DRB, as well as to determine a relationship between these parameters and other existing compressibility measures.

no code implementations • 29 Oct 2018 • Milind Rao, Stefano Rini, Andrea Goldsmith

In this paper, a distributed convex optimization algorithm, termed \emph{distributed coordinate dual averaging} (DCDA) algorithm, is proposed.
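DCDA builds on the dual-averaging template: accumulate gradients in a dual vector z and map back to the primal through a regularizer that grows like sqrt(t). The sketch below shows only that core recursion, centralized and with full gradients, on f(x) = 0.5 ||x - c||^2; the distributed, per-coordinate machinery of DCDA is not reproduced here.

```python
import numpy as np

c = np.array([1.0, -2.0])   # minimizer of f(x) = 0.5 * ||x - c||^2
z = np.zeros_like(c)        # dual variable: running sum of gradients
x = np.zeros_like(c)

for t in range(1, 5001):
    grad = x - c            # gradient of f at the current iterate
    z += grad
    # primal step: argmin_x <z, x> + (sqrt(t)/2) * ||x||^2
    x = -z / np.sqrt(t)
```

The growing sqrt(t) regularizer damps the influence of the accumulated dual variable, so the iterate drifts toward the minimizer at the usual dual-averaging rate.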
