Search Results for author: Stefano Rini

Found 24 papers, 7 papers with code

Harmonic Retrieval Using Weighted Lifted-Structure Low-Rank Matrix Completion

no code implementations • 8 Nov 2023 • Mohammad Bokaei, Saeed Razavikia, Stefano Rini, Arash Amini, Hamid Behrouzi

In this paper, we investigate the problem of recovering the frequency components of a mixture of $K$ complex sinusoids from a random subset of $N$ equally-spaced time-domain samples.

Low-Rank Matrix Completion Retrieval
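
The lifted-structure idea above can be illustrated with a minimal sketch (not the paper's algorithm): a mixture of $K$ complex sinusoids, lifted into a Hankel matrix, has rank exactly $K$, which is what makes low-rank matrix completion applicable when time-domain samples are missing.

```python
import numpy as np

# Toy signal: mixture of K complex sinusoids at N equally-spaced samples.
K, N = 3, 64
rng = np.random.default_rng(0)
freqs = rng.uniform(0, 1, K)      # normalized frequencies
amps = rng.uniform(1, 2, K)
n = np.arange(N)
x = sum(a * np.exp(2j * np.pi * f * n) for a, f in zip(amps, freqs))

# Lift the length-N sequence into an L x (N-L+1) Hankel matrix.
L = N // 2
H = np.array([x[i:i + N - L + 1] for i in range(L)])

s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)  # 3 -- the Hankel rank equals the number of sinusoids K
```

With randomly missing entries of `x`, the corresponding entries of `H` are unobserved along anti-diagonals, and recovering them becomes a structured low-rank completion problem.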

M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion

no code implementations • 23 Jan 2023 • Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen

This paper proposes the "$M$-magnitude weighted $L_2$ distortion + $2$ degrees of freedom" (M22) algorithm, a rate-distortion inspired approach to gradient compression for federated training of deep neural networks (DNNs).

Federated Learning
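
As a rough illustration of the idea of a magnitude-weighted distortion, the sketch below scores a quantized gradient by a squared error weighted by the gradient magnitude. The functional form `|g|^M * (g - q)^2` is only our hypothetical stand-in; the exact M22 distortion is defined in the paper.

```python
import numpy as np

# Hypothetical magnitude-weighted L2 distortion between a gradient g and
# its quantized version q (illustrative form, not the paper's definition).
def weighted_l2_distortion(g, q, M=1.0):
    return np.mean(np.abs(g) ** M * (g - q) ** 2)

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
q = np.sign(g)  # crude 1-bit quantizer as a stand-in
print(weighted_l2_distortion(g, q, M=1.0))
```

Weighting by magnitude penalizes errors on large gradient entries more heavily, which matches the intuition that those entries matter most for the training update.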

Empirical Risk Minimization with Relative Entropy Regularization

no code implementations • 12 Nov 2022 • Samir M. Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, Stefano Rini

Among these properties, the solution to this problem, if it exists, is shown to be a unique probability measure, often mutually absolutely continuous with the reference measure.
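
The ERM-RER problem admits a compact statement; in generic notation (not necessarily the paper's), with empirical risk $\mathsf{L}$, reference measure $Q$, and regularization parameter $\lambda > 0$:

```latex
\min_{P} \; \int \mathsf{L}(\theta)\, \mathrm{d}P(\theta) \;+\; \lambda\, D(P \,\|\, Q),
\qquad
\frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta)
= \frac{\exp\!\left(-\mathsf{L}(\theta)/\lambda\right)}
       {\int \exp\!\left(-\mathsf{L}(\nu)/\lambda\right)\mathrm{d}Q(\nu)}
```

The right-hand expression is the Gibbs measure that solves the problem when the normalizing integral is finite, which is consistent with the mutual absolute continuity with the reference measure noted in the abstract.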

Sharp asymptotics on the compression of two-layer neural networks

no code implementations • 17 May 2022 • Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini

In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M<N nodes.

Vocal Bursts Valence Prediction

How to Attain Communication-Efficient DNN Training? Convert, Compress, Correct

no code implementations • 18 Apr 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

Namely: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction.
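
The three steps (i)-(iii) can be sketched as follows, with our own stand-in choices: float16 as the target floating-point format, `zlib` as the lossless coder, and an error-feedback residual for the correction step. The actual algorithm's choices may differ.

```python
import numpy as np
import zlib

# Hedged sketch of the convert/compress/correct pipeline described above.
def convert_compress_correct(grad, residual):
    corrected = grad + residual                   # (iii) apply carried error
    q = corrected.astype(np.float16)              # (i) floating-point conversion
    residual = corrected - q.astype(np.float32)   # quantization error to carry
    payload = zlib.compress(q.tobytes())          # (ii) lossless compression
    return payload, residual

rng = np.random.default_rng(0)
res = np.zeros(4096, dtype=np.float32)
grad = rng.standard_normal(4096).astype(np.float32)
payload, res = convert_compress_correct(grad, res)
print(len(payload), float(np.abs(res).max()))
```

Carrying the residual forward ensures the quantization error is not lost but corrected in later rounds, at the cost of one extra buffer the size of the gradient.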


A Perspective on Neural Capacity Estimation: Viability and Reliability

no code implementations • 22 Mar 2022 • Farhad Mirkarimi, Stefano Rini, Nariman Farsad

These estimators are referred to as neural mutual information estimators (NMIEs). NMIEs differ from other approaches in that they are data-driven estimators.

Benchmarking Capacity Estimation +1

Convert, compress, correct: Three steps toward communication-efficient DNN training

1 code implementation • 17 Mar 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we introduce a novel algorithm, $\mathsf{CO}_3$, for communication-efficient distributed deep neural network (DNN) training.


Two-snapshot DOA Estimation via Hankel-structured Matrix Completion

no code implementations • 21 Feb 2022 • Mohammad Bokaei, Saeed Razavikia, Arash Amini, Stefano Rini

In this paper, we study the problem of estimating the direction of arrival (DOA) using a sparsely sampled uniform linear array (ULA).

Matrix Completion Vocal Bursts Valence Prediction

Empirical Risk Minimization with Relative Entropy Regularization: Optimality and Sensitivity Analysis

no code implementations • 9 Feb 2022 • Samir M. Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, Stefano Rini

The optimality and sensitivity of the empirical risk minimization problem with relative entropy regularization (ERM-RER) are investigated for the case in which the reference is a sigma-finite measure instead of a probability measure.

Lossy Gradient Compression: How Much Accuracy Can One Bit Buy?

1 code implementation • 6 Feb 2022 • Sadaf Salehkalaibar, Stefano Rini

Under this assumption on the DNN gradient distribution, we propose a class of distortion measures to aid the design of quantizers for the compression of the model updates.

Federated Learning

DNN gradient lossless compression: Can GenNorm be the answer?

1 code implementation • 15 Nov 2021 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper we argue that, for some networks of practical interest, the gradient entries can be well modelled as having a generalized normal (GenNorm) distribution.

Federated Learning
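
The modelling claim above can be checked with `scipy.stats.gennorm`, which implements the generalized normal family. As a sanity-check sketch, we fit it to Laplace samples standing in for gradient entries (Laplace is the GenNorm special case with shape $\beta = 1$, so the fit has a known ground truth); real gradients would come from training.

```python
import numpy as np
from scipy import stats

# Fit a generalized normal (GenNorm) distribution to synthetic data.
rng = np.random.default_rng(0)
samples = rng.laplace(0.0, 1.0, 20000)

beta, loc, scale = stats.gennorm.fit(samples)
print(round(beta, 2))  # close to 1, the Laplace special case
```

A fitted shape parameter well below 2 (the Gaussian case) is the heavy-tailed regime the paper argues is typical of DNN gradient entries.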

Neural Capacity Estimators: How Reliable Are They?

1 code implementation • 14 Nov 2021 • Farhad Mirkarimi, Stefano Rini, Nariman Farsad

Recently, several methods have been proposed for estimating the mutual information from sample data using deep neural networks, without knowing the closed-form distribution of the data.

Capacity Estimation

Speeding-Up Back-Propagation in DNN: Approximate Outer Product with Memory

1 code implementation • 18 Oct 2021 • Eduin E. Hernandez, Stefano Rini, Tolga M. Duman

In order to correct for the inherent bias in this approximation, the algorithm retains in memory an accumulation of the outer products that are not used in the approximation.
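
The memory mechanism described above can be sketched as follows (names and the top-$k$ selection rule are our illustrative choices, not necessarily the paper's): the weight gradient in backprop is an outer product, the approximation keeps only part of it, and the skipped part is accumulated in memory and added back later to cancel the bias.

```python
import numpy as np

# Approximate outer product with an error-accumulation memory term.
def approx_outer_with_memory(delta, x, mem, k=2):
    full = np.outer(delta, x) + mem       # exact outer product + carried error
    idx = np.argsort(np.abs(delta))[-k:]  # keep rows with largest |delta|
    approx = np.zeros_like(full)
    approx[idx] = full[idx]
    mem = full - approx                   # remainder retained in memory
    return approx, mem

rng = np.random.default_rng(0)
mem = np.zeros((8, 4))
delta, x = rng.standard_normal(8), rng.standard_normal(4)
g, mem = approx_outer_with_memory(delta, x, mem)
print(np.count_nonzero(np.any(g != 0, axis=1)))  # 2 rows kept
```

Note that `g + mem` reconstructs the exact outer product, which is precisely why retaining the memory removes the inherent bias of the approximation.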

Wireless Federated Learning with Limited Communication and Differential Privacy

no code implementations • 1 Jun 2021 • Amir Sonee, Stefano Rini, Yu-Chih Huang

This paper investigates the role of dimensionality reduction in efficient communication and differential privacy (DP) of the local datasets at the remote users in over-the-air computation (AirComp)-based federated learning (FL).

Dimensionality Reduction Federated Learning

Hierarchical Causal Bandit

no code implementations • 7 Mar 2021 • Ruiyang Song, Stefano Rini, Kuang Xu

Causal bandit is a nascent learning model where an agent sequentially experiments in a causal network of variables, in order to identify the reward-maximizing intervention.

Straggler Mitigation through Unequal Error Protection for Distributed Approximate Matrix Multiplication

1 code implementation • 4 Mar 2021 • Busra Tegin, Eduin E. Hernandez, Stefano Rini, Tolga M. Duman

Large-scale machine learning and data mining methods routinely distribute computations across multiple agents to parallelize processing.

Image Classification Distributed, Parallel, and Cluster Computing Information Theory
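
The distributed setting above can be sketched in a few lines: $A B$ is split into row blocks across workers, and when a straggler fails to return its block, the remaining blocks still give an exact partial (hence approximate overall) product. The paper's unequal-error-protection coding is not reproduced here.

```python
import numpy as np

# Row-block partitioned matrix multiplication across 4 workers.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((8, 5)), rng.standard_normal((5, 3))
blocks = np.array_split(np.arange(8), 4)          # 2 rows per worker

partials = {w: A[rows] @ B for w, rows in enumerate(blocks)}
partials.pop(3)                                   # worker 3 straggles

C = np.zeros((8, 3))
for w, rows in enumerate(blocks):
    if w in partials:
        C[rows] = partials[w]                     # fill what arrived
print(np.allclose(C[:6], (A @ B)[:6]))  # True -- completed blocks are exact
```

Coding across blocks (the paper's contribution) trades this all-or-nothing per-block behavior for graceful degradation, protecting the most significant parts of the product more strongly.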

Multi-Class Unsourced Random Access via Coded Demixing

no code implementations • 15 Feb 2021 • Vamsi K. Amalladinne, Allen Hao, Stefano Rini, Jean-Francois Chamberland

Unsourced random access (URA) is a recently proposed communication paradigm attuned to machine-driven data transfers.

Information Theory

An Exploration of the Heterogeneous Unsourced MAC

no code implementations • 22 Nov 2020 • Allen Hao, Stefano Rini, Vamsi Amalladinne, Asit Kumar Pradhan, Jean-Francois Chamberland

In the cluster with higher power, devices transmit using a two-layer superposition modulation.

Information Theory

An efficient label-free analyte detection algorithm for time-resolved spectroscopy

no code implementations • 15 Nov 2020 • Stefano Rini, Hirotsugu Hiramatsu

Time-resolved spectral techniques serve as an important analysis tool in many contexts, from physical chemistry to biomedicine.

Dimensionality Reduction

Efficient Federated Learning over Multiple Access Channel with Differential Privacy Constraints

no code implementations • 15 May 2020 • Amir Sonee, Stefano Rini

Accordingly, the objective of the clients is to minimize the training loss subject to (i) rate constraints for reliable communication over the MAC and (ii) DP constraint over the local datasets.

Federated Learning Quantization

The Information & Mutual Information Ratio for Counting Image Features and Their Matches

1 code implementation • 14 May 2020 • Ali Khajegili Mirabadi, Stefano Rini

The IR is a feature of a single image, while the MIR describes features common across two or more images. We begin by introducing the IR and the MIR and motivate these features in an information theoretical context as the ratio of the self-information of an intensity level over the information contained over the pixels of the same intensity.

Image Reconstruction
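
The IR builds on the self-information of intensity levels; the sketch below computes $-\log_2 p(v)$ for each grey level of a toy image from its histogram, which is the ingredient the ratio is built from. The IR itself is defined in the paper and not reproduced here.

```python
import numpy as np

# Self-information per intensity level of a toy 8-level image.
rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(16, 16))

levels, counts = np.unique(img, return_counts=True)
p = counts / img.size
self_info = -np.log2(p)  # bits per intensity level
print(dict(zip(levels.tolist(), np.round(self_info, 2).tolist())))
```

Rarer intensity levels carry more self-information, which is why such ratios can highlight distinctive image features and, across images, features they share.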

Decentralized SGD with Over-the-Air Computation

no code implementations • 6 Mar 2020 • Emre Ozfatura, Stefano Rini, Deniz Gunduz

We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets.

Image Classification Scheduling
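
The DSGD update described above can be sketched in a toy scalar setting: each node mixes its neighbours' iterates through a doubly stochastic matrix $W$, then takes a local gradient step on its own objective, here $f_i(x) = \tfrac{1}{2}(x - b_i)^2$. The fully connected uniform mixing matrix is our simplifying assumption.

```python
import numpy as np

# Toy decentralized SGD: gossip averaging + local gradient steps.
rng = np.random.default_rng(0)
n, eta = 4, 0.1
W = np.full((n, n), 1 / n)      # fully connected, uniform mixing
b = rng.standard_normal(n)      # local optima; consensus optimum is b.mean()
x = np.zeros(n)

for _ in range(200):
    x = W @ x - eta * (x - b)   # mix neighbours' models, then local step

# With a constant step size, nodes hover within O(eta) of the consensus
# optimum, while the network average converges to it.
print(np.allclose(x.mean(), b.mean(), atol=1e-6))  # True
```

Over a wireless network, the averaging step `W @ x` is exactly what over-the-air computation can realize in one shot, since superposition of transmitted signals performs the sum for free.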

Compressibility Measures for Affinely Singular Random Vectors

no code implementations • 12 Jan 2020 • Mohammad-Amin Charusaie, Arash Amini, Stefano Rini

When considering discrete-domain moving-average processes with non-Gaussian excitation noise, the above results allow us to evaluate the block-average RID and DRB, as well as to determine a relationship between these parameters and other existing compressibility measures.

Distributed Convex Optimization With Limited Communications

no code implementations • 29 Oct 2018 • Milind Rao, Stefano Rini, Andrea Goldsmith

In this paper, a distributed convex optimization algorithm, termed \emph{distributed coordinate dual averaging} (DCDA) algorithm, is proposed.

Distributed Optimization
