Search Results for author: Samuel Horvath

Found 11 papers, 6 with code

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop

no code implementations · 24 Jan 2019 · Dmitry Kovalev, Samuel Horvath, Peter Richtarik

A key structural element in both of these methods is the inclusion of an outer loop at the beginning of which a full pass over the training data is made in order to compute the exact gradient, which is then used to construct a variance-reduced estimator of the gradient.

BIG-bench Machine Learning
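To make the construction concrete, here is a minimal Python sketch of the loopless variant this paper proposes: the outer loop is replaced by a coin flip that refreshes the reference point with small probability each step. The callables `grad_i` (gradient of one component) and `full_grad` are assumed to be supplied by the caller.

```python
import numpy as np

def l_svrg(grad_i, full_grad, x0, n, lr=0.1, p=None, steps=1000, rng=None):
    """Minimal L-SVRG sketch: the outer loop is replaced by a coin flip
    that refreshes the reference point w with probability p each step."""
    rng = rng or np.random.default_rng(0)
    p = p if p is not None else 1.0 / n    # refresh probability, e.g. 1/n
    x, w = x0.copy(), x0.copy()
    gw = full_grad(w)                      # exact gradient at the reference point
    for _ in range(steps):
        i = rng.integers(n)                # sample one data point
        # unbiased variance-reduced estimator of the gradient at x
        g = grad_i(x, i) - grad_i(w, i) + gw
        x -= lr * g
        if rng.random() < p:               # probabilistic reference update
            w = x.copy()
            gw = full_grad(w)
    return x
```

The estimator g stays unbiased, so the method keeps SVRG-style variance reduction without any scheduled full pass over the data.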

Natural Compression for Distributed Deep Learning

no code implementations · 27 May 2019 · Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik

Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa.

Quantization
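A hedged sketch of that randomized rounding, written with an explicit probability computation rather than the bit-level "ignore the mantissa" trick the abstract alludes to; inputs are assumed to be a float array.

```python
import numpy as np

def natural_compression(x, rng=None):
    """Sketch of unbiased randomized rounding to the nearest power of two.
    For |v| in [2^a, 2^(a+1)], round up with probability (|v| - 2^a) / 2^a."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    a = np.floor(np.log2(np.abs(x[nz])))        # exponent; mantissa is discarded
    low = 2.0 ** a
    prob_up = (np.abs(x[nz]) - low) / low       # chance of rounding up
    rounded = np.where(rng.random(low.shape) < prob_up, 2.0 * low, low)
    out[nz] = np.sign(x[nz]) * rounded
    return out
```

Since only a sign and an exponent survive, each entry can be encoded in a few bits, and the rounding is unbiased: E[C(x)] = x.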

Optimal Client Sampling for Federated Learning

1 code implementation · NeurIPS 2021 · Wenlin Chen, Samuel Horvath, Peter Richtarik

We show that importance can be measured using only the norm of the update and give a formula for optimal client participation.

Federated Learning
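A minimal sketch of norm-proportional client sampling with capping, assuming `norms` holds each client's update norm and `m` is the expected number of participants per round; the paper's exact formula may differ in details.

```python
import numpy as np

def sampling_probabilities(norms, m):
    """Allocate participation probabilities proportional to update norms,
    capped at 1, so that they sum to m (the expected sample size)."""
    norms = np.asarray(norms, dtype=np.float64)
    p = np.minimum(1.0, m * norms / norms.sum())
    # redistribute mass from capped clients until the probabilities sum to m
    while p.sum() < m - 1e-12:
        free = p < 1.0
        if not free.any():
            break
        deficit = m - p[~free].sum()   # budget left for uncapped clients
        p[free] = np.minimum(1.0, deficit * norms[free] / norms[free].sum())
    return p
```

An unbiased aggregate is then recovered by scaling each sampled client's update by 1/p_i.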

Granger Causality using Neural Networks

1 code implementation · 7 Aug 2022 · Samuel Horvath, Malik Shahid Sultan, Hernando Ombao

Granger causality helps in answering the question of whether one time series is helpful in forecasting another.

EEG · Time Series +1
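The paper builds Granger causality on neural networks; purely as an illustration of the underlying question, here is a classical *linear* check: compare the error of forecasting y from its own lags against forecasting it from the lags of both y and x (inputs assumed to be 1-D NumPy arrays).

```python
import numpy as np

def granger_improvement(x, y, lags=2):
    """Illustrative linear Granger check: does adding lags of x reduce
    the error of forecasting y from its own past?"""
    T = len(y)
    idx = range(lags, T)
    Y = y[lags:]
    own = np.array([y[t - lags:t] for t in idx])                  # lags of y only
    both = np.array([np.r_[y[t - lags:t], x[t - lags:t]] for t in idx])
    for name, A in (("y only", own), ("y + x lags", both)):
        D = np.c_[np.ones(len(Y)), A]                             # add intercept
        coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
        print(name, "MSE:", np.mean((Y - D @ coef) ** 2))
```

A noticeably lower MSE in the second fit suggests that x Granger-causes y; the formal F-test on the residuals is omitted for brevity.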

Balancing Privacy and Performance for Private Federated Learning Algorithms

no code implementations · 11 Apr 2023 · Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horvath

Our findings reveal a direct correlation between the optimal number of local steps, the number of communication rounds, and a set of variables, e.g., the DP privacy budget and other problem parameters, specifically in the context of strongly convex optimization.

Federated Learning
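As a hedged sketch of the trade-off being analyzed, here is a generic differentially private local update (clip-then-noise, in the style of DP-SGD); the calibration of `noise_std` to a concrete (ε, δ) budget, which the analysis depends on, is omitted, and the paper's exact algorithm may differ.

```python
import numpy as np

def dp_local_steps(grad, x, local_steps, lr=0.1, clip=1.0, noise_std=0.5, rng=None):
    """One client's DP local update: each gradient step is clipped to norm
    `clip` and perturbed with Gaussian noise before being applied."""
    rng = rng or np.random.default_rng(0)
    x = x.copy()
    for _ in range(local_steps):
        g = grad(x)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))      # clip
        g += rng.normal(0.0, noise_std, size=g.shape)          # privatize
        x -= lr * g
    return x
```

More local steps per round save communication but accumulate more injected noise, which is the kind of trade-off the abstract refers to.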

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition

no code implementations · 28 Aug 2023 · Samuel Horvath, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang

We further apply our technique to DNNs and empirically illustrate that Maestro enables the extraction of lower-footprint models that preserve model performance while allowing for a graceful accuracy-latency tradeoff for deployment to devices of different capabilities.

Low-rank compression · Quantization
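A hedged PyTorch sketch of the general idea: a linear layer parameterized by trainable low-rank factors whose trailing ranks can be dropped at deployment. Maestro's actual ordered decomposition is more involved; this only illustrates the mechanism.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer with W parameterized as U @ V (rank r), so trailing
    ranks can be dropped after training for a smaller footprint."""
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x, keep_rank=None):
        r = keep_rank or self.U.shape[1]   # optionally truncate at inference
        return x @ self.V[:r].T @ self.U[:, :r].T + self.bias
```

Calling `forward` with a smaller `keep_rank` trades accuracy for footprint and latency, matching the graceful tradeoff described above.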

Rethink Model Re-Basin and the Linear Mode Connectivity

1 code implementation · 5 Feb 2024 · Xingyu Qu, Samuel Horvath

Recent studies suggest that with sufficiently wide models, most SGD solutions can, up to permutation, converge into the same basin.

Linear Mode Connectivity · Re-basin
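A hedged sketch of one common re-basin ingredient, weight matching: find the hidden-unit permutation of model B that best aligns it with model A via the Hungarian algorithm (SciPy). The paper's own procedure may differ; this only illustrates the permutation step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_hidden_units(W_a, W_b):
    """Find the permutation of model B's hidden units that best matches
    model A, so the two solutions can be compared (or interpolated) in
    a single basin. W_a, W_b have shape (hidden, in)."""
    cost = -W_a @ W_b.T                  # maximize correlation of unit weights
    _, perm = linear_sum_assignment(cost)
    return perm                          # W_b[perm] is aligned with W_a

# usage: permute one hidden layer of B (and the rows/columns of the
# adjacent layers accordingly) before checking linear mode connectivity
```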
