Search Results for author: Jakub Konečný

Found 26 papers, 8 papers with code

Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory

1 code implementation • 7 Jan 2022 • Nicole Mitchell, Johannes Ballé, Zachary Charles, Jakub Konečný

A significant bottleneck in federated learning (FL) is the network communication cost of sending model updates from client devices to the central server.

Federated Learning • Quantization

Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning

no code implementations • 8 Mar 2021 • Zachary Charles, Jakub Konečný

Using these insights, we are able to compare local update methods based on their convergence/accuracy trade-off, not just their convergence to critical points of the empirical loss.

Federated Learning • Meta-Learning

On the Outsized Importance of Learning Rates in Local Update Methods

1 code implementation • 2 Jul 2020 • Zachary Charles, Jakub Konečný

We study a family of algorithms, which we refer to as local update methods, that generalize many federated learning and meta-learning algorithms.

Federated Learning • Meta-Learning

Adaptive Federated Optimization

3 code implementations • ICLR 2021 • Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan

Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data.

Federated Learning

Federated Learning with Autotuned Communication-Efficient Secure Aggregation

no code implementations • 30 Nov 2019 • Keith Bonawitz, Fariborz Salehi, Jakub Konečný, Brendan McMahan, Marco Gruteser

Federated Learning enables mobile devices to collaboratively learn a shared inference model while keeping all the training data on a user's device, decoupling the ability to do machine learning from the need to store the data in the cloud.

Federated Learning

Improving Federated Learning Personalization via Model Agnostic Meta Learning

1 code implementation • 27 Sep 2019 • Yihan Jiang, Jakub Konečný, Keith Rush, Sreeram Kannan

We present FL as a natural source of practical applications for MAML algorithms, and make the following observations.

Federated Learning • Meta-Learning

A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion

no code implementations • 27 Jan 2019 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko

In this work we present a randomized gossip algorithm for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.
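The controlled-noise idea described in this abstract can be sketched roughly as follows. This is a hedged toy version, not the paper's exact scheme: each node repeatedly swaps its previously injected noise for a geometrically smaller one, so its cumulative injection telescopes toward zero and pairwise averaging still drives the network to the true average, while early messages are masked. The function name, decay schedule, and ring topology below are all illustrative assumptions.

```python
import random

def private_gossip(values, edges, steps=4000, gamma=0.5, noise_scale=1.0, seed=0):
    """Randomized pairwise gossip with a toy form of controlled noise
    insertion: before averaging, each participating node replaces its
    previously injected noise with a geometrically smaller one, so the
    total injected noise telescopes toward zero."""
    rng = random.Random(seed)
    x = list(values)
    w = [rng.gauss(0, noise_scale) for _ in x]   # each node's private noise seed
    last = [0.0] * len(x)                        # noise currently sitting in x[i]
    k = [0] * len(x)                             # per-node activation counters
    for _ in range(steps):
        i, j = rng.choice(edges)                 # wake a random edge
        for n in (i, j):
            fresh = (gamma ** k[n]) * w[n]
            x[n] += fresh - last[n]              # telescoping noise swap
            last[n] = fresh
            k[n] += 1
        x[i] = x[j] = (x[i] + x[j]) / 2          # standard pairwise averaging
    return x

# Ring of 4 nodes holding private values; consensus approaches their mean, 15.
vals = [4.0, 8.0, 16.0, 32.0]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
out = private_gossip(vals, ring)
print([round(v, 3) for v in out])
```

Because every averaging step preserves the sum and the total injected noise decays geometrically, the consensus value converges to the true average even though individual early messages are perturbed.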

LEAF: A Benchmark for Federated Settings

5 code implementations • 3 Dec 2018 • Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar

Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.

Autonomous Vehicles • Federated Learning • +2

Stochastic, Distributed and Federated Optimization for Machine Learning

no code implementations • 4 Jul 2017 • Jakub Konečný

Finally, we introduce the concept of Federated Optimization/Learning, where we try to solve the machine learning problems without having data stored in any centralized manner.

Distributed Optimization

Privacy Preserving Randomized Gossip Algorithms

no code implementations • 23 Jun 2017 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko

In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.

Optimization and Control

Randomized Distributed Mean Estimation: Accuracy vs Communication

no code implementations • 22 Nov 2016 • Jakub Konečný, Peter Richtárik

We consider the problem of estimating the arithmetic average of a finite collection of real vectors stored in a distributed fashion across several compute nodes subject to a communication budget constraint.
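One concrete way to trade accuracy for communication in this setting is unbiased stochastic quantization: each coordinate is rounded to a coarse grid, up or down with probabilities chosen so the quantizer is unbiased. The sketch below is illustrative only; the paper analyzes more refined protocols and their exact variance/communication trade-offs, and the function name and defaults here are assumptions.

```python
import numpy as np

def stochastic_quantize(v, levels=4, rng=None):
    """Unbiased stochastic quantization of a vector onto `levels` values
    spread over [min(v), max(v)]. Each coordinate rounds to a neighboring
    grid point with probabilities that make E[Q(v)] = v."""
    rng = rng or np.random.default_rng(0)
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    step = (hi - lo) / (levels - 1)
    t = (v - lo) / step                      # position in grid units
    floor = np.floor(t)
    p_up = t - floor                         # rounding-up prob. => unbiasedness
    q = floor + (rng.random(v.shape) < p_up)
    return lo + q * step

# Averaging many quantized copies recovers the original vector (unbiasedness).
rng = np.random.default_rng(1)
v = rng.normal(size=8)
est = np.mean([stochastic_quantize(v, rng=rng) for _ in range(5000)], axis=0)
print(np.max(np.abs(est - v)))  # small, since the quantizer is unbiased
```

Fewer levels mean fewer bits per coordinate but higher per-message variance, which is the accuracy-vs-communication dial the abstract refers to.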

Federated Learning: Strategies for Improving Communication Efficiency

no code implementations • ICLR 2018 • Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon

We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model.

Federated Learning • Quantization
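The round structure described in this abstract (clients independently compute updates to the current model; the server aggregates them into a new global model) can be sketched as a minimal FedAvg-style loop on toy quadratic client losses. Function names and hyperparameters below are illustrative assumptions, and the compression schemes the paper actually studies are omitted.

```python
import numpy as np

def local_sgd(x, grad, steps=5, lr=0.1):
    """One client's round: a few SGD steps from the current global model,
    then send back only the model update (delta), never the raw data."""
    y = x.copy()
    for _ in range(steps):
        y -= lr * grad(y)
    return y - x

def federated_round(x, client_grads):
    """One server round: collect each client's independently computed
    update and average them into a new global model."""
    deltas = [local_sgd(x, g) for g in client_grads]
    return x + np.mean(deltas, axis=0)

# Each client i holds data pulling the model toward c_i; data stays local.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [lambda y, c=c: y - c for c in centers]  # grad of 0.5*||y - c||^2
x = np.zeros(2)
for _ in range(100):
    x = federated_round(x, grads)
print(np.round(x, 3))  # approaches the mean of the client optima, [1.0, 1.0]
```

With identical quadratic losses the averaged fixed point is exactly the mean of the client optima; in general the deltas would also be compressed (e.g. quantized or sketched) before upload, which is the paper's focus.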

Distributed Optimization with Arbitrary Local Solvers

1 code implementation • 13 Dec 2015 • Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takáč

To this end, we present a framework for distributed optimization that both allows the flexibility of arbitrary solvers to be used on each (single) machine locally, and yet maintains competitive performance against other state-of-the-art special-purpose distributed methods.

Distributed Optimization

Stop Wasting My Gradients: Practical SVRG

no code implementations • NeurIPS 2015 • Reza Harikandeh, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečný, Scott Sallinen

We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods.

Stop Wasting My Gradients: Practical SVRG

no code implementations • 5 Nov 2015 • Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečný, Scott Sallinen

We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods.
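For context, plain SVRG, the baseline these strategies improve on, alternates a full "snapshot" gradient with cheap variance-reduced stochastic steps. Below is a minimal sketch on a toy least-squares problem; the hyperparameters are illustrative and the practical variants the paper proposes (e.g. approximate or growing-batch snapshots) are omitted.

```python
import numpy as np

def svrg(grad_i, n, x0, step=0.02, epochs=30, inner=None, rng=None):
    """Minimal SVRG: each epoch computes a full gradient at a snapshot,
    then takes cheap stochastic steps corrected by that snapshot gradient.
    grad_i(x, i) must return the gradient of the i-th loss term at x."""
    rng = rng or np.random.default_rng(0)
    inner = inner or 2 * n
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        snap = x.copy()
        full = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(snap, i) + full  # variance-reduced gradient
            x = x - step * g
    return x

# Toy least squares: f_i(x) = 0.5 * (A[i] @ x - b[i])**2, with exact targets.
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 5))
x_true = rng.normal(size=5)
b = A @ x_true
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad, 50, np.zeros(5))
print(np.linalg.norm(x_hat - x_true))  # distance to the true solution; small
```

The correction term makes the stochastic gradient's variance vanish as the iterate approaches the snapshot, which is what allows a constant step size and linear convergence on strongly convex problems.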

Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting

no code implementations • 16 Apr 2015 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč

Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.

mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting

no code implementations • 17 Oct 2014 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč

Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.

One-Shot-Learning Gesture Recognition using HOG-HOF Features

no code implementations • 15 Dec 2013 • Jakub Konečný, Michal Hagara

We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition.

Dynamic Time Warping • Gesture Recognition • +2

Semi-Stochastic Gradient Descent Methods

no code implementations • 5 Dec 2013 • Jakub Konečný, Peter Richtárik

The total work needed for the method to output an $\varepsilon$-accurate solution in expectation, measured in the number of passes over data, or equivalently, in units equivalent to the computation of a single gradient of the loss, is $O((\kappa/n)\log(1/\varepsilon))$, where $\kappa$ is the condition number.
