Search Results for author: Brendan McMahan

Found 12 papers, 3 papers with code

Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams

1 code implementation · 16 Feb 2022 · Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, Abhradeep Guha Thakurta

Motivated by recent applications requiring differential privacy over adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting.

Federated Learning
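The matrix mechanism mentioned above can be sketched in a few lines: to release a linear workload A @ x (e.g. all prefix sums of a stream) under differential privacy, factor A = B @ C, add noise to C @ x, and multiply by B. The toy factorization and parameters below are hypothetical illustrations, not the paper's optimal choices.

```python
import numpy as np

def matrix_mechanism(x, B, C, sigma, rng):
    """Release B @ (C @ x + z) with Gaussian noise z ~ N(0, sigma^2 I)."""
    z = rng.normal(0.0, sigma, size=C.shape[0])
    return B @ (C @ x + z)

n = 8
A = np.tril(np.ones((n, n)))   # prefix-sum workload matrix
B, C = A, np.eye(n)            # trivial factorization A = B @ C (illustrative)
rng = np.random.default_rng(0)
x = np.ones(n)                 # toy stream of per-step values
private_sums = matrix_mechanism(x, B, C, sigma=0.5, rng=rng)
print(private_sums)            # noisy estimates of the prefix sums 1, 2, ..., 8
```

The paper's contribution is choosing a better factorization B, C than this trivial one, minimizing the error B injects into the released sums for a fixed privacy budget.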

Federated Learning with Autotuned Communication-Efficient Secure Aggregation

no code implementations · 30 Nov 2019 · Keith Bonawitz, Fariborz Salehi, Jakub Konečný, Brendan McMahan, Marco Gruteser

Federated Learning enables mobile devices to collaboratively learn a shared inference model while keeping all the training data on a user's device, decoupling the ability to do machine learning from the need to store the data in the cloud.

Federated Learning
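The secure-aggregation idea behind this line of work can be illustrated with pairwise masking: each pair of clients shares a random mask that one adds and the other subtracts, so the server learns only the sum of the updates, not any individual one. This is a toy sketch; the actual protocol also handles quantization, client dropout, and the cryptographic exchange of the masks.

```python
import random

def masked_updates(updates, modulus=1 << 16, seed=0):
    """Mask each client's update so only the aggregate sum is recoverable."""
    rng = random.Random(seed)
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)        # pairwise shared mask m_{ij}
            masked[i] = (masked[i] + m) % modulus  # client i adds the mask
            masked[j] = (masked[j] - m) % modulus  # client j subtracts it
    return masked

updates = [3, 5, 7]                            # toy per-client model updates
masked = masked_updates(updates)
total = sum(masked) % (1 << 16)
print(total)                                   # 15: the masks cancel in the sum
```

Because every mask is added exactly once and subtracted exactly once, the server-side sum equals the true aggregate while each individual masked update looks uniformly random.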

Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization

no code implementations · NeurIPS 2018 · Blake Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, Nathan Srebro

We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph.

Stochastic Optimization

Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning

no code implementations · NeurIPS 2014 · Brendan McMahan, Matthew Streeter

We analyze new online gradient descent algorithms for distributed systems with large delays between gradient computations and the corresponding updates.
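The delayed-feedback setting described above can be sketched as online gradient descent where the gradient computed at round t is only applied tau rounds later, modeling workers that return stale gradients. This is an illustrative sketch only; the paper's algorithms additionally adapt step sizes to the observed delays.

```python
from collections import deque

def delayed_ogd(grad_fn, x0, rounds, tau, lr):
    """Gradient descent where each gradient is applied tau rounds late."""
    x, pending = x0, deque()
    for _ in range(rounds):
        pending.append(grad_fn(x))        # gradient computed now...
        if len(pending) > tau:
            x -= lr * pending.popleft()   # ...applied tau rounds later
    return x

# Toy quadratic loss f(x) = (x - 3)^2 with gradient 2 * (x - 3)
x_final = delayed_ogd(lambda x: 2 * (x - 3.0), 0.0, rounds=200, tau=5, lr=0.05)
print(x_final)                            # converges near the optimum x = 3
```

With a small enough step size relative to the delay, the stale updates still contract toward the optimum, which is the regime the paper's delay-dependent regret bounds make precise.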

Minimax Optimal Algorithms for Unconstrained Linear Optimization

no code implementations · NeurIPS 2013 · Brendan McMahan, Jacob Abernethy

We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained.

Estimation, Optimization, and Parallelism when Data is Sparse

no code implementations · NeurIPS 2013 · John Duchi, Michael I. Jordan, Brendan McMahan

We study stochastic optimization problems when the _data_ is sparse, which is in a sense dual to the current understanding of high-dimensional statistical learning and optimization.

Stochastic Optimization
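One way to see how sparsity can be exploited is a per-coordinate (AdaGrad-style) update, a hedged sketch in the spirit of this setting rather than the paper's parallel algorithms: only the coordinates present in a sparse gradient are touched, and rarely seen coordinates keep larger effective step sizes.

```python
import math
from collections import defaultdict

def sparse_sgd_step(w, grad, sum_sq, lr=0.5):
    """Update only the coordinates present in the sparse gradient dict."""
    for i, g in grad.items():
        sum_sq[i] += g * g                      # per-coordinate history
        w[i] -= lr * g / math.sqrt(sum_sq[i])   # rare features keep big steps

w = defaultdict(float)
sum_sq = defaultdict(float)
# Toy sparse gradients: feature 0 appears often, feature 7 only once.
for g in ({0: 1.0}, {0: 1.0, 7: 2.0}, {0: 1.0}):
    sparse_sgd_step(w, g, sum_sq)
print(dict(w))
```

Each step costs time proportional to the number of nonzero gradient entries, not the dimension, which is what makes sparse problems cheap to parallelize.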

No-Regret Algorithms for Unconstrained Online Convex Optimization

no code implementations · NeurIPS 2012 · Brendan McMahan, Matthew Streeter

We present an algorithm that, without such prior knowledge, offers near-optimal regret bounds with respect to _any_ choice of x*.

General Classification
