Search Results for author: Brian Bullins

Found 14 papers, 4 papers with code

Second-Order Stochastic Optimization for Machine Learning in Linear Time

4 code implementations • 12 Feb 2016 • Naman Agarwal, Brian Bullins, Elad Hazan

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to their efficient per-iteration complexity.

Tasks: BIG-bench Machine Learning, Second-order methods, +1
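
The linear-time construction in this paper (the LiSSA algorithm) estimates the Newton direction H^{-1} grad using only Hessian-vector products, via the Neumann-series recursion v_j = grad + (I - H) v_{j-1}. A minimal NumPy sketch on a toy quadratic, assuming the Hessian has been scaled so its eigenvalues lie in (0, 1); the stochastic per-example sampling is omitted:

```python
import numpy as np

def newton_direction(hvp, grad, depth):
    """Estimate H^{-1} @ grad via the Neumann series H^{-1} = sum_i (I - H)^i,
    using only Hessian-vector products (assumes eigenvalues of H in (0, 1))."""
    v = grad.copy()
    for _ in range(depth):
        v = grad + v - hvp(v)   # v <- grad + (I - H) v
    return v

# Toy quadratic f(w) = 0.5 w^T H w - b^T w, so the true step is H^{-1} grad.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
H = Q @ Q.T + np.eye(5)
H /= 1.1 * np.linalg.eigvalsh(H).max()        # scale so eigenvalues < 1
grad = rng.standard_normal(5)
est = newton_direction(lambda v: H @ v, grad, depth=300)
print(np.linalg.norm(est - np.linalg.solve(H, grad)))  # small residual
```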

Finding Approximate Local Minima Faster than Gradient Descent

1 code implementation • 3 Nov 2016 • Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma

We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples.

Tasks: BIG-bench Machine Learning
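
At a high level, algorithms in this family alternate gradient steps with steps along directions of negative curvature, which can be found quickly with approximate eigenvector computations. A simplified sketch of that alternation (not the paper's exact method; the shift and step sizes are illustrative), using power iteration on a shifted Hessian to find the bottom eigenvector:

```python
import numpy as np

def bottom_eigvec(H, shift=10.0, iters=200, rng=None):
    """Power iteration on (shift*I - H): its top eigenvector is the
    bottom eigenvector of H, provided ||H|| <= shift."""
    rng = rng or np.random.default_rng(0)
    v = rng.standard_normal(H.shape[0])
    M = shift * np.eye(H.shape[0]) - H
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v, v @ H @ v                      # direction, curvature

def find_local_min(grad_f, hess_f, w, eta=0.1, eps=1e-3, steps=500):
    """Gradient step while the gradient is large; otherwise escape the
    saddle along a negative-curvature direction, or stop if none exists."""
    for _ in range(steps):
        g = grad_f(w)
        if np.linalg.norm(g) > eps:
            w = w - eta * g
        else:
            v, curv = bottom_eigvec(hess_f(w))
            if curv > -eps:                  # approximate local minimum
                return w
            w = w + eta * (v if g @ v <= 0 else -v)
    return w

# f(w) = (w0^2 - 1)^2 + w1^2 has a saddle at the origin.
grad = lambda w: np.array([4 * w[0] * (w[0]**2 - 1), 2 * w[1]])
hess = lambda w: np.diag([12 * w[0]**2 - 4, 2.0])
print(find_local_min(grad, hess, np.zeros(2)))   # escapes to near (+/-1, 0)
```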

The Limits of Learning with Missing Data

no code implementations • NeurIPS 2016 • Brian Bullins, Elad Hazan, Tomer Koren

We study regression and classification in a setting where the learning algorithm is allowed to access only a limited number of attributes per example, known as the limited attribute observation model.

Tasks: Attribute, General Classification, +1
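
In the limited attribute observation model, the learner picks only k of the d coordinates of each example to see. The paper's results are chiefly lower bounds, but the standard algorithmic tool in this setting is an importance-weighted unbiased estimate of the full example; a small sketch of that estimator (illustrative, not the paper's construction):

```python
import numpy as np

def observe(x, k, rng):
    """Unbiased estimate of x from k uniformly sampled coordinates:
    each coordinate survives with probability k/d and is reweighted
    by d/k, so E[x_hat] = x."""
    d = len(x)
    idx = rng.choice(d, size=k, replace=False)
    x_hat = np.zeros(d)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

rng = np.random.default_rng(1)
x = rng.standard_normal(10)
avg = np.mean([observe(x, k=2, rng=rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - x)))   # near 0: the estimator is unbiased
```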

Not-So-Random Features

1 code implementation • ICLR 2018 • Brian Bullins, Cyril Zhang, Yi Zhang

We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels.

Tasks: Translation
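
The Fourier-analytic characterization is Bochner's theorem: a translation-invariant kernel is the Fourier transform of a nonnegative spectral measure, so kernel learning becomes learning the distribution that random feature frequencies are drawn from. A sketch of the fixed-spectrum special case, plain random Fourier features with a Gaussian spectrum (which corresponds to the RBF kernel); the paper's contribution, optimizing over the spectrum, is omitted:

```python
import numpy as np

def fourier_features(X, W, b):
    """Random Fourier feature map: inner products of these features
    approximate the kernel whose spectrum W was sampled from."""
    n_feat = W.shape[0]
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, n_feat = 3, 5000
W = rng.standard_normal((n_feat, d))        # Gaussian spectrum <-> RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
x, y = rng.standard_normal(d), rng.standard_normal(d)
zx, zy = fourier_features(x[None], W, b), fourier_features(y[None], W, b)
print((zx @ zy.T)[0, 0],                        # feature inner product ...
      np.exp(-0.5 * np.linalg.norm(x - y)**2))  # ... vs exact RBF kernel
```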

Efficient Full-Matrix Adaptive Regularization

no code implementations • ICLR 2019 • Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang

Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.
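
The workaround proposed in the paper (GGT) preconditions with (G G^T + eps*I)^{-1/2}, where G stacks only the last r gradients as columns; since G is d x r with r much smaller than d, this can be applied through the small r x r matrix G^T G rather than any d x d object. A sketch of that low-rank computation, checked against the dense version (eps and the window size are illustrative):

```python
import numpy as np

def ggt_precondition(G, g, eps=1e-4):
    """Apply (G G^T + eps*I)^{-1/2} to g using only the r x r matrix
    G^T G, where G is d x r (a window of the r most recent gradients)."""
    s2, V = np.linalg.eigh(G.T @ G)           # G^T G = V diag(s2) V^T
    keep = s2 > 1e-12
    U = (G @ V[:, keep]) / np.sqrt(s2[keep])  # left singular vectors of G
    Ug = U.T @ g
    in_span = U @ (Ug / np.sqrt(s2[keep] + eps))
    out_span = (g - U @ Ug) / np.sqrt(eps)    # complement scaled by eps^{-1/2}
    return in_span + out_span

# Check against the O(d^3) dense computation on a small instance.
rng = np.random.default_rng(0)
d, r = 50, 5
G, g = rng.standard_normal((d, r)), rng.standard_normal(d)
vals, vecs = np.linalg.eigh(G @ G.T + 1e-4 * np.eye(d))
dense = vecs @ ((vecs.T @ g) / np.sqrt(vals))
print(np.allclose(ggt_precondition(G, g), dense))   # True
```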

Online Control with Adversarial Disturbances

no code implementations • 23 Feb 2019 • Naman Agarwal, Brian Bullins, Elad Hazan, Sham M. Kakade, Karan Singh

We study the control of a linear dynamical system with adversarial disturbances (as opposed to statistical noise).
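
The policy class that makes this problem tractable is the disturbance-action controller: u_t = -K x_t + sum_i M_i w_{t-i}, a fixed stabilizing feedback plus a learned linear function of past disturbances, which are observable in hindsight since w_t = x_{t+1} - A x_t - B u_t. A simulation sketch of one rollout (A, B, K, the history length, and the cost are illustrative; the online learning of the M_i is omitted):

```python
import numpy as np

def rollout(A, B, K, M, disturbances):
    """Simulate x_{t+1} = A x_t + B u_t + w_t under the disturbance-action
    policy u_t = -K x_t + sum_i M[i] @ w_{t-1-i}, accumulating quadratic cost."""
    d, H = A.shape[0], len(M)
    x = np.zeros(d)
    w_hist = [np.zeros(d)] * H                # most recent disturbance first
    cost = 0.0
    for w in disturbances:
        u = -K @ x + sum(M[i] @ w_hist[i] for i in range(H))
        cost += x @ x + u @ u
        x = A @ x + B @ u + w
        w_hist = [w] + w_hist[:-1]
    return cost

rng = np.random.default_rng(0)
A, B, K = 0.9 * np.eye(2), np.eye(2), 0.5 * np.eye(2)   # A - B K is stable
M = [0.1 * rng.standard_normal((2, 2)) for _ in range(3)]
w_seq = rng.uniform(-1.0, 1.0, size=(100, 2))           # bounded disturbances
print(rollout(A, B, K, M, w_seq))
```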

Higher-Order Accelerated Methods for Faster Non-Smooth Optimization

no code implementations • 4 Jun 2019 • Brian Bullins, Richard Peng

We provide improved convergence rates for various non-smooth optimization problems via higher-order accelerated methods.

Is Local SGD Better than Minibatch SGD?

no code implementations • ICML 2020 • Blake Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro

We study local SGD (also known as parallel SGD and federated averaging), a natural and frequently used stochastic distributed optimization method.

Tasks: Distributed Optimization
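
The question compares two ways to spend the same budget of M*K*R stochastic gradients: minibatch SGD takes R steps with batch size M*K, while local SGD lets each of M machines take K sequential steps between the R averaging rounds. A schematic single-process simulation on a toy quadratic (the objective, noise, and step size are illustrative):

```python
import numpy as np

def stoch_grad(w, rng):
    # Noisy gradient of the toy objective f(w) = 0.5 ||w||^2.
    return w + 0.1 * rng.standard_normal(w.shape)

def local_sgd(w0, M, K, R, eta, rng):
    """Each of M machines runs K local SGD steps from the shared
    iterate, then all machines average; repeat for R rounds."""
    w = w0.copy()
    for _ in range(R):
        machines = []
        for _ in range(M):
            v = w.copy()
            for _ in range(K):
                v -= eta * stoch_grad(v, rng)
            machines.append(v)
        w = np.mean(machines, axis=0)
    return w

def minibatch_sgd(w0, M, K, R, eta, rng):
    """Same gradient budget, spent on R steps with batch size M*K."""
    w = w0.copy()
    for _ in range(R):
        g = np.mean([stoch_grad(w, rng) for _ in range(M * K)], axis=0)
        w -= eta * g
    return w

rng = np.random.default_rng(0)
w0 = np.ones(10)
print(np.linalg.norm(local_sgd(w0, M=4, K=10, R=5, eta=0.1, rng=rng)),
      np.linalg.norm(minibatch_sgd(w0, M=4, K=10, R=5, eta=0.1, rng=rng)))
```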

Higher-order methods for convex-concave min-max optimization and monotone variational inequalities

no code implementations • 9 Jul 2020 • Brian Bullins, Kevin A. Lai

We provide improved convergence rates for constrained convex-concave min-max problems and monotone variational inequalities with higher-order smoothness.

The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication

no code implementations • 2 Feb 2021 • Blake Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro

We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting, where $M$ machines work in parallel over the course of $R$ rounds of communication to optimize the objective, and during each round of communication, each machine may sequentially compute $K$ stochastic gradient estimates.

A Stochastic Newton Algorithm for Distributed Convex Optimization

no code implementations • NeurIPS 2021 • Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake Woodworth

We propose and analyze a stochastic Newton algorithm for homogeneous distributed stochastic convex optimization, where each machine can calculate stochastic gradients of the same population objective, as well as stochastic Hessian-vector products (products of an independent unbiased estimator of the Hessian of the population objective with arbitrary vectors), with many such stochastic computations performed between rounds of communication.

Tasks: regression
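
Each outer iteration of such a method needs an approximate Newton step H^{-1} grad computed from Hessian-vector products alone, for which conjugate gradient is the natural inner solver. A single-machine sketch of that inner solve on a quadratic (the distributed averaging and the paper's precise stochastic estimators are omitted):

```python
import numpy as np

def cg_solve(hvp, g, iters=50, tol=1e-8):
    """Approximately solve H p = g by conjugate gradient, touching H
    only through Hessian-vector products (H assumed positive definite)."""
    p = np.zeros_like(g)
    r = g.copy()                  # residual g - H p (p starts at 0)
    d = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hd = hvp(d)
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

# Newton step on a quadratic: the exact answer is H^{-1} g.
rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 8))
H = Q @ Q.T + np.eye(8)
g = rng.standard_normal(8)
print(np.allclose(cg_solve(lambda v: H @ v, g), np.linalg.solve(H, g)))  # True
```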

Variance-Reduced Conservative Policy Iteration

no code implementations • 12 Dec 2022 • Naman Agarwal, Brian Bullins, Karan Singh

We study the sample complexity of reducing reinforcement learning to a sequence of empirical risk minimization problems over the policy space.

Tasks: Reinforcement Learning (RL)
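
Conservative policy iteration, the scheme the paper variance-reduces, never swaps the policy wholesale: each round fits an approximately greedy policy by empirical risk minimization and mixes it in with a small weight, pi <- (1 - alpha) * pi + alpha * pi_greedy, which keeps successive state distributions close. A tabular sketch of the mixture update (the fixed greedy policies here are a hypothetical stand-in for the ERM step):

```python
import numpy as np

def cpi_update(pi, pi_greedy, alpha):
    """Conservative mixture update: the new policy plays pi_greedy
    with probability alpha and the old policy otherwise."""
    return (1 - alpha) * pi + alpha * pi_greedy

# Tabular illustration: pi[s] is a distribution over actions.
n_states, n_actions = 4, 3
pi = np.full((n_states, n_actions), 1.0 / n_actions)    # start uniform
greedy = np.eye(n_actions)[np.array([2, 0, 1, 2])]      # one-hot greedy rows
for _ in range(50):
    pi = cpi_update(pi, greedy, alpha=0.1)
print(pi.round(3))   # rows converge toward the greedy one-hot rows
```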

Beyond first-order methods for non-convex non-concave min-max optimization

1 code implementation • 17 Apr 2023 • Abhijeet Vyas, Brian Bullins

We propose a study of structured non-convex non-concave min-max problems which goes beyond standard first-order approaches.

Federated Composite Saddle Point Optimization

no code implementations • 25 May 2023 • Site Bai, Brian Bullins

Federated learning (FL) approaches for saddle point problems (SPP) have recently gained popularity due to the critical role such problems play in machine learning (ML).

Tasks: Federated Learning
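
A saddle point problem asks for min over x of max over y of f(x, y), and the FedAvg-style baseline runs local gradient descent-ascent on each client between averaging rounds. A schematic sketch on toy strongly-convex-strongly-concave clients (the objectives, step size, and round counts are illustrative; the composite terms the paper targets are not modeled):

```python
import numpy as np

def local_gda(x, y, grads, K, eta):
    """K steps of gradient descent on x and gradient ascent on y."""
    for _ in range(K):
        gx, gy = grads(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

def fed_saddle(clients, rounds, K, eta, dim):
    """Each round, every client runs local GDA from the shared
    (x, y); the server then averages the returned iterates."""
    x, y = np.zeros(dim), np.zeros(dim)
    for _ in range(rounds):
        outs = [local_gda(x, y, g, K, eta) for g in clients]
        x = np.mean([o[0] for o in outs], axis=0)
        y = np.mean([o[1] for o in outs], axis=0)
    return x, y

# Toy clients: f_i(x, y) = 0.5||x||^2 + x^T A_i y - 0.5||y||^2 (saddle at 0).
rng = np.random.default_rng(0)
mats = [0.1 * rng.standard_normal((3, 3)) for _ in range(4)]
clients = [lambda x, y, A=A: (x + A @ y, A.T @ x - y) for A in mats]
x, y = fed_saddle(clients, rounds=20, K=5, eta=0.2, dim=3)
print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward 0
```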
