Search Results for author: Albert S. Berahas

Found 16 papers, 3 papers with code

Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients

no code implementations • 23 Apr 2024 • Sachin Garg, Albert S. Berahas, Michał Dereziński

We show that, for finite-sum minimization problems, incorporating partial second-order information of the objective function can dramatically improve the robustness to mini-batch size of variance-reduced stochastic gradient methods, making them more scalable while retaining their benefits over traditional Newton-type approaches.
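
As a point of reference for the abstract above, the sketch below shows a plain SVRG-style variance-reduced gradient loop on an illustrative least-squares finite sum; this is the classical first-order estimator that such methods start from, not the paper's second-order-augmented variant, and all names and constants are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))   # illustrative finite-sum least-squares data
b = rng.standard_normal(100)

def component_grad(i, x):
    # gradient of the i-th term 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / len(b)

def svrg_gradient(x, x_snap, g_snap, batch):
    # classical variance-reduced estimator: unbiased, with variance shrinking
    # as x approaches the snapshot point x_snap
    g = sum(component_grad(i, x) - component_grad(i, x_snap) for i in batch)
    return g / len(batch) + g_snap

x = np.zeros(5)
x_snap, g_snap = x.copy(), full_grad(x)
for k in range(200):
    batch = rng.choice(len(b), size=8, replace=False)
    x = x - 0.05 * svrg_gradient(x, x_snap, g_snap, batch)
    if (k + 1) % 50 == 0:           # refresh the snapshot periodically
        x_snap, g_snap = x.copy(), full_grad(x)
```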

Non-Uniform Smoothness for Gradient Descent

no code implementations • 15 Nov 2023 • Albert S. Berahas, Lindon Roberts, Fred Roosta

The analysis of gradient descent-type methods typically relies on the Lipschitz continuity of the objective gradient.
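
For context, the Lipschitz-gradient (L-smoothness) condition referred to above is usually stated as follows; this is the standard uniform assumption, not the non-uniform relaxation studied in the paper.

```latex
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y,
\qquad\text{which yields the descent lemma}\qquad
f(y) \le f(x) + \nabla f(x)^\top (y - x) + \tfrac{L}{2}\,\|y - x\|^2 .
```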

Adaptive Consensus: A network pruning approach for decentralized optimization

no code implementations • 6 Sep 2023 • Suhail M. Shah, Albert S. Berahas, Raghu Bollapragada

We consider network-based decentralized optimization problems, where each node in the network possesses a local function and the objective is to collectively attain a consensus solution that minimizes the sum of all the local functions.

Network Pruning
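
A minimal sketch of the classical decentralized gradient descent iteration that such consensus methods build on is given below, assuming a doubly-stochastic mixing matrix W supplied by the user; the paper's adaptive pruning of communication links is not reproduced here.

```python
import numpy as np

def decentralized_gradient_step(X, W, local_grads, alpha):
    """One consensus + local-gradient (DGD-style) iteration.
    X: (n_nodes, dim) stacked local iterates.
    W: (n_nodes, n_nodes) doubly-stochastic mixing matrix (W[i, j] > 0 only
       if nodes i and j are neighbours in the network).
    local_grads: list of callables, local_grads[i](x) = gradient of node i's function."""
    G = np.stack([local_grads[i](X[i]) for i in range(X.shape[0])])
    return W @ X - alpha * G   # average neighbours' iterates, then step locally
```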

Collaborative and Distributed Bayesian Optimization via Consensus: Showcasing the Power of Collaboration for Optimal Design

1 code implementation • 25 Jun 2023 • Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Blake N. Johnson

Empirically, through simulated datasets and a real-world collaborative sensor design experiment, we show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.

Bayesian Optimization

A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization

no code implementations • 1 Jan 2023 • Albert S. Berahas, Miaolan Xie, Baoyu Zhou

A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems.

Stochastic Optimization
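
For reference, each iteration of an equality-constrained SQP method of this type computes a trial step d_k from a quadratic subproblem of the standard form below, where g_k is a (stochastic) gradient estimate, H_k a symmetric positive-definite matrix, and c_k, J_k the constraint value and Jacobian at the current iterate.

```latex
d_k \in \arg\min_{d \in \mathbb{R}^n} \; g_k^\top d + \tfrac{1}{2}\, d^\top H_k\, d
\qquad \text{subject to} \qquad c_k + J_k d = 0 .
```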

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

1 code implementation • 24 Jun 2021 • Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, Daniel P. Robinson

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function.

Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations

no code implementations • 2 Jun 2020 • Zheng Shi, Nur Sila Gulgec, Albert S. Berahas, Shamim N. Pakzad, Martin Takáč

Discovering the underlying behavior of complex systems is an important topic in many science and engineering disciplines.

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1

no code implementations • 30 May 2019 • Majid Jahani, MohammadReza Nazari, Sergey Rusakov, Albert S. Berahas, Martin Takáč

In this paper, we present a scalable distributed implementation of the Sampled Limited-memory Symmetric Rank-1 (S-LSR1) algorithm.
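
For context, the symmetric rank-1 (SR1) update around which S-LSR1 is built has the standard form below for a curvature pair (s_k, y_k), and is applied only when the denominator is safely bounded away from zero; in the sampled variant the pairs come from points sampled around the current iterate rather than from past iterates.

```latex
B_{k+1} \;=\; B_k \;+\; \frac{(y_k - B_k s_k)(y_k - B_k s_k)^\top}{(y_k - B_k s_k)^\top s_k}.
```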

Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization

no code implementations • 29 May 2019 • Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg

We then demonstrate via rigorous analysis of the variance and by numerical comparisons on reinforcement learning tasks that the Gaussian sampling method used in [Salimans et al. 2017] is significantly inferior to the orthogonal sampling used in [Choromanski et al. 2018] as well as more general interpolation methods.

Reinforcement Learning (RL)
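
The two kinds of gradient estimators being compared can be sketched as below; these are textbook versions of Gaussian smoothing and of forward finite differences (linear interpolation on the coordinate basis), with sample counts and stepsizes chosen purely for illustration rather than taken from the paper.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, sigma=1e-2, n_samples=50, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed function:
    (1 / (n * sigma)) * sum_i [f(x + sigma*u_i) - f(x)] * u_i, with u_i ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + sigma * u) - fx) * u
    return g / (n_samples * sigma)

def forward_difference_grad(f, x, h=1e-6):
    """Forward finite differences: linear interpolation on the coordinate basis."""
    fx = f(x)
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = 1.0
        g[j] = (f(x + h * e) - fx) / h
    return g

f = lambda x: np.sum(x ** 2)            # illustrative smooth test function
x0 = np.ones(5)
print(forward_difference_grad(f, x0))   # close to the true gradient 2*x0
print(gaussian_smoothing_grad(f, x0))   # noisier estimate of the smoothed gradient
```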

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

no code implementations • 3 May 2019 • Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, Katya Scheinberg

To this end, we use the results in [Berahas et al., 2019] and show how each method can satisfy the sufficient conditions, possibly only with some sufficiently large probability at each iteration, as happens to be the case with Gaussian smoothing and smoothing on a sphere.

Optimization and Control

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample

1 code implementation • 28 Jan 2019 • Albert S. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč

We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning.

Benchmarking, BIG-bench Machine Learning +3
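
Both the sampled and the classical L-BFGS methods apply their curvature pairs through the standard two-loop recursion; a minimal sketch is below, with the (s, y) pairs assumed to be given (in the sampled variant they are generated from points sampled around the current iterate rather than from past iterates).

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation to H * grad,
    where H is the inverse-Hessian approximation built from the (s, y) pairs."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q = q - a * y
    # initial Hessian scaling gamma = s^T y / y^T y from the most recent pair
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, r)
        r = r + (a - b) * s
    return r   # the search direction would be -r
```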

A Robust Multi-Batch L-BFGS Method for Machine Learning

no code implementations • 26 Jul 2017 • Albert S. Berahas, Martin Takáč

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations.

BIG-bench Machine Learning, Binary Classification +1

An Investigation of Newton-Sketch and Subsampled Newton Methods

no code implementations • 17 May 2017 • Albert S. Berahas, Raghu Bollapragada, Jorge Nocedal

Sketching, a dimensionality reduction technique, has received much attention in the statistics community.

Dimensionality Reduction
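
A generic subsampled Newton step of the kind compared in this line of work can be sketched as below: the Hessian is averaged over a random subset of the finite-sum components while the full gradient is used on the right-hand side; the names and the regularizer are illustrative, and the sketched-Hessian (Newton-Sketch) variant is not shown.

```python
import numpy as np

def subsampled_newton_step(x, full_grad, component_hessians, sample, reg=1e-8):
    """Solve (H_S + reg*I) d = -grad, where H_S averages the Hessians of a
    random subset 'sample' of the finite-sum components."""
    H = sum(component_hessians[i](x) for i in sample) / len(sample)
    H = H + reg * np.eye(x.size)          # small regularizer for numerical safety
    return np.linalg.solve(H, -full_grad(x))
```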

A Multi-Batch L-BFGS Method for Machine Learning

no code implementations • NeurIPS 2016 • Albert S. Berahas, Jorge Nocedal, Martin Takáč

The question of how to parallelize the stochastic gradient descent (SGD) method has received much attention in the literature.

BIG-bench Machine Learning, Distributed Computing

adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs

no code implementations • 4 Nov 2015 • Nitish Shirish Keskar, Albert S. Berahas

In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs.

Language Modelling
