Search Results for author: Frank E. Curtis

Found 12 papers, 7 papers with code

Almost-sure convergence of iterates and multipliers in stochastic sequential quadratic optimization

no code implementations • 7 Aug 2023 • Frank E. Curtis, Xin Jiang, Qi Wang

In this paper, new almost-sure convergence guarantees are proved for the primal iterates, Lagrange multipliers, and stationarity measures generated by a stochastic SQP algorithm within this subclass of methods.

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

no code implementations • 28 Apr 2023 • Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, Qi Wang

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results.
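As a rough illustration of this setting only (not the algorithm proposed in the paper), the sketch below applies a stochastic gradient to a logarithmic-barrier reformulation of a bound-constrained problem; the objective, noise model, barrier parameter, and step sizes are all assumptions.

```python
import numpy as np

def barrier_sgd(grad_est, l, u, x0, mu=0.1, alpha=0.01, iters=500, rng=None):
    """Illustrative log-barrier + stochastic-gradient loop for min f(x), l <= x <= u.

    grad_est(x, rng) returns a noisy estimate of grad f(x).  This is only a sketch
    of the problem setting, NOT the interior-point algorithm from the paper.
    """
    rng = rng or np.random.default_rng(0)
    x = np.clip(x0, l + 1e-3, u - 1e-3)                 # start strictly inside the bounds
    for _ in range(iters):
        g = grad_est(x, rng)
        g_bar = -mu / (x - l) + mu / (u - x)            # gradient of -mu*sum(log(x-l)+log(u-x))
        step = alpha * (g + g_bar)
        t = 1.0                                         # fraction-to-boundary damping
        pos, neg = step > 1e-16, step < -1e-16
        if np.any(pos):
            t = min(t, 0.9 * np.min((x - l)[pos] / step[pos]))
        if np.any(neg):
            t = min(t, 0.9 * np.min((u - x)[neg] / (-step[neg])))
        x = x - t * step
    return x

# toy usage: minimize E[0.5 * ||x - (target + noise)||^2] subject to 0 <= x <= 1;
# the unconstrained minimizer lies partly outside the bounds
target = np.array([0.3, 1.5, -0.2])
grad = lambda x, rng: (x - target) + 0.05 * rng.standard_normal(x.size)
print(barrier_sgd(grad, l=np.zeros(3), u=np.ones(3), x0=0.5 * np.ones(3)))
```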

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

1 code implementation • 24 Jun 2021 • Albert S. Berahas, Frank E. Curtis, Michael J. O'Neill, Daniel P. Robinson

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function.
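To make the subproblem concrete, here is a minimal sketch of a generic SQP step for min f(x) subject to c(x) = 0 with a stochastic gradient estimate; the least-squares KKT solve is just one simple way to tolerate a rank-deficient Jacobian and is not the decomposition analyzed in the paper.

```python
import numpy as np

def sqp_step(g_est, c_val, J, H=None):
    """One illustrative SQP step for  min f(x)  s.t.  c(x) = 0.

    g_est : stochastic estimate of grad f(x)
    c_val : constraint values c(x)
    J     : constraint Jacobian at x
    H     : symmetric positive-definite Hessian model (identity if None)

    Solves the QP  min g'd + 0.5 d'Hd  s.t.  c + Jd = 0  through its KKT system,
    using a least-squares solve so a rank-deficient Jacobian does not cause failure.
    Generic sketch only, not the method proposed in the paper.
    """
    n, m = g_est.size, c_val.size
    H = np.eye(n) if H is None else H
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g_est, c_val])
    sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return sol[:n], sol[n:]                   # step and Lagrange-multiplier estimate

# toy usage: f(x) = 0.5*||x||^2 with a duplicated (rank-deficient) linear constraint
x = np.array([2.0, 0.0])
J = np.array([[1.0, 1.0], [1.0, 1.0]])        # rank-deficient Jacobian
c = np.array([x.sum() - 1.0, x.sum() - 1.0])
d, y = sqp_step(g_est=x + 0.01, c_val=c, J=J) # noisy gradient of 0.5*||x||^2
print("step:", d, "multipliers:", y)
```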

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

1 code implementation • 29 Jul 2020 • Frank E. Curtis, Yutong Dai, Daniel P. Robinson

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer.

Optimization and Control • 49M37, 65K05, 65K10, 65Y20, 68Q25, 90C30, 90C60
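For context, the sketch below solves a group-lasso instance of this problem with plain proximal gradient and block soft-thresholding; it is a baseline illustration only, not the subspace acceleration method of the paper, and the data, groups, and penalty weight are assumptions.

```python
import numpy as np

def prox_group_l2(x, groups, thresh):
    """Block soft-thresholding: prox of  thresh * sum_g ||x_g||_2."""
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * x[g]
    return out

def group_lasso_proxgrad(A, b, groups, lam=0.5, iters=500):
    """Proximal gradient for  0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2  (baseline sketch)."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L step size for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_group_l2(x - alpha * A.T @ (A @ x - b), groups, lam * alpha)
    return x

# toy usage: two coordinate groups, only the first group is truly active
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(group_lasso_proxgrad(A, b, groups=[slice(0, 3), slice(3, 6)]))
```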

Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization

1 code implementation • 20 Jul 2020 • Albert Berahas, Frank E. Curtis, Daniel P. Robinson, Baoyu Zhou

It is assumed in this setting that it is intractable to compute objective function and derivative values explicitly, although one can compute stochastic function and gradient estimates.

Stochastic Optimization
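A common way to obtain such stochastic function and gradient estimates is mini-batch sampling, shown here only as an illustration with an assumed finite-sum least-squares objective standing in for the expectation.

```python
import numpy as np

def minibatch_estimates(X, y, w, batch_size, rng):
    """Unbiased mini-batch estimates of a least-squares objective and its gradient.

    Assumed objective: f(w) = (1/N) * sum_i 0.5 * (x_i'w - y_i)^2, used here as a
    stand-in for the expectation objective in the paper's setting.
    """
    idx = rng.choice(X.shape[0], size=batch_size, replace=False)
    r = X[idx] @ w - y[idx]                   # residuals on the sampled batch
    f_est = 0.5 * np.mean(r ** 2)             # stochastic objective estimate
    g_est = X[idx].T @ r / batch_size         # stochastic gradient estimate
    return f_est, g_est

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = X @ np.arange(1.0, 6.0) + 0.1 * rng.standard_normal(1000)
print(minibatch_estimates(X, y, w=np.zeros(5), batch_size=32, rng=rng))
```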

Gradient Sampling Methods with Inexact Subproblem Solutions and Gradient Aggregation

1 code implementation • 15 May 2020 • Frank E. Curtis, Minhan Li

In this paper, a strategy is proposed that allows the use of inexact solutions of these subproblems; as proved in the paper, such solutions can be incorporated without loss of theoretical convergence guarantees.

Optimization and Control
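Gradient sampling builds a search direction from gradients evaluated at randomly sampled nearby points, which requires a small quadratic subproblem for the minimum-norm element of their convex hull. The sketch below uses a few Frank-Wolfe iterations as a stand-in inexact solver for that subproblem; it is a generic illustration, not the algorithm analyzed in the paper, and the sampling radius and iteration counts are assumptions.

```python
import numpy as np

def gs_direction(grad, x, radius=0.1, num_samples=10, qp_iters=20, rng=None):
    """Illustrative gradient-sampling direction with an inexactly solved subproblem.

    Gradients are sampled near x; the minimum-norm element of their convex hull is
    then approximated with a few Frank-Wolfe steps (the 'inexact' QP solve).
    Generic sketch only, not the algorithm analyzed in the paper.
    """
    rng = rng or np.random.default_rng(0)
    pts = x + radius * rng.uniform(-1.0, 1.0, size=(num_samples, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])   # one sampled gradient per row
    lam = np.full(G.shape[0], 1.0 / G.shape[0])         # convex weights, start at the center
    for k in range(qp_iters):                           # Frank-Wolfe on the simplex
        m = lam @ G                                      # current combination of gradients
        i = int(np.argmin(G @ m))                        # vertex minimizing the linearization
        gamma = 2.0 / (k + 2.0)
        lam *= 1.0 - gamma
        lam[i] += gamma
    return -(lam @ G)                                    # approximate steepest-descent direction

# toy usage on the nonsmooth function f(x) = ||x||_1
subgrad_l1 = lambda x: np.sign(x) + (x == 0.0)           # a subgradient of the 1-norm
print(gs_direction(subgrad_l1, x=np.array([0.5, -0.2, 0.0])))
```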

A Stochastic Trust Region Algorithm Based on Careful Step Normalization

no code implementations • 29 Dec 2017 • Frank E. Curtis, Katya Scheinberg, Rui Shi

An algorithm is proposed for solving stochastic and finite sum minimization problems.
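As a rough illustration of step normalization (not necessarily the authors' exact rule), the update below rescales the stochastic gradient so that moderate gradients yield a step of fixed length while very small or very large gradients are only scaled by bounded factors; the parameter values are assumptions.

```python
import numpy as np

def normalized_step(x, g, alpha=0.1, gamma1=4.0, gamma2=0.5):
    """One trust-region-like normalized stochastic-gradient step (illustration only).

    The rescaling factor 1/||g|| is clipped to [gamma2, gamma1]: moderate gradients
    give a step of length exactly alpha, while very small or very large gradients
    are only rescaled by a bounded factor.  Parameter values are assumptions.
    """
    scale = np.clip(1.0 / max(np.linalg.norm(g), 1e-12), gamma2, gamma1)
    return x - alpha * scale * g

x = np.array([1.0, -2.0])
print(normalized_step(x, g=np.array([1e-3, 0.0])))   # tiny gradient: rescaling capped at gamma1
print(normalized_step(x, g=np.array([50.0, 0.0])))   # huge gradient: rescaling capped at gamma2
```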

An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning

1 code implementation • 14 Nov 2017 • Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takáč

In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $\mathcal{O}(1/t^2)$ in terms of reducing suboptimality.

BIG-bench Machine Learning • Distributed Optimization

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning

1 code implementation • 30 Jun 2017 • Frank E. Curtis, Katya Scheinberg

We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks.

BIG-bench Machine Learning • regression • +1
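Since logistic regression is one of the running examples, here is a minimal mini-batch SGD training loop for binary logistic regression; the data, step size, and batch size are assumptions, and the snippet is not taken from the paper.

```python
import numpy as np

def logreg_sgd(X, y, alpha=0.1, batch_size=16, epochs=20, rng=None):
    """Mini-batch SGD for binary logistic regression with labels y in {0, 1}."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch_size):
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))             # predicted probabilities
            w -= alpha * X[idx].T @ (p - y[idx]) / idx.size   # gradient of the average log loss
    return w

# toy usage: nearly linearly separable synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) + 0.2 * rng.standard_normal(500) > 0).astype(float)
w = logreg_sgd(X, y)
print("train accuracy:", np.mean(((X @ w) > 0) == y))
```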

Optimization Methods for Large-Scale Machine Learning

4 code implementations • 15 Jun 2016 • Léon Bottou, Frank E. Curtis, Jorge Nocedal

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications.

BIG-bench Machine Learning • Text Classification

Primal-Dual Active-Set Methods for Isotonic Regression and Trend Filtering

no code implementations • 10 Aug 2015 • Zheng Han, Frank E. Curtis

Isotonic regression (IR) is a non-parametric calibration method used in supervised learning.

regression
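For background, the classical pool-adjacent-violators algorithm (PAVA) below fits an isotonic regression; it is not the primal-dual active-set method proposed in the paper, and the example data are made up.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators algorithm for unweighted isotonic regression.

    Returns the nondecreasing sequence x minimizing sum_i (x_i - y_i)^2.
    Classical baseline; the paper's primal-dual active-set method is different.
    """
    vals, weights, sizes = [], [], []
    for v in y:
        vals.append(float(v)); weights.append(1.0); sizes.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = weights[-2] + weights[-1]
            vals[-2] = (weights[-2] * vals[-2] + weights[-1] * vals[-1]) / w
            weights[-2] = w
            sizes[-2] += sizes[-1]
            vals.pop(); weights.pop(); sizes.pop()
    return np.concatenate([np.full(s, v) for v, s in zip(vals, sizes)])

# pools the violating pairs: [1, 2.5, 2.5, 3.75, 3.75, 5]
print(pava([1.0, 3.0, 2.0, 4.0, 3.5, 5.0]))
```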
