Search Results for author: Stephen J. Wright

Found 15 papers, 4 papers with code

On the Complexity of a Practical Primal-Dual Coordinate Method

no code implementations · 19 Jan 2022 · Ahmet Alacaoglu, Volkan Cevher, Stephen J. Wright

We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD), which has been shown to obtain good practical performance for solving convex-concave min-max problems with bilinear coupling.
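A minimal sketch of the general scheme this line describes, a primal-dual method with randomized coordinate updates and per-coordinate extrapolation on a toy bilinear-coupled saddle problem. This is not the authors' exact PURE-CD algorithm; the problem, step sizes, and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy convex-concave problem with bilinear coupling:
#   min_x max_y  0.5||x||^2 + y^T A x - 0.5||y||^2,
# whose unique saddle point is x = 0, y = 0.
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
tau = sigma = 0.1          # primal/dual step sizes (assumed, not tuned)

for _ in range(5000):
    j = rng.integers(3)                     # random primal coordinate
    x_old_j = x[j]
    x[j] -= tau * (x[j] + A[:, j] @ y)      # coordinate gradient step on x_j
    x_bar = x.copy()
    x_bar[j] = 2 * x[j] - x_old_j           # extrapolate only the updated coordinate
    y += sigma * (A @ x_bar - y)            # full dual ascent step

residual = np.linalg.norm(x) + np.linalg.norm(y)  # distance to the saddle point
```

With small steps the iterates contract toward the saddle point (0, 0), which is what the complexity bounds quantify for the real algorithm.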

Coordinate Linear Variance Reduction for Generalized Linear Programming

1 code implementation · 2 Nov 2021 · Chaobing Song, Cheuk Yin Lin, Stephen J. Wright, Jelena Diakonikolas

CLVR yields improved complexity results for (GLP) that depend on the max row norm of the linear constraint matrix in (GLP) rather than the spectral norm.
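A small numerical illustration of why that distinction matters: for structured constraint matrices the max row norm can be far smaller than the spectral norm. The matrix below is an assumed toy example, not one from the paper.

```python
import numpy as np

# A 100x100 matrix whose every row is e_1^T (one dense column of ones).
A = np.zeros((100, 100))
A[:, 0] = 1.0

max_row_norm = np.max(np.linalg.norm(A, axis=1))       # each row has norm 1
spectral_norm = np.linalg.svd(A, compute_uv=False)[0]  # ||A||_2 = 10
```

A complexity bound scaling with the max row norm (1) instead of the spectral norm (10) is a factor-of-ten improvement on this instance, and the gap can be made arbitrarily large.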

Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums

no code implementations · 26 Feb 2021 · Chaobing Song, Stephen J. Wright, Jelena Diakonikolas

We study structured nonsmooth convex finite-sum optimization that appears widely in machine learning applications, including support vector machines and least absolute deviation.
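One concrete member of this problem class is least absolute deviation (LAD) regression. The sketch below solves a toy noiseless LAD instance with a plain subgradient method, purely to show the nonsmooth finite-sum structure; the paper's algorithm is an accelerated variance-reduced method, not this baseline. Data and step schedule are assumptions.

```python
import numpy as np

# Least absolute deviation: min_x (1/n) * sum_i |a_i^T x - b_i|
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                     # noiseless, so x_true is a minimizer

x = np.zeros(3)
for k in range(1, 3001):
    # Subgradient of the LAD loss: (1/n) * A^T sign(Ax - b)
    g = A.T @ np.sign(A @ x - b) / len(b)
    x -= (1.0 / np.sqrt(k)) * g    # diminishing step size

err = np.linalg.norm(x - x_true)
```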

Random Coordinate Underdamped Langevin Monte Carlo

no code implementations · 22 Oct 2020 · Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright

We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions.

Random Coordinate Langevin Monte Carlo

no code implementations · 3 Oct 2020 · Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright

We investigate the total complexity of RC-LMC and compare it with the classical LMC for log-concave probability distributions.
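A hedged sketch of the random-coordinate Langevin idea for a strongly log-concave target: each iteration applies a discretized Langevin update to a single randomly chosen coordinate rather than the full state. The standard-Gaussian target, step size, and burn-in length are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Target: standard Gaussian, potential U(x) = ||x||^2 / 2, so dU/dx_i = x_i.
rng = np.random.default_rng(2)
d, h = 3, 0.05                    # dimension and step size (assumed)
x = np.zeros(d)
samples = []
for k in range(20000):
    i = rng.integers(d)           # pick one coordinate at random
    # Discretized Langevin step on coordinate i only:
    x[i] += -h * x[i] + np.sqrt(2 * h) * rng.standard_normal()
    if k > 2000:                  # discard burn-in
        samples.append(x.copy())

# Each coordinate of the target has unit variance; the chain should match it.
var_est = np.var(np.array(samples), axis=0)
```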

Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity

no code implementations · 28 May 2020 · Nam Ho-Nguyen, Stephen J. Wright

Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.

Classification · General Classification

Interleaved Composite Quantization for High-Dimensional Similarity Search

no code implementations · 18 Dec 2019 · Soroosh Khoram, Stephen J. Wright, Jing Li

A method often used to reduce this computational cost is quantization of the vector space and location-based encoding of the dataset vectors.
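A minimal sketch of that general idea, plain vector quantization with code-based lookup, not the paper's interleaved composite quantizer: each dataset vector is replaced by the id of its nearest codeword, and query distances are approximated against codewords instead of raw vectors. Codebook size and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.standard_normal((200, 4))
# Toy codebook: 8 codewords sampled from the data (a real system would train them).
codebook = data[rng.choice(200, 8, replace=False)]

# Encoding: store one small integer per vector instead of 4 floats.
sq_dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = np.argmin(sq_dists, axis=1)

query = rng.standard_normal(4)
# Query time: compute 8 query-to-codeword distances once, then answer
# by table lookup per encoded vector -- no full distance computations.
q_dist = ((codebook - query) ** 2).sum(-1)
approx_nn = int(np.argmin(q_dist[codes]))
```

The saving is that query cost scales with the codebook size, not the dataset dimensionality, which is the computational cost the snippet above refers to.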

A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization

1 code implementation · 12 Dec 2019 · Ching-pei Lee, Cong Han Lim, Stephen J. Wright

When applied to the distributed dual ERM problem, unlike state-of-the-art methods that use only the block-diagonal part of the Hessian, our approach is able to utilize global curvature information and is thus orders of magnitude faster.

Distributed Optimization

A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization

1 code implementation · 4 Mar 2018 · Ching-pei Lee, Cong Han Lim, Stephen J. Wright

Initial computational results on convex problems demonstrate that our method significantly improves on communication cost and running time over the current state-of-the-art methods.

Distributed Optimization

Training Set Debugging Using Trusted Items

no code implementations · 24 Jan 2018 · Xuezhou Zhang, Xiaojin Zhu, Stephen J. Wright

The set of trusted items may not by itself be adequate for learning, so we propose an algorithm that uses these items to identify bugs in the training set and thus improves learning.

Bilevel Optimization · Machine Learning

Online Algorithms for Factorization-Based Structure from Motion

no code implementations · 26 Sep 2013 · Ryan Kennedy, Laura Balzano, Stephen J. Wright, Camillo J. Taylor

We present a family of online algorithms for real-time factorization-based structure from motion, leveraging a relationship between incremental singular value decomposition and recently proposed methods for online matrix completion.
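A hedged sketch of the incremental-SVD building block the snippet mentions: folding a new data column into an existing thin SVD without refactorizing from scratch. This is the standard rank-one append update, not the authors' full structure-from-motion pipeline; matrix sizes are arbitrary.

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update the thin SVD U @ diag(s) @ Vt of M to the SVD of [M | c]."""
    p = U.T @ c                       # component of c inside the current subspace
    r = c - U @ p                     # component orthogonal to it
    rho = np.linalg.norm(r)
    # Small core matrix whose SVD gives the updated factors.
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, len(s))), np.array([[rho]])]])
    Uk, sk, Vtk = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, (r / max(rho, 1e-12))[:, None]]) @ Uk
    V_big = np.vstack([np.hstack([Vt.T, np.zeros((Vt.shape[1], 1))]),
                       np.hstack([np.zeros(len(s)), [1.0]])])
    return U_new, sk, (V_big @ Vtk.T).T

rng = np.random.default_rng(6)
M = rng.standard_normal((6, 3))
U, s, Vt = np.linalg.svd(M, full_matrices=False)
c = rng.standard_normal(6)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
recon = U2 @ np.diag(s2) @ Vt2        # should reproduce [M | c] exactly
```

Because only the small core matrix K is refactorized, each new observation costs far less than a full SVD, which is what makes real-time factorization feasible.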

Matrix Completion

On GROUSE and Incremental SVD

no code implementations · 21 Jul 2013 · Laura Balzano, Stephen J. Wright

GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of R^n from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration.
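A simplified sketch in the spirit of GROUSE, assuming noiseless data: fit each partially observed vector on the current subspace estimate, take a rank-one gradient step on the residual, and re-orthonormalize. The exact GROUSE geodesic update is replaced here by a plain gradient step plus QR, and all sizes and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 3
U_true, _ = np.linalg.qr(rng.standard_normal((n, d)))  # ground-truth subspace
U, _ = np.linalg.qr(rng.standard_normal((n, d)))       # current estimate

for _ in range(2000):
    v = U_true @ rng.standard_normal(d)        # a vector from the subspace
    omega = rng.choice(n, 10, replace=False)   # only these entries are revealed
    # Best fit of the observed entries on the current subspace.
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)
    r = np.zeros(n)
    r[omega] = v[omega] - U[omega] @ w         # residual on observed entries
    U += 0.1 * np.outer(r, w)                  # rank-one gradient step
    U, _ = np.linalg.qr(U)                     # re-orthonormalize

# Alignment: singular values of U^T U_true are all near 1 iff the
# estimated subspace matches the true one.
align = np.linalg.svd(U.T @ U_true, compute_uv=False)
```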

Robust Dequantized Compressive Sensing

no code implementations · 3 Jul 2012 · Ji Liu, Stephen J. Wright

We consider the reconstruction problem in compressed sensing in which the observations are recorded in a finite number of bits.

Compressive Sensing · Quantization

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

5 code implementations · 28 Jun 2011 · Feng Niu, Benjamin Recht, Christopher Re, Stephen J. Wright

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks.
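A toy sketch of the HOGWILD! idea: several threads run SGD on a shared weight vector with no locking, relying on sparse per-example updates that rarely collide. CPython threads are only illustrative of the paper's true shared-memory setting, and the linear model, sparsity level, and learning rate are assumptions.

```python
import threading
import numpy as np

rng = np.random.default_rng(5)
n_feat = 100
w_true = rng.standard_normal(n_feat)   # ground-truth linear model
w = np.zeros(n_feat)                   # shared state, updated without locks

def worker(seed):
    r = np.random.default_rng(seed)
    for _ in range(2000):
        idx = r.choice(n_feat, 5, replace=False)   # sparse example: 5 features
        x = r.standard_normal(5)
        y = x @ w_true[idx]                        # noiseless linear label
        grad = (x @ w[idx] - y) * x                # squared-loss gradient
        w[idx] -= 0.05 * grad                      # unsynchronized write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rel_err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

Because each update touches only 5 of 100 coordinates, concurrent writes seldom overlap, which is the sparsity condition under which the paper shows lock-free SGD still converges.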
