no code implementations • 16 Jan 2023 • Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang

In their seminal work, Polyak and Juditsky showed that stochastic approximation algorithms for solving smooth equations enjoy a central limit theorem.
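As a minimal sketch of the iterate-averaging scheme behind this line of work (a toy one-dimensional SGD with Polyak–Juditsky averaging; the quadratic objective, step-size exponent, and noise level are illustrative assumptions, not the paper's setting):

```python
import random

def averaged_sgd(grad, x0, steps, lr0=0.1, seed=0):
    """Stochastic approximation with Polyak-Juditsky iterate averaging:
    run SGD with a slowly decaying step size and return the running
    average of the iterates, which is the quantity the CLT describes."""
    rng = random.Random(seed)
    x, avg = x0, 0.0
    for k in range(1, steps + 1):
        noisy_grad = grad(x) + rng.gauss(0.0, 0.1)  # noisy gradient oracle
        x -= lr0 / k**0.6 * noisy_grad              # step size ~ k^{-0.6}
        avg += (x - avg) / k                        # running iterate average
    return avg

# Minimize f(x) = (x - 3)^2 / 2, whose gradient is x - 3.
x_bar = averaged_sgd(lambda x: x - 3.0, x0=0.0, steps=5000)
```

The averaged iterate concentrates around the solution even though the raw iterates keep fluctuating at the scale of the step size.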

no code implementations • 4 Oct 2021 • Damek Davis, Mateo Díaz, Kaizheng Wang

We investigate a clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix.

no code implementations • 26 Aug 2021 • Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang

We show that the subgradient method converges only to local minimizers when applied to generic Lipschitz continuous and subdifferentially regular functions that are definable in an o-minimal structure.

no code implementations • 17 Jun 2021 • Damek Davis, Mateo Díaz, Dmitriy Drusvyatskiy

The main conclusion is that a variety of algorithms for nonsmooth optimization can escape strict saddle points of the Moreau envelope at a controlled rate.

no code implementations • 16 Dec 2019 • Damek Davis, Dmitriy Drusvyatskiy

We introduce a geometrically transparent strict saddle property for nonsmooth functions.

no code implementations • 31 Jul 2019 • Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation.

1 code implementation • 22 Jul 2019 • Damek Davis, Dmitriy Drusvyatskiy, Vasileios Charisopoulos

In this work, we ask whether geometric step decay similarly improves stochastic algorithms for the class of sharp nonconvex problems.
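A minimal deterministic sketch of the geometric step decay schedule studied here (the sharp objective |x|, the inner-loop length, and the decay factor are illustrative assumptions, and the paper's interest is in the stochastic variant):

```python
def geometric_decay_subgradient(subgrad, x0, epochs, inner, lr0=1.0, q=0.5):
    """Subgradient method with geometric step decay: run `inner` steps at a
    fixed step size, then shrink the step size by the factor q and repeat.
    On sharp problems this schedule converges linearly in the epoch count."""
    x, lr = x0, lr0
    for _ in range(epochs):
        for _ in range(inner):
            x -= lr * subgrad(x)
        lr *= q
    return x

# Sharp nonsmooth problem f(x) = |x|, with subgradient sign(x).
x_final = geometric_decay_subgradient(lambda x: (x > 0) - (x < 0), x0=5.0,
                                      epochs=20, inner=10)
```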

no code implementations • 22 Apr 2019 • Vasileios Charisopoulos, Yudong Chen, Damek Davis, Mateo Díaz, Lijun Ding, Dmitriy Drusvyatskiy

The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science.

1 code implementation • 6 Jan 2019 • Vasileios Charisopoulos, Damek Davis, Mateo Díaz, Dmitriy Drusvyatskiy

The blind deconvolution problem seeks to recover a pair of vectors from a set of rank one bilinear measurements.
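A sketch of the rank-one bilinear measurement model described here (the Gaussian design and the helper name are illustrative assumptions): each observation has the form $b_i = \langle \ell_i, w\rangle \langle r_i, x\rangle$.

```python
import random

def bilinear_measurements(w, x, num, seed=0):
    """Generate rank-one bilinear measurements b_i = <l_i, w> * <r_i, x>
    of the signal pair (w, x) with standard Gaussian vectors l_i, r_i."""
    rng = random.Random(seed)
    d = len(w)
    meas = []
    for _ in range(num):
        l = [rng.gauss(0.0, 1.0) for _ in range(d)]
        r = [rng.gauss(0.0, 1.0) for _ in range(d)]
        b = (sum(li * wi for li, wi in zip(l, w))
             * sum(ri * xi for ri, xi in zip(r, x)))
        meas.append((l, r, b))
    return meas

m1 = bilinear_measurements([1.0, 2.0], [3.0, 1.0], num=5)
m2 = bilinear_measurements([2.0, 4.0], [1.5, 0.5], num=5)  # (2w, x/2)
```

The second call illustrates the inherent scaling ambiguity of the model: the pairs $(w, x)$ and $(cw, x/c)$ produce identical measurements, so the pair is only recoverable up to this scaling.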

no code implementations • 17 Oct 2018 • Damek Davis, Dmitriy Drusvyatskiy

We investigate the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex.

no code implementations • 12 Oct 2018 • Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis

Recent results established that EM enjoys global convergence for Gaussian Mixture Models.
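A minimal sketch of the EM iteration in the simplest such setting (a balanced two-component mixture $\tfrac12 N(\theta, 1) + \tfrac12 N(-\theta, 1)$ in one dimension; the toy data are an illustrative assumption):

```python
import math

def em_step(theta, data):
    """One EM update for the balanced mixture 0.5 N(theta,1) + 0.5 N(-theta,1).
    E-step: responsibility of the +theta component is 1/(1 + exp(-2*theta*y)).
    M-step: re-estimate theta as the responsibility-weighted signed mean."""
    total = 0.0
    for y in data:
        w = 1.0 / (1.0 + math.exp(-2.0 * theta * y))  # E-step
        total += (2.0 * w - 1.0) * y                  # w*y + (1-w)*(-y)
    return total / len(data)                          # M-step

data = [1.2, -0.8, 2.1, -1.9, 0.9, -1.1]
theta = 0.5
for _ in range(100):
    theta = em_step(theta, data)
```

Starting from any positive initialization, the iterates increase toward a fixed point of the EM map; the global convergence results cited above make this precise for the population iteration.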

no code implementations • 1 Jul 2018 • Damek Davis, Dmitriy Drusvyatskiy, Kellie J. MacPhee

Given a nonsmooth, nonconvex minimization problem, we consider algorithms that iteratively sample and minimize stochastic convex models of the objective function.

1 code implementation • 20 Apr 2018 • Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, Jason D. Lee

This work considers the question: what convergence guarantees does the stochastic subgradient method have in the absence of smoothness and convexity?

no code implementations • 17 Mar 2018 • Damek Davis, Dmitriy Drusvyatskiy

We consider a family of algorithms that successively sample and minimize simple stochastic models of the objective function.

2 code implementations • 8 Feb 2018 • Damek Davis, Dmitriy Drusvyatskiy

We prove that the proximal stochastic subgradient method, applied to a weakly convex problem, drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$.
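The stationarity measure in this result is the gradient of the Moreau envelope, which can be computed in closed form for simple functions. A sketch for $f(x) = |x|$ (the choice of $f$ is an illustrative assumption; the proximal map of $|\cdot|$ is soft-thresholding):

```python
def moreau_grad_abs(x, lam):
    """Gradient of the Moreau envelope of f(x) = |x| at x with parameter lam.
    The proximal point is the soft-threshold of x, and the envelope gradient
    equals (x - prox(x)) / lam; its norm is the stationarity measure that
    the proximal stochastic subgradient method drives to zero."""
    sign = 1.0 if x > 0 else -1.0 if x < 0 else 0.0
    prox = max(abs(x) - lam, 0.0) * sign  # soft-thresholding
    return (x - prox) / lam
```

Far from the kink the envelope gradient matches the subgradient sign(x), while near zero it interpolates smoothly, which is what makes it a usable measure of near-stationarity for nonsmooth problems.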

no code implementations • 12 Jul 2017 • Damek Davis, Benjamin Grimmer

In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class that includes the additive and convex composite classes.

no code implementations • NeurIPS 2016 • Damek Davis, Brent Edmunds, Madeleine Udell

We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems.

no code implementations • 4 Oct 2016 • Aleksandr Aravkin, Damek Davis

In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains.
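A toy sketch of the alternating trim-and-fit idea (location estimation with squared loss stands in for a general model fit; the function name and the specific data are illustrative assumptions, not the paper's formulation):

```python
def trimmed_mean_fit(data, keep, iters=20):
    """Trimmed estimation sketch: alternately (1) keep the `keep` samples
    with smallest loss under the current model and (2) refit the model on
    those samples. Here the 'model' is a scalar location estimate."""
    mu = sum(data) / len(data)
    for _ in range(iters):
        inliers = sorted(data, key=lambda y: (y - mu) ** 2)[:keep]
        mu = sum(inliers) / keep
    return mu

clean = [1.0, 1.1, 0.9, 1.05, 0.95]
contaminated = clean + [10.0, -6.0]  # two gross outliers
mu = trimmed_mean_fit(contaminated, keep=5)
```

The outliers pull the plain mean away from 1, but after trimming they are excluded from the refit and the estimate settles on the uncontaminated samples.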

no code implementations • 9 Jul 2016 • Rajiv Kumar, Oscar López, Damek Davis, Aleksandr Y. Aravkin, Felix J. Herrmann

Acquisition cost is a crucial bottleneck for seismic workflows, and low-rank formulations for data interpolation allow practitioners to 'fill in' data volumes from critically subsampled data acquired in the field.

no code implementations • CVPR 2015 • Jingming Dong, Nikolaos Karianakis, Damek Davis, Joshua Hernandez, Jonathan Balzer, Stefano Soatto

We frame the problem of local representation of imaging data as the computation of minimal sufficient statistics that are invariant to nuisance variability induced by viewpoint and illumination.

no code implementations • 5 May 2015 • Damek Davis

The ordered weighted $\ell_1$ (OWL) norm is a newly developed generalization of the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) norm.
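The OWL norm itself is simple to evaluate: sort the absolute entries in decreasing order and take a weighted sum against a nonincreasing weight vector. A minimal sketch:

```python
def owl_norm(x, w):
    """Ordered weighted l1 (OWL) norm: sum of the entries of |x|, sorted in
    decreasing order, weighted by the nonincreasing vector w. With all
    weights equal to 1 it reduces to the l1 norm; with w = (1, 0, ..., 0)
    it reduces to the l-infinity norm."""
    mags = sorted((abs(v) for v in x), reverse=True)
    return sum(wi * mi for wi, mi in zip(w, mags))
```

The OSCAR norm is the special case in which the weights decrease linearly, which is what induces the clustering of coefficient magnitudes.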

no code implementations • CVPR 2014 • Damek Davis, Jonathan Balzer, Stefano Soatto

We introduce an asymmetric sparse approximate embedding optimized for fast kernel comparison operations arising in large-scale visual search.

no code implementations • 23 Nov 2013 • Jingming Dong, Jonathan Balzer, Damek Davis, Joshua Hernandez, Stefano Soatto

We propose an extension of popular descriptors based on gradient orientation histograms (HOG, computed in a single image) to multiple views.

Papers With Code is a free resource with all data licensed under CC-BY-SA.