no code implementations • 23 Mar 2020 • Sina Baghal, Courtney Paquette, Stephen A. Vavasis
We propose a new, simple, and computationally inexpensive termination test for constant step-size stochastic gradient descent (SGD) applied to binary classification with the logistic and hinge losses and homogeneous linear predictors.
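The setting above can be sketched as follows: constant step-size SGD on the logistic loss with a homogeneous linear predictor (no bias term). The stopping rule shown here — terminate once the current iterate classifies every training point correctly — is only an illustrative stand-in, not the termination test proposed in the paper.

```python
import numpy as np

def sgd_logistic(X, y, step=0.1, max_iters=10_000, seed=0):
    """Constant step-size SGD on the logistic loss, homogeneous predictor w.

    The termination rule below (stop when all points are correctly
    classified) is a hypothetical placeholder, NOT the paper's test.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(max_iters):
        i = rng.integers(n)
        margin = y[i] * (X[i] @ w)
        # descent step on log(1 + exp(-margin)):
        # grad_w = -y_i x_i / (1 + exp(margin))
        w += step * y[i] * X[i] / (1.0 + np.exp(margin))
        if np.all(y * (X @ w) > 0):  # illustrative termination test
            return w, t + 1
    return w, max_iters

# small linearly separable toy problem
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, iters = sgd_logistic(X, y)
```

With separable data like this, the placeholder test fires as soon as the iterate separates the sample, which is the cheap-to-check flavor of stopping criterion the abstract is about.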
no code implementations • 30 Sep 2015 • Nicolas Gillis, Stephen A. Vavasis
The low-rank matrix approximation problem with respect to the component-wise $\ell_1$-norm ($\ell_1$-LRA), which is closely related to robust principal component analysis (PCA), has become a very popular tool in data mining and machine learning.
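A minimal sketch of the objective in question: the component-wise $\ell_1$ error of a rank-$k$ factorization, contrasted with the Frobenius-optimal truncated SVD on data corrupted by a single gross outlier (the robust-PCA setting). The helper names here are illustrative, not from the paper, and the SVD is used only as an $\ell_2$ baseline, not as an $\ell_1$-LRA solver.

```python
import numpy as np

def l1_lra_objective(M, U, V):
    """Component-wise l1 error ||M - U V||_1 of a rank-k factorization."""
    return float(np.abs(M - U @ V).sum())

def rank_k_svd(M, k):
    """Rank-k truncated SVD: the Frobenius-optimal approximation,
    shown here only as an l2 baseline for comparison."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]

rng = np.random.default_rng(1)
B = rng.standard_normal((20, 3))
C = rng.standard_normal((3, 15))
M = B @ C                      # exactly rank 3
M[0, 0] += 100.0               # one gross outlier

# the clean rank-3 factors pay exactly the outlier's magnitude in l1 ...
err_robust = l1_lra_objective(M, B, C)
# ... while the l2-optimal SVD smears the outlier across its factors
err_svd = l1_lra_objective(M, *rank_k_svd(M, 3))
```

The point of the comparison is that the $\ell_1$ objective charges a sparse outlier only once, which is what makes $\ell_1$-LRA attractive for robust PCA-style problems.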
no code implementations • 8 Oct 2013 • Nicolas Gillis, Stephen A. Vavasis
Nonnegative matrix factorization (NMF) under the separability assumption can provably be solved efficiently, even in the presence of noise, and has been shown to be a powerful technique in document classification and hyperspectral unmixing.
no code implementations • 6 Aug 2012 • Nicolas Gillis, Stephen A. Vavasis
In this paper, we study the nonnegative matrix factorization problem under the separability assumption, that is, the assumption that there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix that contains all of its columns. This problem is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption.
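The separability assumption can be illustrated with a successive-projection-style greedy column selection: repeatedly pick the column of largest residual norm, then project it out. This is a minimal sketch of the recursive-selection idea, not a robust or tuned implementation of the paper's algorithms.

```python
import numpy as np

def successive_projection(M, r):
    """Greedily select r columns of M: at each step take the column with
    the largest residual l2 norm, then project the residual onto the
    orthogonal complement of the chosen column. A bare-bones sketch."""
    R = M.astype(float).copy()
    cols = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))   # largest residual column
        cols.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                   # project out direction u
    return cols

# separable data: M = W H with H containing the identity, so the columns
# of W appear verbatim among the columns of M (the "pure pixels")
rng = np.random.default_rng(0)
W = rng.random((30, 4))
H = np.hstack([np.eye(4), rng.dirichlet(np.ones(4), size=16).T])
M = W @ H
picked = sorted(successive_projection(M, 4))  # recovers columns 0..3
```

In the noiseless separable case the extreme columns have strictly the largest norm at every step, so the greedy selection recovers exactly the "pure" columns; robustness to noise is the harder question the literature addresses.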