Search Results for author: Xingguo Li

Found 28 papers, 3 papers with code

Provable Online CP/PARAFAC Decomposition of a Structured Tensor via Dictionary Learning

1 code implementation NeurIPS 2020 Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

To this end, we develop a provable algorithm for online structured tensor factorization, wherein one of the factors obeys some incoherence conditions, and the others are sparse.

Dictionary Learning
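A rough sketch of the setup, in notation assumed here rather than taken from the paper: the data tensor admits a CP model in which one factor matrix is incoherent and the remaining factors are sparse,
\begin{align*}
\mathcal{X} \;\approx\; \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,
\end{align*}
where $\mathbf{A} = [\mathbf{a}_1, \dots, \mathbf{a}_R]$ obeys the incoherence conditions and the remaining factors are sparse; the algorithm then recovers the factors online as new data arrive.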

The flare Package for High Dimensional Linear Regression and Precision Matrix Estimation in R

no code implementations 27 Jun 2020 Xingguo Li, Tuo Zhao, Xiaoming Yuan, Han Liu

This paper describes an R package named flare, which implements a family of new high dimensional regression methods (LAD Lasso, SQRT Lasso, $\ell_q$ Lasso, and Dantzig selector) and their extensions to sparse precision matrix estimation (TIGER and CLIME).

regression
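For reference, standard formulations of the estimators named above, for a response $y \in \mathbb{R}^n$, design $X \in \mathbb{R}^{n \times d}$, and sample covariance $\widehat{\Sigma}$ (the scalings here are the textbook ones and may differ from the package's internals):
\begin{align*}
&\text{LAD Lasso:} && \min_{\beta}\ \tfrac{1}{n}\|y - X\beta\|_1 + \lambda\|\beta\|_1, \\
&\text{SQRT Lasso:} && \min_{\beta}\ \tfrac{1}{\sqrt{n}}\|y - X\beta\|_2 + \lambda\|\beta\|_1, \\
&\text{Dantzig selector:} && \min_{\beta}\ \|\beta\|_1 \quad \text{s.t.} \quad \|X^\top (y - X\beta)\|_\infty \le \lambda, \\
&\text{CLIME:} && \min_{\Omega}\ \|\Omega\|_1 \quad \text{s.t.} \quad \|\widehat{\Sigma}\Omega - I\|_\infty \le \lambda.
\end{align*}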

Picasso: A Sparse Learning Library for High Dimensional Data Analysis in R and Python

1 code implementation 27 Jun 2020 Jason Ge, Xingguo Li, Haoming Jiang, Han Liu, Tong Zhang, Mengdi Wang, Tuo Zhao

We describe a new library named picasso, which implements a unified framework of pathwise coordinate optimization for a variety of sparse learning problems (e.g., sparse linear regression, sparse logistic regression, sparse Poisson regression and scaled sparse linear regression) combined with efficient active set selection strategies.

regression, Sparse Learning
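To make the idea concrete, here is a minimal sketch of pathwise coordinate descent for the plain lasso, with warm starts along a decreasing regularization path. This is illustrative only, not the picasso API, and it omits the active set selection strategies the library uses:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the closed-form coordinate update for the lasso."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, n_lambdas=20, n_iters=100):
    """Pathwise coordinate descent for
        min_beta 1/(2n) ||y - X beta||_2^2 + lam * ||beta||_1,
    solved along a geometrically decreasing lambda path with warm starts."""
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0) / n        # per-coordinate curvature
    lam_max = np.abs(X.T @ y).max() / n      # smallest lam with all-zero solution
    lams = np.geomspace(lam_max, 0.01 * lam_max, n_lambdas)
    beta = np.zeros(d)
    path = []
    for lam in lams:                         # warm-start from the previous solution
        for _ in range(n_iters):
            for j in range(d):               # cyclic coordinate updates
                r = y - X @ beta + X[:, j] * beta[j]          # partial residual
                beta[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
        path.append((lam, beta.copy()))
    return path
```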

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality

no code implementations NeurIPS 2020 Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora

Our work proves convergence to low robust training loss for \emph{polynomial} width instead of exponential, under natural assumptions and with the ReLU activation.
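The robust training loss referred to above is, in its standard min-max form (a generic statement of adversarial training, not the paper's exact setup):
\begin{align*}
\min_{W}\ \frac{1}{n} \sum_{i=1}^{n}\ \max_{\|\delta_i\| \le \epsilon}\ \ell\big(f_W(x_i + \delta_i),\, y_i\big),
\end{align*}
where the inner maximization produces adversarial perturbations and the outer minimization trains the network weights $W$.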

On Recoverability of Randomly Compressed Tensors with Low CP Rank

no code implementations 8 Jan 2020 Shahana Ibrahim, Xiao Fu, Xingguo Li

Our interest lies in the recoverability properties of compressed tensors under the \textit{canonical polyadic decomposition} (CPD) model.

Compressive Sensing, Video Compression
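As a sketch of the setting, in notation assumed here: an $N$-way tensor of low CP rank is observed only through random linear measurements,
\begin{align*}
\mathcal{T} = \sum_{r=1}^{R} \mathbf{u}_r^{(1)} \circ \cdots \circ \mathbf{u}_r^{(N)}, \qquad \mathbf{y} = \boldsymbol{\Phi}\, \mathrm{vec}(\mathcal{T}),
\end{align*}
and recoverability asks when $\mathcal{T}$ (or its factors) is identifiable from $\mathbf{y}$ with far fewer measurements than tensor entries; the paper's specific compression scheme may differ from this generic vectorized form.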

On Generalization Bounds of a Family of Recurrent Neural Networks

no code implementations ICLR 2019 Minshuo Chen, Xingguo Li, Tuo Zhao

We remark: (1) Our generalization bound for vanilla RNNs is significantly tighter than the best of existing results; (2) We are not aware of any other generalization bounds for MGU, LSTM, and Conv RNNs in the existing literature; (3) We demonstrate the advantages of these variants in generalization.

Generalization Bounds, PAC learning

ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

1 code implementation NeurIPS 2019 Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox

In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm that generalizes AdaMM to the gradient-free regime.
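A minimal sketch of the idea, assuming a two-point random-direction gradient estimate plugged into an AMSGrad-style update (constants, projections, and other details of ZO-AdaMM are omitted):

```python
import numpy as np

def zo_adamm(f, x0, steps=500, lr=0.01, mu=1e-3,
             beta1=0.9, beta2=0.999, eps=1e-8):
    """Zeroth-order adaptive momentum sketch: estimate the gradient from
    function values only, then apply momentum and adaptive scaling."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)                          # first moment (momentum)
    v = np.zeros_like(x)                          # second moment
    v_max = np.full_like(x, eps)                  # non-decreasing v (AMSGrad-style)
    for _ in range(steps):
        u = np.random.randn(*x.shape)             # random probe direction
        g = (f(x + mu * u) - f(x)) / mu * u       # gradient-free estimate
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_max = np.maximum(v_max, v)
        x -= lr * m / np.sqrt(v_max)
    return x

# Example: minimize a quadratic using only function evaluations.
x_star = zo_adamm(lambda z: np.sum((z - 3.0) ** 2), np.zeros(5))
```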

Provable Online Dictionary Learning and Sparse Coding

no code implementations ICLR 2019 Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

To this end, we develop a simple online alternating optimization-based algorithm for dictionary learning, which recovers both the dictionary and coefficients exactly at a geometric rate.

Dictionary Learning
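A minimal sketch of one such alternating step, with thresholds and step sizes assumed here rather than taken from the paper: estimate the coefficients by keeping the largest correlations with the current dictionary, then take a gradient step on the dictionary and renormalize.

```python
import numpy as np

def online_dl_step(A, y, s, eta):
    """One alternating update for online dictionary learning (sketch).
    A: current dictionary (n x K), y: data sample (n,), s: sparsity level."""
    z = A.T @ y                                    # correlate data with atoms
    idx = np.argsort(np.abs(z))[-s:]               # keep the s largest entries
    x = np.zeros_like(z)
    x[idx] = z[idx]                                # hard-thresholded coefficients
    A = A - eta * np.outer(A @ x - y, x)           # gradient step on 0.5*||y - Ax||^2
    A /= np.linalg.norm(A, axis=0, keepdims=True)  # renormalize dictionary atoms
    return A, x
```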

On Tighter Generalization Bounds for Deep Neural Networks: CNNs, ResNets, and Beyond

no code implementations ICLR 2019 Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao

We propose a generalization error bound for a general family of deep neural networks based on the depth and width of the networks, as well as the spectral norm of weight matrices.

Generalization Bounds

NOODL: Provable Online Dictionary Learning and Sparse Coding

no code implementations 28 Feb 2019 Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary; the sparse weights forming the linear combination are known as coefficients.

Dictionary Learning
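In symbols, the problem described above is commonly written as follows (a standard formulation, not quoted from the paper): given samples $\mathbf{y}_i \in \mathbb{R}^n$, find
\begin{align*}
\min_{\mathbf{A},\, \{\mathbf{x}_i\}}\ \sum_{i=1}^{m} \|\mathbf{y}_i - \mathbf{A}\mathbf{x}_i\|_2^2 \quad \text{s.t.} \quad \|\mathbf{x}_i\|_0 \le s \ \text{ for all } i,
\end{align*}
where $\mathbf{A}$ is the dictionary and the $\mathbf{x}_i$ are the sparse coefficients.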

A Dictionary-Based Generalization of Robust PCA Part II: Applications to Hyperspectral Demixing

no code implementations 26 Feb 2019 Sirisha Rambhatla, Xingguo Li, Jineng Ren, Jarvis Haupt

We consider the task of localizing targets of interest in a hyperspectral (HS) image based on their spectral signatures, by posing the problem as two distinct convex demixing tasks.

Target-based Hyperspectral Demixing via Generalized Robust PCA

no code implementations 26 Feb 2019 Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

In this work, we present a technique to localize targets of interest based on their spectral signatures.

A Dictionary Based Generalization of Robust PCA

no code implementations 21 Feb 2019 Sirisha Rambhatla, Xingguo Li, Jarvis Haupt

We analyze the decomposition of a data matrix, assumed to be a superposition of a low-rank component and a component which is sparse in a known dictionary, using a convex demixing method.
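A natural convex program for this model, written here as an illustration rather than the paper's exact formulation: given $\mathbf{M} = \mathbf{L} + \mathbf{R}\mathbf{S}$ with $\mathbf{L}$ low-rank, $\mathbf{R}$ a known dictionary, and $\mathbf{S}$ sparse, solve
\begin{align*}
\min_{\mathbf{L},\, \mathbf{S}}\ \|\mathbf{L}\|_* + \lambda \|\mathbf{S}\|_1 \quad \text{s.t.} \quad \mathbf{M} = \mathbf{L} + \mathbf{R}\mathbf{S},
\end{align*}
where the nuclear norm promotes low rank and the $\ell_1$ norm promotes sparsity of the dictionary coefficients.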

A Dictionary-Based Generalization of Robust PCA with Applications to Target Localization in Hyperspectral Imaging

no code implementations 21 Feb 2019 Sirisha Rambhatla, Xingguo Li, Jineng Ren, Jarvis Haupt

We consider the decomposition of a data matrix assumed to be a superposition of a low-rank matrix and a component which is sparse in a known dictionary, using a convex demixing method.

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond

no code implementations 13 Jun 2018 Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao

We establish a margin based data dependent generalization error bound for a general family of deep neural networks in terms of the depth and width, as well as the Jacobian of the networks.

Generalization Bounds

On Landscape of Lagrangian Functions and Stochastic Search for Constrained Nonconvex Optimization

no code implementations 13 Jun 2018 Zhehui Chen, Xingguo Li, Lin F. Yang, Jarvis Haupt, Tuo Zhao

However, due to the lack of convexity, their landscape is not well understood, and how to find the stable equilibria of the Lagrangian function remains unknown.

Deep Hyperspherical Learning

no code implementations NeurIPS 2017 Weiyang Liu, Yan-Ming Zhang, Xingguo Li, Zhiding Yu, Bo Dai, Tuo Zhao, Le Song

In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres.

Representation Learning
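A minimal sketch of an angular response of this kind, assuming the linear profile $g(\theta) = 1 - 2\theta/\pi$ (the paper also proposes other variants; treat the exact profile here as an assumption):

```python
import numpy as np

def sphere_response(w, x):
    """Angle-based response between a kernel w and an input patch x:
    depends only on the angle between them, not on their magnitudes."""
    cos_t = w @ x / (np.linalg.norm(w) * np.linalg.norm(x) + 1e-12)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # angle in [0, pi]
    return 1.0 - 2.0 * theta / np.pi              # response in [-1, 1]
```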

Towards Black-box Iterative Machine Teaching

no code implementations ICML 2018 Weiyang Liu, Bo Dai, Xingguo Li, Zhen Liu, James M. Rehg, Le Song

We propose an active teacher model that can actively query the learner (i.e., make the learner take exams) for estimating the learner's status and provably guide the learner to achieve faster convergence.

Near Optimal Sketching of Low-Rank Tensor Regression

no code implementations NeurIPS 2017 Jarvis Haupt, Xingguo Li, David P. Woodruff

We study the least squares regression problem \begin{align*} \min_{\Theta \in \mathcal{S}_{\odot D, R}} \|A\Theta-b\|_2, \end{align*} where $\mathcal{S}_{\odot D, R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors.

Dimensionality Reduction, regression
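The general sketch-and-solve principle behind results of this kind can be illustrated on ordinary least squares (this is the generic idea only, not the paper's tensor-specific construction): compress the rows with a random matrix and solve the much smaller problem.

```python
import numpy as np

n, p, k = 10000, 50, 400                       # k: sketch size (assumed)
A = np.random.randn(n, p)
b = A @ np.random.randn(p) + 0.1 * np.random.randn(n)

S = np.random.randn(k, n) / np.sqrt(k)         # Gaussian sketching matrix
theta_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
theta_full = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(theta_sketch - theta_full))  # small when k >> p
```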

On Quadratic Convergence of DC Proximal Newton Algorithm for Nonconvex Sparse Learning in High Dimensions

no code implementations 19 Jun 2017 Xingguo Li, Lin F. Yang, Jason Ge, Jarvis Haupt, Tong Zhang, Tuo Zhao

We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions.

Sparse Learning
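The difference-of-convex (DC) structure can be sketched as follows, with the particular splitting assumed here: a nonconvex regularizer $\mathcal{R}_\lambda$ such as SCAD or MCP is decomposed as
\begin{align*}
\mathcal{R}_\lambda(\theta) \;=\; \lambda\|\theta\|_1 \;-\; \big(\lambda\|\theta\|_1 - \mathcal{R}_\lambda(\theta)\big),
\end{align*}
where both terms are convex; each outer iteration linearizes the concave part and solves the resulting $\ell_1$-regularized convex subproblem with a proximal Newton step.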

Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization

no code implementations 29 Dec 2016 Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran Wang, Tuo Zhao

We propose a general theory for studying the landscape of nonconvex optimization with underlying symmetric structures for a class of machine learning problems (e.g., low-rank matrix factorization, phase retrieval, and deep linear neural networks).

Retrieval

Robust Low-Complexity Randomized Methods for Locating Outliers in Large Matrices

no code implementations 7 Dec 2016 Xingguo Li, Jarvis Haupt

This paper examines the problem of locating outlier columns in a large, otherwise low-rank matrix, in settings where the data are noisy, or where the overall matrix has missing elements.

Computational Efficiency, Missing Elements

On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization

no code implementations 10 Jul 2016 Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong

In particular, we first show that for a family of quadratic minimization problems, the iteration complexity $\mathcal{O}(\log^2(p)\cdot\log(1/\epsilon))$ of the CBCD-type methods matches that of the GD methods in terms of dependency on $p$, up to a $\log^2 p$ factor.

regression

On Fast Convergence of Proximal Algorithms for SQRT-Lasso Optimization: Don't Worry About Its Nonsmooth Loss Function

no code implementations 25 May 2016 Xingguo Li, Haoming Jiang, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, Tuo Zhao

Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility.

regression

Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction

no code implementations 9 May 2016 Xingguo Li, Raman Arora, Han Liu, Jarvis Haupt, Tuo Zhao

We propose a stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints.

Sparse Learning, Stochastic Optimization
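A minimal sketch of this kind of algorithm, combining variance-reduced stochastic gradients with a hard-thresholding step; the parameter names and schedules below are assumptions, not the paper's:

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto the cardinality constraint: keep the s largest entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def svrg_ht(grad_i, n, x0, s, eta=0.1, epochs=30, inner=100):
    """SVRG-style updates with hard thresholding (sketch).
    grad_i(x, i): gradient of the i-th component function at x."""
    x_ref = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        full_g = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(inner):
            i = np.random.randint(n)
            g = grad_i(x, i) - grad_i(x_ref, i) + full_g  # variance-reduced gradient
            x = hard_threshold(x - eta * g, s)            # enforce sparsity
        x_ref = x
    return x_ref
```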

Identifying Outliers in Large Matrices via Randomized Adaptive Compressive Sampling

no code implementations 1 Jul 2014 Xingguo Li, Jarvis Haupt

This paper examines the problem of locating outlier columns in a large, otherwise low-rank, matrix.

Collaborative Filtering
