no code implementations • 12 Mar 2024 • Akshay Kumar, Jarvis Haupt
This paper studies the gradient flow dynamics that arise when training deep homogeneous neural networks assumed to have locally Lipschitz gradients and an order of homogeneity strictly greater than two.
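As a toy illustration of this setting (not the paper's construction), the sketch below simulates gradient flow by forward Euler for a depth-3 linear network, which is homogeneous of order 3 (strictly greater than two) in its weights and has locally Lipschitz gradients, started from a small initialization near the origin; all sizes, the step size, and the target are illustrative assumptions.

```python
# Minimal sketch: Euler discretization of gradient flow for f(x) = W3 W2 W1 x,
# homogeneous of order 3 in the weights, with weights initialized near zero.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)              # a simple regression target

scale = 1e-3                                # "small initialization"
W1 = scale * rng.standard_normal((d, d))
W2 = scale * rng.standard_normal((d, d))
W3 = scale * rng.standard_normal((1, d))

dt, steps = 1e-2, 20000                     # forward-Euler approximation of the flow
for t in range(steps):
    r = X @ W1.T @ W2.T @ W3.T - y[:, None]     # residuals, shape (n, 1)
    G3 = r.T @ X @ W1.T @ W2.T / n              # grad of 0.5*mean(r^2) w.r.t. W3
    G2 = W3.T @ r.T @ X @ W1.T / n
    G1 = W2.T @ W3.T @ r.T @ X / n
    W1 -= dt * G1; W2 -= dt * G2; W3 -= dt * G3
    if t % 5000 == 0:
        print(t, "||W1|| =", np.linalg.norm(W1), "loss =", 0.5 * np.mean(r**2))
```

With such a small initialization the gradients are themselves tiny near the origin, so the printed weight norms illustrate how slowly the flow escapes the origin, which is the regime these dynamics papers analyze.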
no code implementations • 14 Feb 2024 • Akshay Kumar, Jarvis Haupt
This paper examines gradient flow dynamics of two-homogeneous neural networks for small initializations, where all weights are initialized near the origin.
no code implementations • 23 Feb 2021 • Navid Reyhanian, Jarvis Haupt
This work investigates the problem of estimating the weight matrices of a stable time-invariant linear dynamical system from a single sequence of noisy measurements.
no code implementations • 15 Jul 2020 • Akshay Kumar, Jarvis Haupt
This work examines the problem of exact data interpolation via sparse (neuron count), infinitely wide, single hidden layer neural networks with leaky rectified linear unit activations.
1 code implementation • NeurIPS 2020 • Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
To this end, we develop a provable algorithm for online structured tensor factorization, wherein one of the factors obeys some incoherence conditions, and the others are sparse.
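A minimal sketch of the underlying factorization task, assuming a batch alternating least squares scheme rather than the paper's provable online algorithm: a 3-way tensor is fit by CP factors, with one factor pushed toward sparsity by soft thresholding after its update. All dimensions and the threshold are illustrative.

```python
import numpy as np

def unfold(T, mode):
    # mode-n matricization of a 3-way tensor
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # column-wise Kronecker product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

rng = np.random.default_rng(0)
I, J, K, R = 10, 12, 30, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = (rng.random((K, R)) < 0.2) * rng.standard_normal((K, R))  # the sparse factor
X = np.einsum('ir,jr,kr->ijk', A, B, C)

Ah = rng.standard_normal((I, R))
Bh = rng.standard_normal((J, R))
Ch = rng.standard_normal((K, R))
for _ in range(100):
    Ah = unfold(X, 0) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = unfold(X, 1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = unfold(X, 2) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)
    Ch = np.sign(Ch) * np.maximum(np.abs(Ch) - 0.01, 0)       # sparsify C
Xh = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
print("relative fit error:", np.linalg.norm(Xh - X) / np.linalg.norm(X))
```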
no code implementations • ICLR 2019 • Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao
We propose a generalization error bound for a general family of deep neural networks based on the depth and width of the networks, as well as the spectral norm of weight matrices.
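As a quick illustration of the kind of quantity such bounds are built from, the sketch below computes per-layer spectral norms and their product for a small random network; the paper's bound also involves depth, width, and margin terms that this sketch does not reproduce.

```python
# Minimal sketch: the product of layer spectral norms, a standard
# capacity-type term in spectral-norm generalization bounds.
import numpy as np

rng = np.random.default_rng(0)
widths = [784, 256, 256, 10]
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(widths[:-1], widths[1:])]

spectral_norms = [np.linalg.norm(W, 2) for W in weights]  # largest singular values
print("per-layer spectral norms:", np.round(spectral_norms, 3))
print("product (capacity-type term):", np.prod(spectral_norms))
```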
no code implementations • ICLR 2019 • Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
To this end, we develop a simple online alternating optimization-based algorithm for dictionary learning, which recovers both the dictionary and coefficients exactly at a geometric rate.
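A minimal sketch in this spirit, assuming my own simplified updates rather than the paper's exact algorithm and guarantees: for each streamed sample, the coefficients are estimated by hard thresholding the correlations, and the dictionary takes a gradient step on the per-sample least squares loss. The warm start near the true dictionary is an assumption the sketch needs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, s = 50, 80, 3                        # signal dim, dictionary size, sparsity
A_true = rng.standard_normal((n, K))
A_true /= np.linalg.norm(A_true, axis=0)   # unit-norm columns

A = A_true + 0.02 * rng.standard_normal((n, K))   # warm start near the truth
A /= np.linalg.norm(A, axis=0)
eta = 0.2
for t in range(2000):                      # stream of samples y = A_true @ x
    x_true = np.zeros(K)
    supp = rng.choice(K, s, replace=False)
    x_true[supp] = rng.standard_normal(s)
    y = A_true @ x_true
    # coefficient step: correlate and keep the s largest entries
    c = A.T @ y
    x = np.zeros(K)
    top = np.argsort(np.abs(c))[-s:]
    x[top] = c[top]
    # dictionary step: gradient of 0.5 * ||y - A x||^2 w.r.t. A
    A += eta * np.outer(y - A @ x, x)
    A /= np.linalg.norm(A, axis=0)
print("max column error:", np.max(np.linalg.norm(A - A_true, axis=0)))
```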
no code implementations • 16 Mar 2019 • Jineng Ren, Jarvis Haupt
This paper proposes and analyzes a communication-efficient distributed optimization framework for general nonconvex nonsmooth signal processing and machine learning problems under an asynchronous protocol.
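A toy simulation of the flavor of such schemes, not the paper's framework: workers holding data shards report gradients that may be several iterates stale, and the server applies a proximal (soft-thresholding) step for an l1-regularized least squares problem. The round-robin schedule and delay model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, workers, lam, delay = 200, 50, 4, 0.05, 3
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p); beta[:5] = 1.0
y = X @ beta + 0.01 * rng.standard_normal(n)
shards = np.array_split(np.arange(n), workers)   # each worker's data shard

theta = np.zeros(p)
history = [theta.copy()]                         # past iterates, to model staleness
step = 0.5 / np.linalg.norm(X, 2) ** 2
for t in range(2000):
    w = t % workers                              # round-robin stand-in for async arrivals
    d = min(int(rng.integers(delay + 1)), len(history) - 1)
    stale = history[-1 - d]                      # this worker's gradient arrives late
    idx = shards[w]
    g = X[idx].T @ (X[idx] @ stale - y[idx]) / len(idx)
    z = theta - step * g
    theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold
    history.append(theta.copy())
print("recovered support:", np.nonzero(np.abs(theta) > 0.05)[0])
```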
no code implementations • 28 Feb 2019 • Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary; the sparse weights forming the linear combination are known as coefficients.
no code implementations • 26 Feb 2019 • Sirisha Rambhatla, Xingguo Li, Jineng Ren, Jarvis Haupt
We consider the task of localizing targets of interest in a hyperspectral (HS) image based on their spectral signatures, by posing the problem as two distinct convex demixing tasks.
no code implementations • 26 Feb 2019 • Sirisha Rambhatla, Nikos D. Sidiropoulos, Jarvis Haupt
We propose a technique to develop (and localize in) topological maps from light detection and ranging (Lidar) data.
no code implementations • 26 Feb 2019 • Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
In this work, we present a technique to localize targets of interest based on their spectral signatures.
no code implementations • 21 Feb 2019 • Sirisha Rambhatla, Xingguo Li, Jarvis Haupt
We analyze the decomposition of a data matrix, assumed to be a superposition of a low-rank component and a component which is sparse in a known dictionary, using a convex demixing method.
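A minimal sketch of one standard convex approach to this demixing model, assuming a basic proximal-gradient scheme (singular-value thresholding for the low-rank part, soft thresholding for the dictionary-sparse part) rather than the paper's exact method and conditions; the regularization weights below are illustrative.

```python
import numpy as np

def svt(X, tau):
    # proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

rng = np.random.default_rng(0)
n, m, K, r = 60, 60, 100, 2
D = rng.standard_normal((n, K)) / np.sqrt(n)                # known dictionary
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
S_true = (rng.random((K, m)) < 0.02) * rng.standard_normal((K, m))
M = L_true + D @ S_true

L, S = np.zeros_like(M), np.zeros((K, m))
step, lam_L, lam_S = 0.1, 1.0, 0.05
for _ in range(500):
    R = M - L - D @ S                           # current residual
    L = svt(L + step * R, step * lam_L)         # low-rank proximal step
    S = soft(S + step * D.T @ R, step * lam_S)  # sparse proximal step
print("rel. error in L:", np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```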
no code implementations • 21 Feb 2019 • Sirisha Rambhatla, Xingguo Li, Jineng Ren, Jarvis Haupt
We consider the decomposition of a data matrix assumed to be a superposition of a low-rank matrix and a component which is sparse in a known dictionary, using a convex demixing method.
no code implementations • 13 Jun 2018 • Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao
We establish a margin based data dependent generalization error bound for a general family of deep neural networks in terms of the depth and width, as well as the Jacobian of the networks.
no code implementations • 13 Jun 2018 • Zhehui Chen, Xingguo Li, Lin F. Yang, Jarvis Haupt, Tuo Zhao
However, due to the lack of convexity, their landscape is not well understood, and it remains unknown how to find the stable equilibria of the Lagrangian function.
no code implementations • NeurIPS 2017 • Xingguo Li, Lin Yang, Jason Ge, Jarvis Haupt, Tong Zhang, Tuo Zhao
We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions.
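To make the DC (difference-of-convex) outer loop concrete, here is a minimal sketch for an MCP-regularized least squares problem; note the paper uses a proximal *Newton* inner solver, whereas for brevity this sketch solves each inner convex problem with plain ISTA, and all problem sizes and penalty parameters are illustrative.

```python
import numpy as np

def ista(X, y, lam, shift, theta, step, iters=200):
    # solves min 0.5*||y - X t||^2 - shift^T t + lam*||t||_1
    for _ in range(iters):
        g = X.T @ (X @ theta - y) - shift
        z = theta - step * g
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)
    return theta

rng = np.random.default_rng(0)
n, p, s, lam, gamma = 100, 300, 5, 0.3, 3.0
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p); beta[:s] = 2.0
y = X @ beta + 0.05 * rng.standard_normal(n)

step = 1.0 / np.linalg.norm(X, 2) ** 2
theta = np.zeros(p)
for _ in range(10):                            # DC outer iterations
    # MCP = lam*|t| - h(t); gradient of the concave part:
    # h'(t) = sign(t) * min(lam, |t| / gamma)
    shift = np.sign(theta) * np.minimum(lam, np.abs(theta) / gamma)
    theta = ista(X, y, lam, shift, theta, step)
print("support recovered:", np.nonzero(np.abs(theta) > 0.1)[0])
```

Each outer iteration linearizes the concave part of the penalty, so the inner problem is a shifted lasso; this is what makes nonconvex penalties like MCP tractable with convex machinery.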
no code implementations • NeurIPS 2017 • Jarvis Haupt, Xingguo Li, David P. Woodruff
We study the least squares regression problem $\min_{\Theta \in \mathcal{S}_{\odot D, R}} \|A\Theta-b\|_2$, where $\mathcal{S}_{\odot D, R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors.
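A minimal worked instance of this problem for the smallest case $D = 2$, $R = 1$ (so $\Theta = u \circ v$), solved by alternating least squares; this sketch only illustrates the constraint set and does not implement the paper's actual approach to the general problem.

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2, n = 8, 6, 200
A = rng.standard_normal((n, p1 * p2))
u_true, v_true = rng.standard_normal(p1), rng.standard_normal(p2)
b = A @ np.outer(u_true, v_true).ravel()       # Theta = u ∘ v, vectorized

u, v = rng.standard_normal(p1), rng.standard_normal(p2)
for _ in range(30):
    # fix v, solve for u: A @ vec(u v^T) = (A contracted with v) @ u
    Av = A.reshape(n, p1, p2) @ v                           # shape (n, p1)
    u = np.linalg.lstsq(Av, b, rcond=None)[0]
    Au = A.reshape(n, p1, p2).transpose(0, 2, 1) @ u        # shape (n, p2)
    v = np.linalg.lstsq(Au, b, rcond=None)[0]
print("residual:", np.linalg.norm(A @ np.outer(u, v).ravel() - b))
```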
no code implementations • 2 Sep 2017 • Jineng Ren, Jarvis Haupt
We propose a communication- and computation-efficient algorithm for high-dimensional distributed sparse learning.
no code implementations • 29 Aug 2017 • Mojtaba Kadkhodaie Elyaderani, Swayambhoo Jain, Jeffrey Druce, Stefano Gonella, Jarvis Haupt
This paper considers the problem of estimating an unknown high dimensional signal from noisy linear measurements, when the signal is assumed to possess a group-sparse structure in a known, fixed dictionary.
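A minimal sketch of proximal-gradient recovery under a group-sparse prior: block soft thresholding switches entire coefficient groups off together. The random dictionary, group layout, and regularization weight below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, gsize, lam = 80, 200, 10, 0.1
groups = [np.arange(i, i + gsize) for i in range(0, p, gsize)]
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
x = np.zeros(p)
x[:gsize] = 1.0; x[3 * gsize:4 * gsize] = -0.5    # two active groups
y = Phi @ x + 0.01 * rng.standard_normal(n)

step = 1.0 / np.linalg.norm(Phi, 2) ** 2
z = np.zeros(p)
for _ in range(500):
    z = z - step * Phi.T @ (Phi @ z - y)          # gradient step on the data fit
    for g in groups:                              # block soft threshold per group
        nrm = np.linalg.norm(z[g])
        z[g] *= max(0.0, 1 - step * lam / nrm) if nrm > 0 else 0.0
print("active groups:",
      [i for i, g in enumerate(groups) if np.linalg.norm(z[g]) > 1e-6])
```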
no code implementations • 19 Jun 2017 • Xingguo Li, Lin F. Yang, Jason Ge, Jarvis Haupt, Tong Zhang, Tuo Zhao
We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions.
no code implementations • 8 Apr 2017 • Swayambhoo Jain, Alexander Gutierrez, Jarvis Haupt
In this paper we study the problem of noisy tensor completion for tensors that admit a canonical polyadic or CANDECOMP/PARAFAC (CP) decomposition with one of the factors being sparse.
no code implementations • 29 Dec 2016 • Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran Wang, Tuo Zhao
We propose a general theory for studying the landscape of nonconvex optimization with underlying symmetric structures for a class of machine learning problems (e.g., low-rank matrix factorization, phase retrieval, and deep linear neural networks).
no code implementations • 7 Dec 2016 • Xingguo Li, Jarvis Haupt
This paper examines the problem of locating outlier columns in a large, otherwise low-rank matrix, in settings where the data are noisy, or where the overall matrix has missing elements.
no code implementations • 25 May 2016 • Xingguo Li, Haoming Jiang, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, Tuo Zhao
Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility.
no code implementations • 9 May 2016 • Xingguo Li, Raman Arora, Han Liu, Jarvis Haupt, Tuo Zhao
We propose a stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints.
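A minimal sketch of this combination, in my own simplified rendition: an SVRG-style variance-reduced gradient with a hard-thresholding step that enforces the cardinality constraint $\|\theta\|_0 \le s$ on a least squares loss. Step size and problem sizes are illustrative; the paper's guarantees are not reproduced here.

```python
import numpy as np

def hard_threshold(v, s):
    # keep the s largest-magnitude entries, zero the rest
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
n, p, s = 200, 500, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = hard_threshold(rng.standard_normal(p), s) * 3
y = X @ beta + 0.01 * rng.standard_normal(n)

theta, eta = np.zeros(p), 0.1
for epoch in range(20):
    ref = theta.copy()
    full_grad = X.T @ (X @ ref - y) / n          # anchor gradient for this epoch
    for _ in range(n):
        i = rng.integers(n)
        gi = X[i] * (X[i] @ theta - y[i])        # stochastic gradient at theta
        gi_ref = X[i] * (X[i] @ ref - y[i])      # same sample at the anchor point
        theta = hard_threshold(theta - eta * (gi - gi_ref + full_grad), s)
print("spurious support:",
      np.setdiff1d(np.nonzero(theta)[0], np.nonzero(beta)[0]))
```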
no code implementations • 24 Feb 2016 • Swayambhoo Jain, Urvashi Oswal, Kevin S. Xu, Brian Eriksson, Jarvis Haupt
The measurement and analysis of Electrodermal Activity (EDA) offers applications in diverse areas ranging from market research, to seizure detection, to human stress analysis.
no code implementations • 25 Feb 2015 • Swayambhoo Jain, Jarvis Haupt
In this paper, we examine the problem of approximating a general linear dimensionality reduction (LDR) operator, represented as a matrix $A \in \mathbb{R}^{m \times n}$ with $m < n$, by a partial circulant matrix with rows related by circular shifts.
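A minimal sketch of one natural baseline for this problem, under the assumption that the row shifts are fixed in advance: the Frobenius-optimal generating vector of the partial circulant is simply the average of the counter-shifted rows of $A$, and the resulting operator can be applied with FFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 64
A = rng.standard_normal((m, n))
shifts = np.arange(m)                      # row i is c circularly shifted by i

# best c in Frobenius norm for these shifts: average of counter-shifted rows
c = np.mean([np.roll(A[i], -shifts[i]) for i in range(m)], axis=0)
C = np.stack([np.roll(c, shifts[i]) for i in range(m)])
print("relative Frobenius error:", np.linalg.norm(A - C) / np.linalg.norm(A))

# the payoff: C @ x is a circular cross-correlation, computable in O(n log n)
x = rng.standard_normal(n)
y_fast = np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(x)).real[:m]
print("matches direct matvec:", np.allclose(y_fast, C @ x))
```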
no code implementations • 2 Nov 2014 • Akshay Soni, Swayambhoo Jain, Jarvis Haupt, Stefano Gonella
This paper examines a general class of noisy matrix completion tasks where the goal is to estimate a matrix from observations obtained at a subset of its entries, each of which is subject to random noise or corruption.
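A minimal sketch of the canonical convex approach to such tasks, nuclear-norm matrix completion solved by proximal gradient with singular value thresholding; the paper treats a broader class of noise and corruption models than the Gaussian observation noise assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, frac = 60, 50, 3, 0.4
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < frac                  # observed entries
Y = np.where(mask, M + 0.05 * rng.standard_normal((n1, n2)), 0.0)

X, lam, step = np.zeros((n1, n2)), 1.0, 1.0
for _ in range(300):
    G = np.where(mask, X - Y, 0.0)                  # gradient of the data fit
    U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
    X = U @ np.diag(np.maximum(s - step * lam, 0)) @ Vt   # nuclear-norm prox
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```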
no code implementations • 1 Jul 2014 • Xingguo Li, Jarvis Haupt
This paper examines the problem of locating outlier columns in a large, otherwise low-rank, matrix.
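A minimal sketch of one simple detector in this spirit, not the paper's algorithm: project each column onto the orthogonal complement of the top-$r$ left singular subspace and flag the columns with large residual energy. The dimensions and outlier magnitudes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, k = 50, 200, 3, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # low-rank part
outliers = rng.choice(m, k, replace=False)
M[:, outliers] += 5 * rng.standard_normal((n, k))              # corrupted columns

U, _, _ = np.linalg.svd(M, full_matrices=False)
resid = M - U[:, :r] @ (U[:, :r].T @ M)     # energy outside the top-r subspace
scores = np.linalg.norm(resid, axis=0)
print("flagged:", np.sort(np.argsort(scores)[-k:]), "true:", np.sort(outliers))
```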
no code implementations • 21 Nov 2013 • Swayambhoo Jain, Akshay Soni, Jarvis Haupt
This work considers an estimation task in compressive sensing, where the goal is to estimate an unknown signal from compressive measurements that are corrupted by additive pre-measurement noise (interference, or clutter) as well as post-measurement noise, in the specific setting where some (perhaps limited) prior knowledge on the signal, interference, and noise is available.
no code implementations • 18 Jun 2013 • Akshay Soni, Jarvis Haupt
Recent breakthrough results in compressive sensing (CS) have established that many high dimensional signals can be accurately recovered from a relatively small number of non-adaptive linear observations, provided that the signals possess a sparse representation in some basis.
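To make the standard setting referenced here concrete, below is a minimal sketch of recovering a sparse signal from non-adaptive linear observations via iterative hard thresholding; the measurement sizes and step size are illustrative, and the sketch does not reflect this paper's particular contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 80, 256, 8
Phi = rng.standard_normal((n, p)) / np.sqrt(n)     # non-adaptive measurements
x = np.zeros(p)
x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x

step = 1.0 / np.linalg.norm(Phi, 2) ** 2
z = np.zeros(p)
for _ in range(500):
    g = z + step * Phi.T @ (y - Phi @ z)           # gradient step
    z = np.zeros(p)
    keep = np.argsort(np.abs(g))[-s:]              # keep the s largest entries
    z[keep] = g[keep]
print("recovery error:", np.linalg.norm(z - x))
```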