no code implementations • ICML 2020 • Amanda Bower, Laura Balzano
Finally, we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the United States.
no code implementations • 16 Dec 2023 • Yuchen Li, Laura Balzano, Deanna Needell, Hanbaek Lyu
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex optimization that sequentially minimizes a majorizing surrogate of the objective function in each block coordinate while the other block coordinates are held fixed.
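The BMM scheme described above can be sketched with the classic multiplicative updates for nonnegative matrix factorization, where each block update minimizes a majorizing surrogate of the Frobenius loss in one factor while the other is held fixed (an illustrative instance under standard assumptions, not the paper's implementation):

```python
import numpy as np

def bmm_nmf(X, r, iters=200, eps=1e-9, seed=0):
    """Illustrative block MM: Lee-Seung multiplicative updates for NMF.
    Each update minimizes a separable majorizer of ||X - WH||_F^2 in one
    block (W or H) with the other block held fixed."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # majorize-minimize in H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # majorize-minimize in W
    return W, H
```

The MM structure guarantees the objective is monotonically nonincreasing across iterations, which is the property the paper's convergence analysis builds on.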
1 code implementation • 8 Nov 2023 • Soo Min Kwon, Zekai Zhang, Dogyoon Song, Laura Balzano, Qing Qu
We empirically evaluate the effectiveness of our compression technique on matrix recovery problems.
1 code implementation • 6 Nov 2023 • Peng Wang, Xiao Li, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu
To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.
no code implementations • 10 Oct 2023 • Kyle Gilman, David Hong, Jeffrey A. Fessler, Laura Balzano
Streaming principal component analysis (PCA) is an integral tool in large-scale machine learning for rapidly estimating low-dimensional subspaces of very high dimensional and high arrival-rate data with missing entries and corrupting noise.
1 code implementation • 6 Jul 2023 • Javier Salazar Cavazos, Jeffrey A. Fessler, Laura Balzano
Other methods such as Weighted PCA (WPCA) assume the noise variances are known, which may be difficult to know in practice.
1 code implementation • 1 Jun 2023 • Can Yaras, Peng Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu
Second, it allows us to better understand deep representation learning by elucidating the linear progressive separation and concentration of representations from shallow to deep layers.
no code implementations • 26 Mar 2023 • Cameron J. Blocker, Haroon Raja, Jeffrey A. Fessler, Laura Balzano
We propose a novel algorithm for minimizing this objective and estimating the parameters of the model from data with Grassmannian-constrained optimization.
no code implementations • 21 Jan 2023 • Alec S. Xu, Laura Balzano, Jeffrey A. Fessler
Mixtures of probabilistic principal component analysis (MPPCA) is a well-known mixture model extension of principal component analysis (PCA).
1 code implementation • 19 Sep 2022 • Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu
When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon.
1 code implementation • 6 Jul 2022 • Davoud Ataee Tarzanagh, Parvin Nazari, BoJian Hou, Li Shen, Laura Balzano
This paper introduces online bilevel optimization, in which a sequence of time-varying bilevel problems is revealed one after the other.
1 code implementation • 11 Jun 2022 • Peng Wang, Huikang Liu, Anthony Man-Cho So, Laura Balzano
The K-subspaces (KSS) method is a generalization of the K-means method for subspace clustering.
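A minimal sketch of the KSS alternation (illustrative, not the paper's code): assign each point to the subspace with the smallest projection residual, then refit each subspace from the top singular vectors of its assigned points, in direct analogy to the K-means assign/update loop.

```python
import numpy as np

def k_subspaces(X, K, d, iters=20, seed=0):
    """Illustrative K-subspaces (KSS): alternate between assigning each
    row of X to the nearest d-dimensional subspace and refitting each
    subspace by SVD, mirroring the K-means assign/update alternation."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # random orthonormal bases as initialization
    U = [np.linalg.qr(rng.standard_normal((D, d)))[0] for _ in range(K)]
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        # assignment step: residual of projecting onto each subspace
        res = np.stack([np.linalg.norm(X - X @ Uk @ Uk.T, axis=1)
                        for Uk in U], axis=1)
        labels = res.argmin(axis=1)
        # update step: principal d-dim subspace of each cluster
        for k in range(K):
            Xk = X[labels == k]
            if len(Xk) >= d:
                _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
                U[k] = Vt[:d].T
    return labels, U
```

Like K-means, both steps are monotone, so the sum of squared residuals never increases across iterations; the paper's contribution concerns when this scheme provably recovers the true clustering.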
no code implementations • 5 May 2022 • Zhe Du, Laura Balzano, Necmiye Ozay
Switched systems are capable of modeling processes with underlying dynamics that may change abruptly over time.
no code implementations • 9 Dec 2021 • Davoud Ataee Tarzanagh, Laura Balzano, Alfred O. Hero
In particular, we assume there is some community or clustering structure in the true underlying graph, and we seek to learn a sparse undirected graph and its communities from the data such that demographic groups are fairly represented within the communities.
no code implementations • 13 Nov 2021 • Yahya Sattar, Zhe Du, Davoud Ataee Tarzanagh, Laura Balzano, Necmiye Ozay, Samet Oymak
Combining our sample complexity results with recent perturbation results for certainty equivalent control, we prove that when the episode lengths are appropriately chosen, the proposed adaptive control scheme achieves $\mathcal{O}(\sqrt{T})$ regret, which can be improved to $\mathcal{O}(\mathrm{polylog}(T))$ with partial knowledge of the system.
no code implementations • 26 May 2021 • Zhe Du, Yahya Sattar, Davoud Ataee Tarzanagh, Laura Balzano, Samet Oymak, Necmiye Ozay
Real-world control applications often involve complex dynamics subject to abrupt changes or variations.
no code implementations • 10 Nov 2020 • Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra S. Sripada, Clayton Scott
Methods for supervised principal component analysis (SPCA) aim to incorporate label information into principal component analysis (PCA), so that the extracted features are more useful for a prediction task of interest.
1 code implementation • 22 Feb 2020 • Amanda Bower, Laura Balzano
Finally, we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.
1 code implementation • 30 Jan 2020 • Kyle Gilman, Davoud Ataee Tarzanagh, Laura Balzano
We propose a new fast streaming algorithm for the tensor completion problem of imputing missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework.
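The t-SVD framework rests on the t-product, which is computable facewise in the Fourier domain; a minimal sketch of that operation (illustrative background, not the paper's streaming algorithm):

```python
import numpy as np

def t_product(A, B):
    """Illustrative t-product from the t-SVD algebraic framework:
    FFT along the third (tube) axis, facewise matrix products in the
    Fourier domain, then inverse FFT back."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    n3 = A.shape[2]
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]  # one face per frequency
    return np.real(np.fft.ifft(Cf, axis=2))
```

The tubal rank used in the completion problem is the number of nonzero singular tubes of the t-SVD defined through this product.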
1 code implementation • 5 Nov 2019 • Hanbaek Lyu, Deanna Needell, Laura Balzano
As the main application, by combining online non-negative matrix factorization and a recent MCMC algorithm for sampling motifs from networks, we propose a novel framework of Network Dictionary Learning, which extracts "network dictionary patches" from a given network in an online manner, encoding the main features of the network.
no code implementations • ICLR 2019 • Dejiao Zhang, Tianchen Zhao, Laura Balzano
Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.
no code implementations • 12 Jun 2018 • Laura Balzano, Yuejie Chi, Yue M. Lu
This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data.
no code implementations • 26 Apr 2018 • Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak
This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.
1 code implementation • ICLR 2018 • Dejiao Zhang, Haozhu Wang, Mario Figueiredo, Laura Balzano
This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers.
1 code implementation • 21 Dec 2017 • Dejiao Zhang, Yifan Sun, Brian Eriksson, Laura Balzano
Unsupervised clustering is one of the most fundamental challenges in machine learning.
no code implementations • 14 Sep 2017 • John Lipor, David Hong, Yan Shuo Tan, Laura Balzano
We present a novel geometric approach to the subspace clustering problem that leverages ensembles of the K-subspaces (KSS) algorithm via the evidence accumulation clustering framework.
1 code implementation • ICML 2017 • Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano
We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.
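A small illustration of the variety idea (a hypothetical example, not from the paper): points on the unit circle satisfy $x^2 + y^2 - 1 = 0$, so the degree-2 monomial lifting of the data matrix is rank-deficient even though the raw 2 x N matrix has full row rank.

```python
import numpy as np

# Points on the unit circle: the raw 2 x 50 data matrix has rank 2,
# but the polynomial constraint x^2 + y^2 - 1 = 0 makes the 6 x 50
# degree-2 monomial lifting rank-deficient (rank 5, not 6).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
X = np.vstack([np.cos(t), np.sin(t)])
x, y = X
lift = np.vstack([np.ones_like(x), x, y, x * x, x * y, y * y])
print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(lift))  # prints: 2 5
```

This is the sense in which data can be "low-rank in the tensorized representation but not in the original representation," which is what the variety-based completion approach exploits.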
no code implementations • 16 Jan 2017 • Gregory S. Ledva, Laura Balzano, Johanna L. Mathieu
We use an online learning algorithm, Dynamic Fixed Share (DFS), that uses the real-time distribution feeder measurements as well as models generated from historical building- and device-level data.
no code implementations • 12 Oct 2016 • David Hong, Laura Balzano, Jeffrey A. Fessler
Principal Component Analysis (PCA) is a method for estimating a subspace given noisy samples.
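A minimal sketch of PCA as subspace estimation, shown for context (this is the standard estimator the paper analyzes, not a method it introduces): center the samples and take the top right singular vectors.

```python
import numpy as np

def pca_subspace(X, d):
    """Estimate a d-dimensional subspace from noisy samples (rows of X)
    by taking the top-d right singular vectors of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T  # D x d orthonormal basis for the estimated subspace
```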
no code implementations • 1 Oct 2016 • Dejiao Zhang, Laura Balzano
We study two sampling cases: where each data vector of the streaming matrix is fully sampled, or where it is undersampled by a sampling matrix $A_t\in \mathbb{R}^{m\times n}$ with $m\ll n$.
no code implementations • ICML 2017 • John Lipor, Laura Balzano
We demonstrate on several datasets that our algorithm drives the clustering error down considerably faster than the state-of-the-art active query algorithms on datasets with subspace structure and is competitive on other datasets.
no code implementations • 13 Mar 2016 • Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak
Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features.
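A minimal sketch of the SIM structure $y = f(w^\top x)$, assuming the index vector $w$ is known (the paper's algorithms learn both $w$ and the link): the monotone link can then be estimated by isotonic regression of the responses on the projections, here via a small pool-adjacent-violators routine.

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    vals, wts = [], []
    for v in map(float, y):
        vals.append(v); wts.append(1.0)
        while len(vals) > 1 and vals[-2] > vals[-1]:  # pool violators
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            vals.pop(); wts.pop()
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * int(w))
    return out

# SIM data: y = f(w^T x) + noise with a monotone link f.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
w = np.array([1.0, -2.0, 0.5])          # assumed known for this sketch
z = X @ w
y = np.tanh(z) + 0.05 * rng.standard_normal(100)
order = np.argsort(z)
f_hat = np.array(pav(y[order]))         # nondecreasing estimate of f
```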
no code implementations • NeurIPS 2015 • Ravi Ganti, Laura Balzano, Rebecca Willett
Most recent results in matrix completion assume that the matrix under consideration is low-rank or that the columns are in a union of low-rank subspaces.
no code implementations • 28 Sep 2015 • John Lipor, Brandon Wong, Donald Scavia, Branko Kerkez, Laura Balzano
Adaptive sampling theory has shown that, with proper assumptions on the signal class, algorithms exist to reconstruct a signal in $\mathbb{R}^{d}$ with an optimal number of samples.
no code implementations • 24 Jun 2015 • Dejiao Zhang, Laura Balzano
It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex.
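In its simplest form, the non-convex problem referred to above is minimizing $\|UV^\top - M\|_F^2$ over the factors by plain gradient descent; a minimal sketch (illustrative, not the paper's analysis):

```python
import numpy as np

def factor_gd(M, r, step=0.02, iters=4000, seed=0):
    """Plain gradient descent on the non-convex factorization objective
    ||U V^T - M||_F^2, starting from a small random initialization."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, r))
    V = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        R = U @ V.T - M                          # current residual
        U, V = U - step * R @ V, V - step * R.T @ U  # simultaneous step
    return U, V
```

Despite the non-convexity, on exactly low-rank targets this recipe typically drives the residual to (near) zero, which is the empirical phenomenon the paper seeks to explain.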
no code implementations • 26 Sep 2013 • Ryan Kennedy, Laura Balzano, Stephen J. Wright, Camillo J. Taylor
We present a family of online algorithms for real-time factorization-based structure from motion, leveraging a relationship between incremental singular value decomposition and recently proposed methods for online matrix completion.
no code implementations • 21 Jul 2013 • Laura Balzano, Stephen J. Wright
GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration.
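One GROUSE-style update can be sketched as follows (a simplified rendering with an illustrative step-size rule, not the authors' reference code): solve a least-squares problem on the revealed components, zero-fill the residual, and rotate the basis along a rank-one geodesic, which keeps the basis exactly orthonormal.

```python
import numpy as np

def grouse_step(U, idx, v_obs, eta=0.1):
    """One illustrative GROUSE update. U is an n x d orthonormal basis,
    idx the revealed component indices, v_obs the revealed values."""
    n, d = U.shape
    w, *_ = np.linalg.lstsq(U[idx], v_obs, rcond=None)
    p = U @ w                        # prediction on all coordinates
    r = np.zeros(n)
    r[idx] = v_obs - U[idx] @ w      # residual, zero off the sample set
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                     # vector already explained: no rotation
    theta = eta * rn * pn            # illustrative step-size choice
    step = (np.cos(theta) - 1) * p / pn + np.sin(theta) * r / rn
    return U + np.outer(step, w / wn)  # rank-one geodesic rotation
```

Because the residual is orthogonal to the current subspace, this rank-one update preserves orthonormality of the basis, so no re-orthogonalization is needed between iterations.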
no code implementations • 3 Jun 2013 • Jun He, Dejiao Zhang, Laura Balzano, Tao Tao
t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate a decomposition of a collection of images into a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of the image.
1 code implementation • 18 Sep 2011 • Jun He, Laura Balzano, John C. S. Lui
This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information.
1 code implementation • 21 Jun 2010 • Laura Balzano, Robert Nowak, Benjamin Recht
GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.