Search Results for author: Laura Balzano

Found 40 papers, 15 papers with code

Online Identification and Tracking of Subspaces from Highly Incomplete Information

1 code implementation21 Jun 2010 Laura Balzano, Robert Nowak, Benjamin Recht

GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.

Matrix Completion
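The GROUSE update is a rank-one geodesic step on the Grassmannian computed from only the observed entries of each incoming vector. Below is a minimal numpy sketch of one such step, assuming a fixed step-size parameter `eta` (a simplified illustration of the update rule, not the authors' reference implementation):

```python
import numpy as np

def grouse_step(U, v, omega, eta=0.3):
    """One GROUSE-style update of an orthonormal basis U (n x d) from a
    partially observed vector: v holds the values at indices omega."""
    U_om = U[omega]                                    # observed rows of U
    w, *_ = np.linalg.lstsq(U_om, v, rcond=None)       # projection weights
    p = U @ w                                          # in-subspace estimate
    r = np.zeros(U.shape[0])
    r[omega] = v - U_om @ w                            # residual (zero-filled)
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or pn < 1e-12 or wn < 1e-12:
        return U                                       # nothing to correct
    t = eta * rn * pn                                  # self-annealing step
    step = (np.cos(t) - 1.0) * p / pn + np.sin(t) * r / rn
    return U + np.outer(step, w / wn)                  # rank-one geodesic step
```

Because the step size scales with the residual norm, the updates shrink automatically as the estimate approaches the true subspace.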

Online Robust Subspace Tracking from Partial Information

1 code implementation18 Sep 2011 Jun He, Laura Balzano, John C. S. Lui

This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information.

Matrix Completion

Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning

1 code implementation ICLR 2018 Dejiao Zhang, Haozhu Wang, Mario Figueiredo, Laura Balzano

This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers.

Online matrix factorization for Markovian data and applications to Network Dictionary Learning

1 code implementation5 Nov 2019 Hanbaek Lyu, Deanna Needell, Laura Balzano

As the main application, by combining online non-negative matrix factorization with a recent MCMC algorithm for sampling motifs from networks, we propose Network Dictionary Learning, a novel framework that extracts, in an online manner, "network dictionary patches" encoding the main features of a given network.

Denoising Dictionary Learning

Grassmannian Optimization for Online Tensor Completion and Tracking with the t-SVD

1 code implementation30 Jan 2020 Kyle Gilman, Davoud Ataee Tarzanagh, Laura Balzano

We propose a new fast streaming algorithm for the tensor completion problem of imputing missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework.
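The t-SVD framework replaces matrix multiplication with the t-product, a circular convolution along the tube dimension that becomes slice-wise matrix multiplication after an FFT. The following numpy sketch implements the standard t-product and t-SVD definitions (an illustration of the algebraic framework only, not the authors' streaming completion algorithm):

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3): slice-wise
    matrix products in the Fourier domain along axis 2."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))   # exactly real for real inputs

def t_svd(A):
    """t-SVD A = U * S * V^T (with * the t-product): SVD of each frontal
    slice in the Fourier domain, exploiting conjugate symmetry."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, r, n3), complex)
    Sf = np.zeros((r, r, n3), complex)
    Vtf = np.zeros((r, n2, n3), complex)
    for k in range(n3 // 2 + 1):
        M = Af[:, :, k]
        if k == 0 or 2 * k == n3:              # self-conjugate slices are real
            M = M.real
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        Uf[:, :, k], Vtf[:, :, k] = U, Vh
        np.fill_diagonal(Sf[:, :, k], s)
        if 0 < k < n3 - k:                     # mirror slice is the conjugate
            Uf[:, :, n3 - k], Vtf[:, :, n3 - k] = U.conj(), Vh.conj()
            np.fill_diagonal(Sf[:, :, n3 - k], s)
    back = lambda T: np.real(np.fft.ifft(T, axis=2))
    return back(Uf), back(Sf), back(Vtf)
```

The tubal rank is the number of nonzero tubes of S; low-tubal-rank completion imputes missing entries under this structure.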

Deep Unsupervised Clustering Using Mixture of Autoencoders

1 code implementation21 Dec 2017 Dejiao Zhang, Yifan Sun, Brian Eriksson, Laura Balzano

Unsupervised clustering is one of the most fundamental challenges in machine learning.

Clustering

The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks

1 code implementation1 Jun 2023 Can Yaras, Peng Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu

Second, it allows us to better understand deep representation learning by elucidating the linear progressive separation and concentration of representations from shallow to deep layers.

Representation Learning

Algebraic Variety Models for High-Rank Matrix Completion

1 code implementation ICML 2017 Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano

We consider a generalization of low-rank matrix completion to the case where the data belong to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.

Clustering Low-Rank Matrix Completion +1

Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods

1 code implementation6 Jul 2022 Davoud Ataee Tarzanagh, Parvin Nazari, BoJian Hou, Li Shen, Laura Balzano

This paper introduces \textit{online bilevel optimization} in which a sequence of time-varying bilevel problems is revealed one after the other.

Bilevel Optimization

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

1 code implementation19 Sep 2022 Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu

When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon.

Multi-class Classification Representation Learning +1

Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination

1 code implementation6 Nov 2023 Peng Wang, Xiao Li, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu

To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.

Feature Compression Multi-class Classification +2

Preference Modeling with Context-Dependent Salient Features

1 code implementation22 Feb 2020 Amanda Bower, Laura Balzano

Finally we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.

Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering

1 code implementation11 Jun 2022 Peng Wang, Huikang Liu, Anthony Man-Cho So, Laura Balzano

The K-subspaces (KSS) method is a generalization of the K-means method for subspace clustering.

Clustering
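Each KSS iteration alternates a K-means-style assignment with a PCA refit of every subspace. A compact numpy sketch under simplifying assumptions (batch data, a common subspace dimension d, random initialization):

```python
import numpy as np

def k_subspaces(X, K, d, iters=20, seed=0):
    """K-subspaces sketch: X is n x N (columns are data points).
    Alternates (1) assigning each point to the subspace with smallest
    residual, (2) refitting each subspace as the top-d PCA basis of
    its assigned points."""
    rng = np.random.default_rng(seed)
    n, N = X.shape
    bases = [np.linalg.qr(rng.standard_normal((n, d)))[0] for _ in range(K)]
    labels = np.zeros(N, dtype=int)
    for _ in range(iters):
        resid = np.stack([np.linalg.norm(X - U @ (U.T @ X), axis=0)
                          for U in bases])             # K x N residuals
        labels = resid.argmin(axis=0)                  # assignment step
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] >= d:                       # enough points to refit
                Uk, _, _ = np.linalg.svd(Xk, full_matrices=False)
                bases[k] = Uk[:, :d]                   # PCA refit step
    return labels, bases
```

Like K-means, both steps can only decrease the sum of squared residuals, so the objective is monotonically non-increasing; the paper's contribution is proving when this scheme actually recovers the true subspaces.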

Streaming PCA and Subspace Tracking: The Missing Data Case

no code implementations12 Jun 2018 Laura Balzano, Yuejie Chi, Yue M. Lu

This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data.

Decision Making

Real-Time Energy Disaggregation of a Distribution Feeder's Demand Using Online Learning

no code implementations16 Jan 2017 Gregory S. Ledva, Laura Balzano, Johanna L. Mathieu

We use an online learning algorithm, Dynamic Fixed Share (DFS), that uses the real-time distribution feeder measurements as well as models generated from historical building- and device-level data.

Tensor Methods for Nonlinear Matrix Completion

no code implementations26 Apr 2018 Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak

This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.

Low-Rank Matrix Completion

Subspace Clustering using Ensembles of $K$-Subspaces

no code implementations14 Sep 2017 John Lipor, David Hong, Yan Shuo Tan, Laura Balzano

We present a novel geometric approach to the subspace clustering problem that leverages ensembles of the K-subspaces (KSS) algorithm via the evidence accumulation clustering framework.

Clustering

Leveraging Union of Subspace Structure to Improve Constrained Clustering

no code implementations ICML 2017 John Lipor, Laura Balzano

We demonstrate on several datasets that our algorithm drives the clustering error down considerably faster than the state-of-the-art active query algorithms on datasets with subspace structure and is competitive on other datasets.

Constrained Clustering

Distance-Penalized Active Learning Using Quantile Search

no code implementations28 Sep 2015 John Lipor, Brandon Wong, Donald Scavia, Branko Kerkez, Laura Balzano

Adaptive sampling theory has shown that, with proper assumptions on the signal class, algorithms exist to reconstruct a signal in $\mathbb{R}^{d}$ with an optimal number of samples.

Active Learning

On Learning High Dimensional Structured Single Index Models

no code implementations13 Mar 2016 Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features.

Vocal Bursts Intensity Prediction
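A classical two-stage estimator for a SIM y = g(w^T x) fits the index direction by least squares (for Gaussian features this is proportional to the true direction, by Brillinger's identity) and then fits the monotone link g by isotonic regression. The sketch below writes pool-adjacent-violators out by hand; it illustrates the model class, not the structured high-dimensional estimator of the paper:

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: best nondecreasing fit to y in L2."""
    blocks = []                                  # [mean, count] per block
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, c2 = blocks.pop()                # merge violating neighbors
            v1, c1 = blocks.pop()
            blocks.append([(v1 * c1 + v2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([[v] * c for v, c in blocks])

def fit_sim(X, y):
    """Two-stage SIM fit: index direction via least squares, then a
    monotone link via isotonic regression along the fitted index."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = w / np.linalg.norm(w)
    z = X @ w
    order = np.argsort(z)
    g_hat = pav(y[order])                        # monotone fit vs sorted z
    return w, z[order], g_hat
```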

Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data

no code implementations1 Oct 2016 Dejiao Zhang, Laura Balzano

We study two sampling cases: where each data vector of the streaming matrix is fully sampled, or where it is undersampled by a sampling matrix $A_t\in \mathbb{R}^{m\times n}$ with $m\ll n$.

Online Algorithms for Factorization-Based Structure from Motion

no code implementations26 Sep 2013 Ryan Kennedy, Laura Balzano, Stephen J. Wright, Camillo J. Taylor

We present a family of online algorithms for real-time factorization-based structure from motion, leveraging a relationship between incremental singular value decomposition and recently proposed methods for online matrix completion.

Matrix Completion

Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation

no code implementations24 Jun 2015 Dejiao Zhang, Laura Balzano

It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex.

Matrix Completion Under Monotonic Single Index Models

no code implementations NeurIPS 2015 Ravi Ganti, Laura Balzano, Rebecca Willett

Most recent results in matrix completion assume that the matrix under consideration is low-rank or that the columns are in a union of low-rank subspaces.

Matrix Completion

On GROUSE and Incremental SVD

no code implementations21 Jul 2013 Laura Balzano, Stephen J. Wright

GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration.

Iterative Grassmannian Optimization for Robust Image Alignment

no code implementations3 Jun 2013 Jun He, Dejiao Zhang, Laura Balzano, Tao Tao

t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate a decomposition of a collection of images into a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of the image.

Face Recognition

Information Maximization Auto-Encoding

no code implementations ICLR 2019 Dejiao Zhang, Tianchen Zhao, Laura Balzano

Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.

Disentanglement Informativeness +1

Preference modelling with context-dependent salient features

no code implementations ICML 2020 Amanda Bower, Laura Balzano

Finally we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the United States.

Supervised PCA: A Multiobjective Approach

no code implementations10 Nov 2020 Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra S. Sripada, Clayton Scott

Methods for supervised principal component analysis (SPCA) aim to incorporate label information into principal component analysis (PCA), so that the extracted features are more useful for a prediction task of interest.

Certainty Equivalent Quadratic Control for Markov Jump Systems

no code implementations26 May 2021 Zhe Du, Yahya Sattar, Davoud Ataee Tarzanagh, Laura Balzano, Samet Oymak, Necmiye Ozay

Real-world control applications often involve complex dynamics subject to abrupt changes or variations.

Identification and Adaptive Control of Markov Jump Systems: Sample Complexity and Regret Bounds

no code implementations13 Nov 2021 Yahya Sattar, Zhe Du, Davoud Ataee Tarzanagh, Laura Balzano, Necmiye Ozay, Samet Oymak

Combining our sample complexity results with recent perturbation results for certainty equivalent control, we prove that when the episode lengths are appropriately chosen, the proposed adaptive control scheme achieves $\mathcal{O}(\sqrt{T})$ regret, which can be improved to $\mathcal{O}(\mathrm{polylog}(T))$ with partial knowledge of the system.

Fair Community Detection and Structure Learning in Heterogeneous Graphical Models

no code implementations9 Dec 2021 Davoud Ataee Tarzanagh, Laura Balzano, Alfred O. Hero

In particular, we assume there is some community or clustering structure in the true underlying graph, and we seek to learn a sparse undirected graph and its communities from the data such that demographic groups are fairly represented within the communities.

Community Detection Fairness +1

Mode Reduction for Markov Jump Systems

no code implementations5 May 2022 Zhe Du, Laura Balzano, Necmiye Ozay

Switched systems are capable of modeling processes with underlying dynamics that may change abruptly over time.

HeMPPCAT: Mixtures of Probabilistic Principal Component Analysers for Data with Heteroscedastic Noise

no code implementations21 Jan 2023 Alec S. Xu, Laura Balzano, Jeffrey A. Fessler

Mixtures of probabilistic principal component analysis (MPPCA) is a well-known mixture model extension of principal component analysis (PCA).

Clustering

Dynamic Subspace Estimation with Grassmannian Geodesics

no code implementations26 Mar 2023 Cameron J. Blocker, Haroon Raja, Jeffrey A. Fessler, Laura Balzano

We propose a novel algorithm for minimizing this objective and estimating the parameters of the model from data with Grassmannian-constrained optimization.

ALPCAH: Sample-wise Heteroscedastic PCA with Tail Singular Value Regularization

1 code implementation6 Jul 2023 Javier Salazar Cavazos, Jeffrey A. Fessler, Laura Balzano

Other methods such as Weighted PCA (WPCA) assume the noise variances are known, which may be difficult to know in practice.

Dimensionality Reduction

Streaming Probabilistic PCA for Missing Data with Heteroscedastic Noise

no code implementations10 Oct 2023 Kyle Gilman, David Hong, Jeffrey A. Fessler, Laura Balzano

Streaming principal component analysis (PCA) is an integral tool in large-scale machine learning for rapidly estimating low-dimensional subspaces of very high dimensional and high arrival-rate data with missing entries and corrupting noise.

Astronomy

Convergence and complexity of block majorization-minimization for constrained block-Riemannian optimization

no code implementations16 Dec 2023 Yuchen Li, Laura Balzano, Deanna Needell, Hanbaek Lyu

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex optimization that sequentially minimizes a majorizing surrogate of the objective function in each block coordinate while the other block coordinates are held fixed.

Dictionary Learning Riemannian optimization
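In its simplest instance, BMM with a Lipschitz quadratic majorizer in each block reduces to alternating gradient steps with step size 1/L, and the objective decreases monotonically by construction. A toy numpy sketch on the unconstrained factorization objective ||M - AB||_F^2 (an illustration of the BMM template only, not the constrained block-Riemannian setting analyzed in the paper):

```python
import numpy as np

def bmm_factorize(M, r, iters=100, seed=0):
    """Block majorization-minimization for min_{A,B} ||M - A @ B||_F^2.
    Each block update minimizes a Lipschitz quadratic majorizer of the
    objective, i.e. takes a gradient step of size 1/L for that block."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, r))
    B = rng.standard_normal((r, n))
    for _ in range(iters):
        L_A = 2 * np.linalg.norm(B @ B.T, 2) + 1e-12   # Lipschitz const of grad_A
        A = A - 2 * ((A @ B - M) @ B.T) / L_A          # majorizer minimizer
        L_B = 2 * np.linalg.norm(A.T @ A, 2) + 1e-12
        B = B - 2 * (A.T @ (A @ B - M)) / L_B
    return A, B
```

Because each surrogate touches the objective at the current iterate and upper-bounds it everywhere, every block update can only decrease ||M - AB||_F^2, which is the monotonicity property the paper's convergence analysis builds on.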
