Search Results for author: Laura Balzano

Found 26 papers, 6 papers with code

Preference modelling with context-dependent salient features

no code implementations ICML 2020 Amanda Bower, Laura Balzano

Finally, we demonstrate the strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the United States.

Certainty Equivalent Quadratic Control for Markov Jump Systems

no code implementations26 May 2021 Zhe Du, Yahya Sattar, Davoud Ataee Tarzanagh, Laura Balzano, Samet Oymak, Necmiye Ozay

Real-world control applications often involve complex dynamics subject to abrupt changes or variations.

Supervised PCA: A Multiobjective Approach

no code implementations10 Nov 2020 Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra S. Sripada, Clayton Scott

Methods for supervised principal component analysis (SPCA) aim to incorporate label information into principal component analysis (PCA), so that the extracted features are more useful for a prediction task of interest.
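The snippet above frames SPCA as a trade-off between variance explained and usefulness for prediction. Below is a minimal numpy sketch of one common scalarized heuristic, weighting the data covariance against a label-covariance term; the function name and the fixed weighting `alpha` are illustrative assumptions, not the paper's multiobjective formulation.

```python
import numpy as np

def supervised_pca(X, y, k, alpha=0.5):
    """Toy scalarized supervised PCA: trade off variance in X against
    covariance with the label y. Illustrative only; the paper proposes a
    multiobjective formulation rather than this fixed weighting."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    C_var = Xc.T @ Xc / len(X)                     # ordinary PCA objective
    c_xy = Xc.T @ yc / len(X)
    C_sup = np.outer(c_xy, c_xy)                   # label-alignment term
    M = (1 - alpha) * C_var + alpha * C_sup
    eigvals, eigvecs = np.linalg.eigh(M)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k directions
    return Xc @ W                                  # supervised features

# toy usage: the feature correlated with y gets emphasized as alpha grows
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 3] + 0.1 * rng.normal(size=200)
Z = supervised_pca(X, y, k=2, alpha=0.9)
print(Z.shape)  # (200, 2)
```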

Preference Modeling with Context-Dependent Salient Features

1 code implementation22 Feb 2020 Amanda Bower, Laura Balzano

Finally, we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US.

Grassmannian Optimization for Online Tensor Completion and Tracking with the t-SVD

1 code implementation30 Jan 2020 Kyle Gilman, Laura Balzano

We propose a new fast streaming algorithm for the tensor completion problem of imputing missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework.
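The t-SVD framework mentioned above factors a third-order tensor through ordinary matrix SVDs of its frontal slices in the Fourier domain along the third mode. Below is a minimal numpy sketch of that algebraic decomposition, assuming a dense tensor with no missing entries; the paper's streaming completion algorithm on the Grassmannian is not reproduced here.

```python
import numpy as np

def t_svd(T):
    """Tensor SVD in the t-product framework: FFT along the tube (3rd) mode,
    per-slice matrix SVDs, then inverse FFT back to the original domain."""
    n1, n2, n3 = T.shape
    Tf = np.fft.fft(T, axis=2)
    U = np.zeros((n1, n1, n3), dtype=complex)
    S = np.zeros((n1, n2, n3), dtype=complex)
    V = np.zeros((n2, n2, n3), dtype=complex)
    m = min(n1, n2)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Tf[:, :, k])
        U[:, :, k] = u
        S[:m, :m, k][np.diag_indices(m)] = s      # f-diagonal slice of singular values
        V[:, :, k] = vh.conj().T
    return (np.fft.ifft(U, axis=2).real,
            np.fft.ifft(S, axis=2).real,
            np.fft.ifft(V, axis=2).real)

# toy usage on a random 5 x 4 x 3 tensor
T = np.random.default_rng(0).normal(size=(5, 4, 3))
U, S, V = t_svd(T)
```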

Online matrix factorization for Markovian data and applications to Network Dictionary Learning

1 code implementation5 Nov 2019 Hanbaek Lyu, Deanna Needell, Laura Balzano

As the main application, by combining online non-negative matrix factorization and a recent MCMC algorithm for sampling motifs from networks, we propose a novel framework of Network Dictionary Learning, which extracts "network dictionary patches" from a given network in an online manner that encodes main features of the network.
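The framework above builds on online (streaming) nonnegative matrix factorization. Below is a rough sketch of a generic Mairal-style online NMF pass, accumulating sufficient statistics and refreshing a nonnegative dictionary column by column; the function names are hypothetical and the paper's Markovian-data analysis and MCMC motif sampler are not reproduced.

```python
import numpy as np

def sparse_code(W, x, n_iter=50, lam=0.1):
    """Nonnegative code h approximating argmin ||x - W h||^2 + lam*||h||_1,
    via projected gradient (illustrative; any NNLS/lasso solver would do)."""
    h = np.zeros(W.shape[1])
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        h = np.maximum(0.0, h - step * (W.T @ (W @ h - x) + lam))
    return h

def online_nmf(stream, n_components, d, n_dict_iter=5):
    """One pass over a data stream of vectors in R^d, maintaining aggregates
    A, B and a nonnegative dictionary W (columns = learned 'patches')."""
    rng = np.random.default_rng(0)
    W = np.abs(rng.normal(size=(d, n_components)))
    A = np.zeros((n_components, n_components))
    B = np.zeros((d, n_components))
    for x in stream:
        h = sparse_code(W, x)
        A += np.outer(h, h)
        B += np.outer(x, h)
        for _ in range(n_dict_iter):              # block-coordinate dictionary refresh
            for j in range(n_components):
                grad = W @ A[:, j] - B[:, j]
                W[:, j] = np.maximum(0.0, W[:, j] - grad / (A[j, j] + 1e-12))
    return W

# toy usage: 500 random nonnegative vectors in R^20
data = np.abs(np.random.default_rng(1).normal(size=(500, 20)))
W = online_nmf(iter(data), n_components=5, d=20)
```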

Denoising Dictionary Learning

Information Maximization Auto-Encoding

no code implementations ICLR 2019 Dejiao Zhang, Tianchen Zhao, Laura Balzano

Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.
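A minimal sketch of the hybrid representation described above, assuming PyTorch: a stochastic encoder that emits a discrete code via Gumbel-softmax alongside a reparameterized continuous code. The class is hypothetical and shows only the representation structure; IMAE's information-maximization objective and decoder are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEncoder(nn.Module):
    """Maps x to (stochastic discrete code, stochastic continuous code).
    Sketch of the representation only; the IMAE loss is not reproduced."""
    def __init__(self, x_dim, n_classes, z_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_classes)   # discrete part
        self.mu = nn.Linear(hidden, z_dim)           # continuous part
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x, tau=0.5):
        h = self.body(x)
        y = F.gumbel_softmax(self.logits(h), tau=tau, hard=False)  # relaxed discrete sample
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)   # reparameterized Gaussian
        return y, z

enc = HybridEncoder(x_dim=784, n_classes=10, z_dim=16)
y, z = enc(torch.randn(32, 784))
print(y.shape, z.shape)  # torch.Size([32, 10]) torch.Size([32, 16])
```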

Streaming PCA and Subspace Tracking: The Missing Data Case

no code implementations12 Jun 2018 Laura Balzano, Yuejie Chi, Yue M. Lu

This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data.

Decision Making

Tensor Methods for Nonlinear Matrix Completion

no code implementations26 Apr 2018 Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak

This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.

Low-Rank Matrix Completion
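The claim above, that data can be low-rank in a tensorized (lifted) representation while full-rank in the original one, can be checked on a toy example: points on the unit circle give a full-rank 2 x N matrix, but their degree-2 monomial lift satisfies the single relation x^2 + y^2 = 1 and is therefore rank-deficient. A short numpy check, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=50)
X = np.vstack([np.cos(theta), np.sin(theta)])   # 2 x 50, points on the unit circle

# degree-2 monomial (Veronese) lift of each column: [1, x, y, x^2, x*y, y^2]
lift = np.vstack([np.ones_like(theta), X[0], X[1], X[0]**2, X[0]*X[1], X[1]**2])

print(np.linalg.matrix_rank(X))     # 2: full rank, no linear structure to exploit
print(np.linalg.matrix_rank(lift))  # 5 (< 6): the relation x^2 + y^2 - 1 = 0 drops the rank
```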

Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning

no code implementations ICLR 2018 Dejiao Zhang, Haozhu Wang, Mario Figueiredo, Laura Balzano

This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers.

Deep Unsupervised Clustering Using Mixture of Autoencoders

1 code implementation21 Dec 2017 Dejiao Zhang, Yifan Sun, Brian Eriksson, Laura Balzano

Unsupervised clustering is one of the most fundamental challenges in machine learning.

Subspace Clustering using Ensembles of $K$-Subspaces

no code implementations14 Sep 2017 John Lipor, David Hong, Yan Shuo Tan, Laura Balzano

We present a novel geometric approach to the subspace clustering problem that leverages ensembles of the K-subspaces (KSS) algorithm via the evidence accumulation clustering framework.
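Below is a rough sketch of the ensemble idea described above, assuming numpy and scikit-learn: run K-subspaces (KSS) from several random initializations, accumulate a co-association (evidence accumulation) matrix counting how often pairs of points are co-clustered, and cluster that matrix. Function names, the spectral-clustering consensus step, and all parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def kss(X, K, dim, n_iter=20, rng=None):
    """One run of K-subspaces on X (d x N): alternate nearest-subspace
    assignment and per-cluster PCA of the assigned points."""
    rng = rng or np.random.default_rng()
    d, N = X.shape
    bases = [np.linalg.qr(rng.normal(size=(d, dim)))[0] for _ in range(K)]
    labels = rng.integers(K, size=N)
    for _ in range(n_iter):
        resid = np.stack([np.linalg.norm(X - U @ (U.T @ X), axis=0) for U in bases])
        labels = resid.argmin(axis=0)                 # assign to closest subspace
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] >= dim:                    # refit basis by truncated SVD
                bases[k] = np.linalg.svd(Xk, full_matrices=False)[0][:, :dim]
    return labels

def ensemble_kss(X, K, dim, n_runs=50):
    """Evidence accumulation over many KSS runs, then consensus clustering."""
    N = X.shape[1]
    A = np.zeros((N, N))
    rng = np.random.default_rng(0)
    for _ in range(n_runs):
        labels = kss(X, K, dim, rng=rng)
        A += labels[:, None] == labels[None, :]       # co-association counts
    A /= n_runs
    return SpectralClustering(n_clusters=K, affinity="precomputed",
                              random_state=0).fit_predict(A)

# toy usage: two 2-dimensional subspaces in R^10
rng = np.random.default_rng(1)
U1, U2 = (np.linalg.qr(rng.normal(size=(10, 2)))[0] for _ in range(2))
X = np.hstack([U1 @ rng.normal(size=(2, 40)), U2 @ rng.normal(size=(2, 40))])
labels = ensemble_kss(X, K=2, dim=2)
```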

Algebraic Variety Models for High-Rank Matrix Completion

no code implementations ICML 2017 Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano

We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.

Low-Rank Matrix Completion

Real-Time Energy Disaggregation of a Distribution Feeder's Demand Using Online Learning

no code implementations16 Jan 2017 Gregory S. Ledva, Laura Balzano, Johanna L. Mathieu

We use an online learning algorithm, Dynamic Fixed Share (DFS), that uses the real-time distribution feeder measurements as well as models generated from historical building- and device-level data.
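The online learner referenced above is a fixed-share-style expert-tracking scheme. Below is a minimal sketch of the classic Fixed Share update (exponential weights followed by a mixing step so the forecaster can switch experts); the paper's Dynamic Fixed Share variant and its power-system models are not reproduced.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.05):
    """losses: T x K array, losses[t, k] = loss of expert k at time t.
    Returns the weight trajectory (T x K) of a Fixed Share forecaster."""
    T, K = losses.shape
    w = np.full(K, 1.0 / K)
    history = np.zeros((T, K))
    for t in range(T):
        history[t] = w
        v = w * np.exp(-eta * losses[t])      # exponential-weights update
        v /= v.sum()
        w = (1 - alpha) * v + alpha / K       # share a little mass with every expert
    return history

# toy usage: expert 0 is best early, expert 1 is best later; the weights track the switch
losses = np.column_stack([np.r_[np.zeros(100), np.ones(100)],
                          np.r_[np.ones(100), np.zeros(100)]])
W = fixed_share(losses)
print(W[50].round(2), W[150].round(2))
```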

Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data

no code implementations1 Oct 2016 Dejiao Zhang, Laura Balzano

We study two sampling cases: where each data vector of the streaming matrix is fully sampled, or where it is undersampled by a sampling matrix $A_t\in \mathbb{R}^{m\times n}$ with $m\ll n$.

Leveraging Union of Subspace Structure to Improve Constrained Clustering

no code implementations ICML 2017 John Lipor, Laura Balzano

We demonstrate on several datasets that our algorithm drives the clustering error down considerably faster than the state-of-the-art active query algorithms on datasets with subspace structure and is competitive on other datasets.

On Learning High Dimensional Structured Single Index Models

no code implementations13 Mar 2016 Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features.
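A single index model posits y ≈ g(⟨w, x⟩) with an unknown monotone link g. The sketch below fits such a model with a classic Isotron-style alternation (a perceptron-like update of w and an isotonic-regression fit of g), assuming scikit-learn is available; it illustrates the model class only, not the paper's high-dimensional structured estimator.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotron(X, y, n_iter=100):
    """Alternate a perceptron-like update of the index vector w with an
    isotonic fit of the monotone link g."""
    n, d = X.shape
    w = np.zeros(d)
    iso = IsotonicRegression(out_of_bounds="clip")
    g = lambda s: s                          # start with the identity link
    for _ in range(n_iter):
        s = X @ w
        w = w + (y - g(s)) @ X / n           # Isotron update of the index vector
        s = X @ w
        iso.fit(s, y)                        # monotone (isotonic) link fit
        g = iso.predict
    return w, g

# toy usage: y = sigmoid(<w*, x>) + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_star = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = 1 / (1 + np.exp(-(X @ w_star))) + 0.05 * rng.normal(size=500)
w_hat, g_hat = isotron(X, y)
```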

Matrix Completion Under Monotonic Single Index Models

no code implementations NeurIPS 2015 Ravi Ganti, Laura Balzano, Rebecca Willett

Most recent results in matrix completion assume that the matrix under consideration is low-rank or that the columns are in a union of low-rank subspaces.

Matrix Completion

Distance-Penalized Active Learning Using Quantile Search

no code implementations28 Sep 2015 John Lipor, Brandon Wong, Donald Scavia, Branko Kerkez, Laura Balzano

Adaptive sampling theory has shown that, with proper assumptions on the signal class, algorithms exist to reconstruct a signal in $\mathbb{R}^{d}$ with an optimal number of samples.

Active Learning
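The quantile-search idea in the title above can be illustrated on a 1-D change-point problem: rather than bisecting the uncertainty interval (fewest samples, but long moves for a physical sensor), each new sample is taken only a fraction q of the way into the interval, trading extra samples for shorter moves per step. The sketch below is a simplified illustration under these assumptions, not the paper's algorithm or its analysis.

```python
def quantile_search(f, lo=0.0, hi=1.0, q=0.5, tol=1e-3):
    """Locate the jump of a step function f: [lo, hi] -> {0, 1}. q = 0.5
    recovers bisection; smaller q means shorter moves per step but more
    samples. Returns (estimate, number of samples, total travel)."""
    pos, samples, travel = lo, 0, 0.0
    while hi - lo > tol:
        x = lo + q * (hi - lo)       # sample a fraction q into the interval
        travel += abs(x - pos)
        pos, samples = x, samples + 1
        if f(x) == 0:
            lo = x                   # change point lies to the right
        else:
            hi = x                   # change point lies to the left
    return 0.5 * (lo + hi), samples, travel

# toy usage: true change point at 0.73
step = lambda x: 0 if x < 0.73 else 1
for q in (0.5, 0.25):
    est, n, d = quantile_search(step, q=q)
    print(f"q={q}: estimate={est:.3f}, samples={n}, travel={d:.2f}")
```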

Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation

no code implementations24 Jun 2015 Dejiao Zhang, Laura Balzano

It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex.
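That observation is easy to reproduce on a toy instance: factor a rank-r matrix M as U V^T and run plain gradient descent on the squared Frobenius error from a small random initialization. A short numpy check, illustrative only and independent of the paper's Grassmannian analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 40, 30, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))    # ground-truth rank-3 matrix

U = 0.1 * rng.normal(size=(n, r))                         # small random init (non-convex problem)
V = 0.1 * rng.normal(size=(m, r))
step = 0.2 / np.linalg.norm(M, 2)                         # conservative step size
for _ in range(2000):
    R = U @ V.T - M                                       # residual
    U, V = U - step * R @ V, V - step * R.T @ U           # gradient steps on (1/2)||U V^T - M||_F^2
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))    # relative error; approaches zero despite non-convexity
```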

Online Algorithms for Factorization-Based Structure from Motion

no code implementations26 Sep 2013 Ryan Kennedy, Laura Balzano, Stephen J. Wright, Camillo J. Taylor

We present a family of online algorithms for real-time factorization-based structure from motion, leveraging a relationship between incremental singular value decomposition and recently proposed methods for online matrix completion.

Matrix Completion, Structure from Motion

On GROUSE and Incremental SVD

no code implementations21 Jul 2013 Laura Balzano, Stephen J. Wright

GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration.
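A minimal numpy sketch of a GROUSE-style rank-one update: fit least-squares weights on the observed rows of the current basis, form the zero-filled residual, and rotate the basis along the corresponding geodesic. The constant step size used here is an assumption for illustration, and the incremental-SVD connection analyzed in the paper is omitted.

```python
import numpy as np

def grouse_step(U, v_obs, omega, eta=1.0):
    """One GROUSE-style update of an orthonormal basis U (n x k) from a
    partially observed vector: v_obs holds the values on the index set omega."""
    n, k = U.shape
    U_om = U[omega]                                    # rows we actually observe
    w, *_ = np.linalg.lstsq(U_om, v_obs, rcond=None)   # best fit within the current subspace
    p = U @ w                                          # predicted full vector
    r = np.zeros(n)
    r[omega] = v_obs - U_om @ w                        # residual, zero off the observed set
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:
        return U                                       # vector already explained; no rotation
    t = eta * sigma
    step_dir = ((np.cos(t) - 1) * p / np.linalg.norm(p)
                + np.sin(t) * r / np.linalg.norm(r))
    return U + np.outer(step_dir, w / np.linalg.norm(w))

# toy usage: track a 2-d subspace of R^20 from 50%-observed vectors
rng = np.random.default_rng(0)
n, k = 20, 2
U_true = np.linalg.qr(rng.normal(size=(n, k)))[0]
U = np.linalg.qr(rng.normal(size=(n, k)))[0]
for _ in range(500):
    omega = rng.choice(n, size=10, replace=False)
    v = U_true @ rng.normal(size=k)
    U = grouse_step(U, v[omega], omega, eta=0.2)
print(np.linalg.norm(U_true @ U_true.T - U @ U.T))     # subspace error; should be small
```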

Iterative Grassmannian Optimization for Robust Image Alignment

no code implementations3 Jun 2013 Jun He, Dejiao Zhang, Laura Balzano, Tao Tao

t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate a decomposition of a collection of images into a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of the image.

Face Recognition

Online Robust Subspace Tracking from Partial Information

1 code implementation18 Sep 2011 Jun He, Laura Balzano, John C. S. Lui

This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information.

Matrix Completion

Online Identification and Tracking of Subspaces from Highly Incomplete Information

1 code implementation21 Jun 2010 Laura Balzano, Robert Nowak, Benjamin Recht

GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.

Matrix Completion
