Search Results for author: Majid Janzamin

Found 10 papers, 0 papers with code

Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods

no code implementations · 28 Jun 2015 · Majid Janzamin, Hanie Sedghi, Anima Anandkumar

We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks.

Tensor Decomposition

Score Function Features for Discriminative Learning

no code implementations · 19 Dec 2014 · Majid Janzamin, Hanie Sedghi, Anima Anandkumar

In this paper, we consider a novel class of matrix and tensor-valued features, which can be pre-trained using unlabeled samples.

Score Function Features for Discriminative Learning: Matrix and Tensor Framework

no code implementations · 9 Dec 2014 · Majid Janzamin, Hanie Sedghi, Anima Anandkumar

In this paper, we consider a novel class of matrix and tensor-valued features, which can be pre-trained using unlabeled samples.

Provable Tensor Methods for Learning Mixtures of Generalized Linear Models

no code implementations · 9 Dec 2014 · Hanie Sedghi, Majid Janzamin, Anima Anandkumar

In contrast, we present a tensor decomposition method which is guaranteed to correctly recover the parameters.

General Classification · Tensor Decomposition

Analyzing Tensor Power Method Dynamics in Overcomplete Regime

no code implementations · 6 Nov 2014 · Anima Anandkumar, Rong Ge, Majid Janzamin

We present a novel analysis of the dynamics of tensor power iterations in the overcomplete regime where the tensor CP rank is larger than the input dimension.
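The iteration the abstract refers to is the standard tensor power method: repeatedly contract a symmetric third-order tensor against the current vector and renormalize. The following is an illustrative sketch of that primitive (not the paper's analysis), with all names and the toy example chosen here for demonstration:

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Power iteration on a symmetric 3rd-order tensor T of shape (d, d, d).

    Repeats u <- T(I, u, u) / ||T(I, u, u)|| to converge to a robust
    eigenvector of T. In the overcomplete regime (CP rank > d) the
    dynamics of this map are more delicate, which is what the paper
    analyzes; this sketch only shows the basic iteration.
    """
    d = T.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = np.einsum('ijk,j,k->i', T, u, u)  # contraction T(I, u, u)
        u = v / np.linalg.norm(v)
    return u

# Toy example: for a rank-1 tensor T = a ⊗ a ⊗ a with unit a,
# the iteration converges to a in a single step.
a = np.array([3.0, 4.0]) / 5.0
T = np.einsum('i,j,k->ijk', a, a, a)
u = tensor_power_iteration(T)
```

For this rank-1 toy tensor, `T(I, u, u) = (a·u)² a`, so the update maps any starting point with `a·u ≠ 0` directly onto `a`; the overcomplete case studied in the paper has many interacting components and no such one-step collapse.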

Sample Complexity Analysis for Learning Overcomplete Latent Variable Models through Tensor Methods

no code implementations · 3 Aug 2014 · Animashree Anandkumar, Rong Ge, Majid Janzamin

In the unsupervised setting, we use a simple initialization algorithm based on the SVD of the tensor slices, and provide guarantees under the stricter condition $k\le \beta d$ (where the constant $\beta$ can be larger than $1$); under this condition, the tensor method recovers the components in running time polynomial in the dimension but exponential in $\beta$.

Tensor Decomposition
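The "SVD of the tensor slices" initialization can be sketched as follows: contract the third mode of the tensor against a random vector to collapse it to a matrix, then use that matrix's top singular vector as a starting point. This is an illustrative simplification, not the paper's exact procedure, and the function name is hypothetical:

```python
import numpy as np

def svd_slice_init(T, seed=0):
    """Initialization heuristic sketch: form a random combination of the
    frontal slices of T, i.e. M = sum_k theta[k] * T[:, :, k], and return
    the top left singular vector of M as an initial component estimate.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(T.shape[2])
    M = np.einsum('ijk,k->ij', T, theta)  # collapse 3rd mode against theta
    U, _, _ = np.linalg.svd(M)
    return U[:, 0]

# Toy check: for T = a ⊗ a ⊗ a, M = (theta·a) a aᵀ, so the top singular
# vector is ±a.
a = np.array([1.0, 2.0, 2.0]) / 3.0
T = np.einsum('i,j,k->ijk', a, a, a)
u0 = svd_slice_init(T)
```

The sign ambiguity (±a) is inherent to the SVD and harmless here, since the initializer is only meant to land in the basin of attraction of a component.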

Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-$1$ Updates

no code implementations · 21 Feb 2014 · Animashree Anandkumar, Rong Ge, Majid Janzamin

In this paper, we provide local and global convergence guarantees for recovering CP (Candecomp/Parafac) tensor decomposition.

Tensor Decomposition
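The alternating rank-$1$ update primitive from the title can be sketched as a cycle over the three modes of the tensor: each step contracts the tensor against the other two factors and renormalizes. This is a minimal illustrative sketch of that primitive on a single component, not the paper's full decomposition procedure or its non-orthogonal analysis:

```python
import numpy as np

def rank1_als(T, n_iters=50, seed=0):
    """Alternating rank-1 updates on a 3rd-order tensor T of shape
    (d1, d2, d3): cycle through the modes, contracting T against the
    current estimates of the other two factors and normalizing, then
    read off the scale lam = T(u, v, w)."""
    rng = np.random.default_rng(seed)
    d1, d2, d3 = T.shape
    u = rng.standard_normal(d1); u /= np.linalg.norm(u)
    v = rng.standard_normal(d2); v /= np.linalg.norm(v)
    w = rng.standard_normal(d3); w /= np.linalg.norm(w)
    for _ in range(n_iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w

# Toy check: a single rank-1 tensor 2 · a ⊗ b ⊗ c is recovered exactly
# (up to the usual sign ambiguity in the factors).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
T = 2.0 * np.einsum('i,j,k->ijk', a, b, c)
lam, u, v, w = rank1_als(T)
```

For an actual CP decomposition one runs such updates for all components jointly (or with deflation); the paper's contribution is proving when these updates converge for non-orthogonal components.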

When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

no code implementations · NeurIPS 2013 · Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade

This set of higher-order expansion conditions allows for overcomplete models, and requires the existence of a perfect matching from latent topics to higher-order observed words.

Topic Models

High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models

no code implementations · 5 Nov 2012 · Majid Janzamin, Animashree Anandkumar

Fitting high-dimensional data involves a delicate tradeoff between faithful representation and the use of sparse models.

