Tensor Decomposition
128 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Tensor Decomposition.
Libraries
Use these libraries to find Tensor Decomposition models and implementations.

Most implemented papers
Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion
Tucker decomposition is a cornerstone of modern machine learning on tensorial data, and has attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion.
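The Tucker decomposition mentioned above factorizes a tensor into a small core multiplied by a factor matrix along each mode. As a minimal, non-iterative sketch (via higher-order SVD, not the Bayesian method of this paper; all function names are illustrative):

```python
import numpy as np

def unfold(X, mode):
    """Matricize X along the given mode (mode-n unfolding)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Higher-order SVD: a simple Tucker decomposition.

    Returns a core G and per-mode factors U_k with
    X ~ G x_1 U_1 x_2 U_2 x_3 U_3 (multilinear product)."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each unfolding give the factors.
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core: project X onto the factor subspaces.
    G = X
    for mode, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)
    return G, factors

def tucker_reconstruct(G, factors):
    X = G
    for mode, U in enumerate(factors):
        X = np.moveaxis(np.tensordot(U, X, axes=(1, mode)), 0, mode)
    return X

# Exact-rank example: a (4, 5, 6) tensor with multilinear rank (2, 2, 2).
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((d, 2)) for d in (4, 5, 6)]
X = tucker_reconstruct(G0, Us)

G, factors = hosvd(X, ranks=(2, 2, 2))
err = np.linalg.norm(X - tucker_reconstruct(G, factors)) / np.linalg.norm(X)
```

For a tensor with exact multilinear rank, as here, the relative error `err` is near machine precision; on noisy data HOSVD gives a quasi-optimal (not optimal) low-rank fit.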
Tensor Ring Decomposition
In this paper, we introduce a fundamental tensor decomposition model that represents a large high-order tensor by circular multilinear products over a sequence of low-dimensional cores. This structure can be graphically interpreted as a cyclic interconnection of 3rd-order tensors, and is thus termed tensor ring (TR) decomposition.
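The "circular multilinear products" above mean each entry is the trace of a product of core slices, one per mode. A minimal sketch of contracting TR cores back into the full tensor (the example shapes and ranks are illustrative, not from the paper):

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract a cyclic chain of 3rd-order cores into the full tensor.

    Core k has shape (r_{k-1}, n_k, r_k); the closing trace over the
    boundary ranks (r_d = r_0) is what makes it a ring rather than a train."""
    result = cores[0]  # (r_0, n_1, r_1), rank legs kept open
    for G in cores[1:]:
        # Contract the right rank leg with the next core's left rank leg.
        result = np.einsum('a...b,bnc->a...nc', result, G)
    # Close the ring by tracing out the matching boundary ranks.
    return np.einsum('a...a->...', result)

# Hypothetical example: a (3, 4, 5) tensor from TR-ranks (2, 2, 2).
rng = np.random.default_rng(1)
ranks, dims = [2, 2, 2], [3, 4, 5]
cores = [rng.standard_normal((ranks[k - 1], dims[k], ranks[k]))
         for k in range(3)]
X = tr_reconstruct(cores)
```

Entry-wise, `X[i, j, k]` equals `trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])`, matching the cyclic product the paper describes.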
Shape Constrained Tensor Decompositions using Sparse Representations in Over-Complete Libraries
We consider $N$-way data arrays and low-rank tensor factorizations where the time mode is coded as a sparse linear combination of temporal elements from an over-complete library.
Sublinear Time Orthogonal Tensor Decomposition
We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor.
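The baseline these sublinear algorithms speed up is the classical tensor power method for orthogonally decomposable tensors: iterate the map v ← T(I, v, v) and normalize. A minimal dense sketch (this reads the full tensor, unlike the paper's algorithms):

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Recover one eigenpair of a symmetric 3-way tensor via
    the power map v <- T(I, v, v), followed by normalization."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Orthogonally decomposable example: T = 3 * e1^(x)3 + 1 * e2^(x)3 in R^3.
lams = np.array([3.0, 1.0])
V = np.eye(3)[:, :2]  # two orthonormal components
T = sum(l * np.einsum('i,j,k->ijk', w, w, w) for l, w in zip(lams, V.T))

lam, v = tensor_power_iteration(T)
```

From a generic start, the iteration converges to one of the components, and the returned pair satisfies the tensor eigen-equation T(I, v, v) = lam * v.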
Introduction to Tensor Decompositions and their Applications in Machine Learning
Tensors are multidimensional arrays of numerical values and therefore generalize matrices to multiple dimensions.
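A concrete illustration of this view: a 3rd-order tensor is a 3-way array, and each mode-n unfolding flattens it back into an ordinary matrix, which is how most matrix tools are applied to tensors (the array values here are arbitrary):

```python
import numpy as np

# A 3rd-order tensor is a 3-way array; matrices are the order-2 case.
X = np.arange(24).reshape(2, 3, 4)

# Mode-n unfolding: bring mode n to the front, flatten the rest.
shapes = []
for mode in range(X.ndim):
    unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    shapes.append(unfolding.shape)
```

Each unfolding has one mode as its rows and all remaining modes flattened as columns: here (2, 12), (3, 8), and (4, 6).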
Fitting Low-Rank Tensors in Constant Time
Then, we show that the residual error of the Tucker decomposition of $\tilde{X}$ is sufficiently close to that of $X$ with high probability.
Legendre Decomposition for Tensors
We present a novel nonnegative tensor decomposition method, called Legendre decomposition, which factorizes an input tensor into a multiplicative combination of parameters.
CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions
However, in addition to the text data itself, we often have additional covariates associated with individual corpus documents---e.g., the demographics of the author, or the time and venue of publication---and we would like the embedding to naturally capture this information.
Learning Binary Latent Variable Models: A Tensor Eigenpair Approach
Latent variable models with hidden binary units appear in various applications.
Tensor Decomposition for Compressing Recurrent Neural Network
In machine learning, the Recurrent Neural Network (RNN) has become a popular architecture for sequential data modeling.
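The idea behind this line of work is to reshape a large weight matrix into a higher-order tensor and store it in a compact decomposed format. A minimal sketch using tensor-train (TT-SVD) on a single weight matrix, assuming illustrative sizes and ranks (not the paper's architecture, which embeds such factorizations inside the RNN layers):

```python
import numpy as np

def tt_svd(X, max_rank):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD)."""
    dims = X.shape
    cores, r = [], 1
    C = X.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r_new = min(max_rank, len(S))
        cores.append(U[:, :r_new].reshape(r, dims[k], r_new))
        # Carry the remainder forward, folding in the next mode.
        C = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        r = r_new
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=(-1, 0))
    return X.reshape(X.shape[1:-1])

# Hypothetical example: a 16x16 recurrent weight matrix, reshaped
# as a (4, 4, 4, 4) tensor so every mode can be compressed.
rng = np.random.default_rng(2)
W = rng.standard_normal((16, 16))

cores = tt_svd(W.reshape(4, 4, 4, 4), max_rank=16)  # no truncation: exact
W_hat = tt_reconstruct(cores).reshape(16, 16)

low = tt_svd(W.reshape(4, 4, 4, 4), max_rank=2)     # lossy but compact
n_params = sum(G.size for G in low)                 # 48 values vs 256 dense
```

With untruncated ranks the reconstruction is exact; truncating to TT-rank 2 stores 48 parameters instead of the dense 256, at the cost of approximation error. That parameter saving is the compression mechanism this paper exploits.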