Dictionary Learning
153 papers with code • 0 benchmarks • 6 datasets
Dictionary Learning is an important problem in multiple areas, ranging from computational neuroscience and machine learning to computer vision and image processing. The general goal is to find a good basis for given data. More formally, in the Dictionary Learning problem, also known as sparse coding, we are given samples of a random vector $y\in\mathbb{R}^n$ of the form $y=Ax$, where $A$ is some unknown matrix in $\mathbb{R}^{n\times m}$, called the dictionary, and $x$ is sampled from an unknown distribution over sparse vectors. The goal is to approximately recover the dictionary $A$.
Source: Polynomial-time tensor decompositions with sum-of-squares
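The $y = Ax$ model above can be sketched end to end with scikit-learn's `DictionaryLearning`: generate sparse codes against a ground-truth dictionary, then learn a dictionary from the samples alone. The dimensions, sparsity level, and OMP settings below are illustrative assumptions, not values from any of the papers listed here.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n, m, k, n_samples = 16, 24, 3, 500  # signal dim, atoms, sparsity, sample count

# Ground-truth dictionary A with unit-norm columns (atoms).
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# Sparse codes x: k nonzero entries per sample.
X = np.zeros((m, n_samples))
for j in range(n_samples):
    support = rng.choice(m, size=k, replace=False)
    X[support, j] = rng.standard_normal(k)

Y = (A @ X).T  # samples y = A x, one per row as scikit-learn expects

# Learn a dictionary from the samples alone.
learner = DictionaryLearning(n_components=m, transform_algorithm="omp",
                             transform_n_nonzero_coefs=k, max_iter=20,
                             random_state=0)
codes = learner.fit_transform(Y)
A_hat = learner.components_.T  # learned atoms as columns, same shape as A
```

Note that the learned `A_hat` can at best match `A` up to permutation and sign of its columns, which is the usual identifiability caveat in dictionary recovery.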
Most implemented papers
Finding a sparse vector in a subspace: Linear sparsity using alternating directions
In this paper, we focus on a **planted sparse model** for the subspace: the target sparse vector is embedded in an otherwise random subspace.
Deep Roto-Translation Scattering for Object Classification
Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT.
Unsupervised Feature Learning for Dense Correspondences across Scenes
We experimentally demonstrate that the learned features, together with our matching model, outperform state-of-the-art methods such as SIFT flow, coherency sensitive hashing and the recent deformable spatial pyramid matching methods, both in terms of accuracy and computational efficiency.
Multimodal Task-Driven Dictionary Learning for Image Classification
Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms.
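The "sparse linear combination of dictionary atoms" representation mentioned here can be illustrated with scikit-learn's `SparseCoder` over a fixed dictionary. The random dictionary, the two-atom signal, and the OMP settings are made-up for the sketch; the paper's multimodal, task-driven learning itself is not reproduced.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(1)
n, m = 8, 16
D = rng.standard_normal((m, n))            # dictionary: one atom per row
D /= np.linalg.norm(D, axis=1, keepdims=True)

y = 0.7 * D[2] - 1.3 * D[9]                # signal built from two atoms

# Orthogonal Matching Pursuit with a 2-nonzero budget.
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=2)
x = coder.transform(y.reshape(1, -1))[0]   # sparse code for y

# x has at most 2 nonzeros; with a low-coherence random dictionary,
# OMP typically selects the two generating atoms.
```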
Convergence radius and sample complexity of ITKM algorithms for dictionary learning
In this work we show that iterative thresholding and K-means (ITKM) algorithms can recover a generating dictionary with K atoms from noisy $S$-sparse signals up to an error $\tilde \varepsilon$ as long as the initialisation is within a convergence radius, that is up to a $\log K$ factor inversely proportional to the dynamic range of the signals, and the sample size is proportional to $K \log K \tilde \varepsilon^{-2}$.
Complete Dictionary Recovery over the Sphere
We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.
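A minimal numpy sketch of the complete (square, invertible) model, with an assumed Bernoulli-Gaussian sparsity pattern: because $\mathbf A_0$ is invertible, the sparse rows of $\mathbf X_0$ span the same row space as $\mathbf Y$, which is the structural fact that recovery over the sphere exploits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, theta = 10, 200, 0.1  # square dictionary size, samples, sparsity level

A0 = rng.standard_normal((n, n))  # complete: square and (a.s.) invertible
# Bernoulli-Gaussian sparse coefficients: each entry nonzero w.p. theta.
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0

# Since A0 is invertible, applying its inverse to Y recovers X0 exactly;
# the algorithmic challenge is doing this without knowing A0.
X_back = np.linalg.solve(A0, Y)
```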
Linearized Kernel Dictionary Learning
In this paper we present a new approach to incorporating kernels into dictionary learning.
Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems
This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns.
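The sum-of-outer-products idea can be sketched as block coordinate descent over rank-one terms $d_j c_j^T$: subtract the other terms, hard-threshold the coefficients (an $\ell_0$-type penalty), and renormalize the atom. The dimensions, threshold value, and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, J = 12, 80, 6        # signal dim, number of signals, number of atoms
Y = rng.standard_normal((n, p))
lam = 0.5                  # hard-threshold level (hypothetical value)

D = rng.standard_normal((n, J))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
C = np.zeros((p, J))            # sparse coefficient columns

# Cycle block coordinate descent over the rank-one terms d_j c_j^T.
for _ in range(10):
    for j in range(J):
        # Residual with the j-th outer product removed.
        R = Y - D @ C.T + np.outer(D[:, j], C[:, j])
        # Coefficient update: hard-threshold R^T d_j at lam.
        b = R.T @ D[:, j]
        C[:, j] = b * (np.abs(b) >= lam)
        # Atom update: normalized R c_j (keep old atom if c_j is all zero).
        h = R @ C[:, j]
        norm = np.linalg.norm(h)
        if norm > 0:
            D[:, j] = h / norm
```

Each coefficient and atom update minimizes the penalized objective over its own block with the others fixed, so the fit $\|Y - D C^T\|_F$ never exceeds its starting value.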
Dictionary Learning for Massive Matrix Factorization
Sparse matrix factorization is a popular tool for obtaining interpretable data decompositions, which are also effective for data completion and denoising.
Binary Pattern Dictionary Learning for Gene Expression Representation in Drosophila Imaginal Discs
The key part of our work is a binary pattern dictionary learning algorithm that takes a set of binary images and determines a set of patterns which can be used to represent the input images with a small error.