Dictionary Learning

153 papers with code • 0 benchmarks • 6 datasets

Dictionary Learning is an important problem in multiple areas, ranging from computational neuroscience and machine learning to computer vision and image processing. The general goal is to find a good basis for given data. More formally, in the Dictionary Learning problem, also known as sparse coding, we are given samples of a random vector $y\in\mathbb{R}^n$ of the form $y=Ax$, where $A$ is some unknown matrix in $\mathbb{R}^{n\times m}$, called the dictionary, and $x$ is sampled from an unknown distribution over sparse vectors. The goal is to approximately recover the dictionary $A$.

Source: Polynomial-time tensor decompositions with sum-of-squares
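The model $y = Ax$ can be exercised end-to-end on synthetic data. The sketch below is a minimal alternating-minimization loop in the spirit of MOD (method of optimal directions): a crude greedy sparse-coding step followed by a least-squares dictionary update. All dimensions, the sparsity level, and the iteration count are illustrative assumptions, not taken from any particular paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, s = 20, 30, 800, 3  # signal dim, atoms, samples, sparsity (assumed)

# Ground-truth dictionary with unit-norm columns, and s-sparse codes
A_true = rng.standard_normal((n, m))
A_true /= np.linalg.norm(A_true, axis=0)
X = np.zeros((m, p))
for j in range(p):
    idx = rng.choice(m, size=s, replace=False)
    X[idx, j] = rng.standard_normal(s)
Y = A_true @ X  # observed samples y = A x, stacked as columns

def sparse_code(Y, A, s):
    """Crude coding step: keep the s atoms most correlated with each
    signal, then solve a least-squares fit on that support."""
    C = A.T @ Y
    X = np.zeros_like(C)
    top = np.argsort(-np.abs(C), axis=0)[:s]
    for j in range(Y.shape[1]):
        sup = top[:, j]
        coef, *_ = np.linalg.lstsq(A[:, sup], Y[:, j], rcond=None)
        X[sup, j] = coef
    return X

# Random initial dictionary
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)
init_res = np.linalg.norm(Y - A @ sparse_code(Y, A, s)) / np.linalg.norm(Y)

for _ in range(30):
    Xh = sparse_code(Y, A, s)
    A = Y @ np.linalg.pinv(Xh)             # MOD-style least-squares update
    A /= np.linalg.norm(A, axis=0) + 1e-12

final_res = np.linalg.norm(Y - A @ sparse_code(Y, A, s)) / np.linalg.norm(Y)
```

With enough samples and iterations this typically drives the fit residual well below its value at the random initialization; nothing here is guaranteed, since the coding step is greedy.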

Most implemented papers

Finding a sparse vector in a subspace: Linear sparsity using alternating directions

sunju/psv NeurIPS 2014

In this paper, we focus on a **planted sparse model** for the subspace: the target sparse vector is embedded in an otherwise random subspace.

Deep Roto-Translation Scattering for Object Classification

ftramer/Handcrafted-DP CVPR 2015

Dictionary learning algorithms and supervised deep convolutional networks have considerably improved the efficiency of predefined feature representations such as SIFT.

Unsupervised Feature Learning for Dense Correspondences across Scenes

chhshen/ufl 4 Jan 2015

We experimentally demonstrate that the learned features, together with our matching model, outperform state-of-the-art methods such as SIFT flow, coherency sensitive hashing and the recent deformable spatial pyramid matching methods, in terms of both accuracy and computational efficiency.

Multimodal Task-Driven Dictionary Learning for Image Classification

soheilb/multimodal_dictionary_learning 4 Feb 2015

Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms.
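Representing an input signal as a sparse linear combination of dictionary atoms is itself a small, self-contained computation. Below is a textbook Orthogonal Matching Pursuit sketch (not the multimodal task-driven method of this paper); the random dictionary and the chosen atom indices are illustrative assumptions.

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit: greedily select s atoms, refitting
    the coefficients by least squares after each selection."""
    r, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((10, 25))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = D[:, [3, 7]] @ np.array([1.5, -2.0])  # an exactly 2-sparse signal
x = omp(D, y, s=2)                        # often recovers the true support
```

For a noiseless signal that truly is $s$-sparse in an incoherent dictionary, two greedy steps usually suffice to find the support, after which the least-squares refit reproduces the signal exactly.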

Convergence radius and sample complexity of ITKM algorithms for dictionary learning

cristian-rusu-research/ADL-TOOLBOX 24 Mar 2015

In this work we show that iterative thresholding and K-means (ITKM) algorithms can recover a generating dictionary with $K$ atoms from noisy $S$-sparse signals up to an error $\tilde \varepsilon$, as long as the initialisation is within a convergence radius that is, up to a $\log K$ factor, inversely proportional to the dynamic range of the signals, and the sample size is proportional to $K \log K \tilde \varepsilon^{-2}$.

Complete Dictionary Recovery over the Sphere

sunju/dl_focm 26 Apr 2015

We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$ from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.

Linearized Kernel Dictionary Learning

hilikliming/LKDL-Method 18 Sep 2015

In this paper we present a new approach of incorporating kernels into dictionary learning.

Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

guanhuaw/mirtorch 19 Nov 2015

This paper exploits the ideas that drive algorithms such as K-SVD and investigates in detail efficient methods for aggregate-sparsity-penalized dictionary learning: the data are first approximated with a sum of sparse rank-one matrices (outer products), and the unknowns are then estimated with a block coordinate descent approach.
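The sum-of-outer-products structure can be sketched directly: each atom/coefficient pair is updated in turn against the residual left by all the others. The hard-thresholding coefficient step, the threshold value, and the problem sizes below are illustrative assumptions in the spirit of the paper's block coordinate descent, not its exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 16, 200, 8          # signal dim, samples, atoms (assumed sizes)
Y = rng.standard_normal((n, p))
lam = 0.5                     # sparsity threshold (assumed tuning value)

D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms
C = np.zeros((p, K))             # sparse coefficients, one column per atom

for _ in range(10):
    for k in range(K):
        # Residual with atom k's rank-one contribution removed
        R = Y - D @ C.T + np.outer(D[:, k], C[:, k])
        # Coefficient update: hard-threshold the correlations R^T d_k
        c = R.T @ D[:, k]
        c[np.abs(c) < lam] = 0.0
        C[:, k] = c
        # Atom update: direction of R c, renormalized to unit norm
        d = R @ c
        nrm = np.linalg.norm(d)
        if nrm > 0:
            D[:, k] = d / nrm
```

Writing the approximation as the sum of the $K$ outer products $d_k c_k^\top$ makes each subproblem cheap and essentially closed-form, which is the efficiency argument the paper develops.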

Dictionary Learning for Massive Matrix Factorization

arthurmensch/modl 3 May 2016

Sparse matrix factorization is a popular tool for obtaining interpretable data decompositions, which are also effective for data completion and denoising.

Binary Pattern Dictionary Learning for Gene Expression Representation in Drosophila Imaginal Discs

Borda/pyBPDL Asian Conference on Computer Vision 2016

The key part of our work is a binary pattern dictionary learning algorithm that takes a set of binary images and determines a set of patterns which can be used to represent the input images with a small error.