no code implementations • 30 Nov 2023 • Jonathan Huml, Abiy Tasissa, Demba Ba
We propose an autoencoder architecture (WLSC) whose latent representations are implicitly and locally organized for spectral clustering through a Laplacian quadratic form of a bipartite graph. This organization generates a diverse set of artificial receptive fields that match primate V1 data as faithfully as recent contrastive frameworks, such as Local Low Dimensionality (LLD) \citep{lld}, that discard sparse dictionary learning.
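As a rough illustration of the regularizer this alludes to (a minimal sketch with hypothetical names, not the authors' implementation), a Laplacian quadratic form penalizes latent codes that differ across strongly connected nodes of a graph:

```python
import numpy as np

def laplacian_quadratic_penalty(Z, W):
    """Laplacian quadratic form: sum_ij W_ij ||z_i - z_j||^2 = 2 tr(Z^T L Z).

    Z : (n, d) latent codes, one row per graph node.
    W : (n, n) symmetric nonnegative affinity matrix (e.g., of a bipartite graph).
    """
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    return np.trace(Z.T @ L @ Z)
```

Minimizing such a term alongside a reconstruction loss encourages the codes of strongly connected nodes to agree, which is what implicitly organizes the latent space for spectral clustering.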
1 code implementation • 30 Sep 2023 • Alexander Lin, Demba Ba
This paper considers clustered multi-task compressive sensing, a hierarchical model that solves multiple compressive sensing tasks by finding clusters of tasks that leverage shared information to mutually improve signal reconstruction.
no code implementations • 5 Jun 2023 • Alexander Lin, Bahareh Tolooshams, Yves Atchadé, Demba Ba
Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis.
no code implementations • 29 May 2023 • Emmanouil Theodosis, Karim Helwani, Demba Ba
Employing equivariance in neural networks leads to greater parameter efficiency and improved generalization performance through the encoding of domain knowledge in the architecture; however, the majority of existing approaches require an a priori specification of the desired symmetries.
no code implementations • 22 Feb 2023 • Jonathan Huml, Abiy Tasissa, Demba Ba
The classical sparse coding model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data.
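For concreteness, inference in this classical model solves a sparse regression problem; below is a minimal ISTA sketch (illustrative penalty weight and iteration count, not the paper's algorithm):

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=100):
    """Solve min_z 0.5 * ||x - D z||^2 + lam * ||z||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = z + step * D.T @ (x - D @ z)       # gradient step on the data term
        z = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
    return z
```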
1 code implementation • 16 Nov 2022 • Emmanouil Theodosis, Demba Ba
Deep neural networks lack straightforward ways to incorporate domain knowledge and are notoriously regarded as black boxes.
no code implementations • 28 Sep 2022 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar
To reduce its computational and implementation cost, we propose a compression method that enables blind recovery from far fewer measurements than the full received signal in time.
1 code implementation • 25 Feb 2022 • Alexander Lin, Andrew H. Song, Berkin Bilgic, Demba Ba
Sparse Bayesian learning (SBL) is a powerful framework for tackling the sparse coding problem.
1 code implementation • 10 Oct 2021 • Alexander Lin, Andrew H. Song, Demba Ba
State-of-the-art approaches for clustering high-dimensional data utilize deep auto-encoder architectures.
1 code implementation • 31 May 2021 • Bahareh Tolooshams, Demba Ba
The success of dictionary learning relies on access to a "good" initial estimate of the dictionary and the ability of the sparse coding step to provide an unbiased estimate of the code.
no code implementations • 21 May 2021 • Alexander Lin, Andrew H. Song, Berkin Bilgic, Demba Ba
The most popular inference algorithms for sparse Bayesian learning (SBL) exhibit prohibitively large computational costs for high-dimensional problems due to the need to maintain a large covariance matrix.
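Concretely, the bottleneck is the dense posterior covariance that classical EM-style updates form at every iteration; this schematic of one standard SBL update (not the algorithm proposed here) makes the $O(d^3)$ cost explicit:

```python
import numpy as np

def sbl_em_step(A, y, gamma, noise_var):
    """One classical SBL EM update; forming Sigma is the O(d^3) bottleneck.

    A : (n, d) dictionary, y : (n,) measurements, gamma : (d,) prior variances.
    """
    Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))  # d x d covariance
    mu = Sigma @ A.T @ y / noise_var           # posterior mean
    gamma_new = mu ** 2 + np.diag(Sigma)       # EM update of the prior variances
    return mu, gamma_new
```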
no code implementations • 28 Apr 2021 • Abiy Tasissa, Pranay Tankala, Demba Ba
Sparse manifold learning algorithms combine techniques from manifold learning and sparse optimization to learn features that can be used for downstream tasks.
no code implementations • 28 Mar 2021 • Andrew H. Song, Bahareh Tolooshams, Demba Ba
Convolutional dictionary learning (CDL), the problem of estimating shift-invariant templates from data, is typically conducted in the absence of a prior/structure on the templates.
no code implementations • 13 Feb 2021 • Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, Demba Ba
Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models whose hidden layers are sparse.
1 code implementation • 3 Dec 2020 • Pranay Tankala, Abiy Tasissa, James M. Murphy, Demba Ba
We theoretically analyze the proposed program by relating the weighted $\ell_1$ penalty in KDS to a weighted $\ell_0$ program.
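For reference, the two penalties being related are, definitionally (with weights $w_i \ge 0$):

```latex
\sum_i w_i \lvert z_i \rvert \;\;\text{(weighted } \ell_1\text{)}
\qquad \text{vs.} \qquad
\sum_i w_i \,\mathbf{1}\{z_i \neq 0\} \;\;\text{(weighted } \ell_0\text{)}
```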
no code implementations • 22 Oct 2020 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
no code implementations • 16 Jun 2020 • Abiy Tasissa, Emmanouil Theodosis, Bahareh Tolooshams, Demba Ba
We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features.
no code implementations • 25 Aug 2019 • Thomas Chang, Bahareh Tolooshams, Demba Ba
We introduce a class of neural networks, termed RandNet, for learning representations using compressed random measurements of data of interest, such as images.
no code implementations • 23 Jul 2019 • Javier Zazo, Bahareh Tolooshams, Demba Ba
Motivated by the empirically observed properties of scale and detail coefficients of images in the wavelet domain, we propose a hierarchical deep generative model of piecewise smooth signals that is a recursion across scales: the low-pass scale coefficients at one layer are obtained by filtering the scale coefficients at the next layer and adding a high-pass detail innovation obtained by filtering a sparse vector.
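A minimal synthesis sketch of this recursion (hypothetical names; filters assumed shared across layers and all vectors assumed the same length):

```python
import numpy as np

def synthesize(top_scale, low_pass, high_pass, sparse_details):
    """Recurse from coarse to fine: filter the coarser scale coefficients,
    then add a high-pass-filtered sparse detail innovation at each layer."""
    s = top_scale
    for d in sparse_details:                   # sparse innovations, coarse to fine
        s = (np.convolve(s, low_pass, mode="same")
             + np.convolve(d, high_pass, mode="same"))
    return s
```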
no code implementations • 22 Jul 2019 • Andrew H. Song, Francisco J. Flores, Demba Ba
Given a continuous-time signal that can be modeled as the superposition of localized, time-shifted events from multiple sources, the goal of Convolutional Dictionary Learning (CDL) is to identify the locations of the events (via Convolutional Sparse Coding, CSC) and to learn the template for each source (via the Convolutional Dictionary Update, CDU).
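In discrete time, the generative model underlying CDL is a sum of convolutions; a minimal forward-model sketch (hypothetical names, not the paper's code):

```python
import numpy as np

def superpose(templates, event_trains):
    """Observed signal = sum over sources of (template * sparse event train).
    CSC recovers the event trains; the CDU learns the templates."""
    y = np.zeros(len(event_trains[0]))
    for h, z in zip(templates, event_trains):
        y += np.convolve(z, h, mode="same")    # events of one source, template h
    return y
```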
1 code implementation • ICML 2020 • Bahareh Tolooshams, Andrew H. Song, Simona Temereanca, Demba Ba
We introduce a class of auto-encoder neural networks tailored to data from the natural exponential family (e.g., count data).
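As a concrete instance for count data, a Poisson decoder with the canonical exponential link gives the reconstruction loss below (a sketch under that assumption, not the paper's full architecture):

```python
import numpy as np

def poisson_nll(y, D, z):
    """Negative log-likelihood of counts y under rate = exp(D z),
    up to the constant log(y!) term."""
    eta = D @ z                          # natural parameter
    return np.sum(np.exp(eta) - y * eta)
```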
1 code implementation • 18 Apr 2019 • Bahareh Tolooshams, Sourav Dey, Demba Ba
Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate Expectation-Maximization algorithm to develop autoencoders that enable the simultaneous training of the dictionary and regularization parameter (ReLU bias).
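A schematic of such an encoder, assuming nonnegative codes so that the soft threshold reduces to a ReLU with a trainable bias (illustrative step size and iteration count):

```python
import numpy as np

def encoder(x, D, lam, n_iter=20):
    """Unrolled ISTA encoder: the regularization parameter lam enters as a
    ReLU bias, so it can be trained jointly with the dictionary D."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = np.maximum(z + step * D.T @ (x - D @ z) - step * lam, 0.0)
    return z
```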
no code implementations • 23 Oct 2018 • Alexander Lin, Yingzhuo Zhang, Jeremy Heng, Stephen A. Allsop, Kay M. Tye, Pierre E. Jacob, Demba Ba
We propose a general statistical framework for clustering multiple time series that exhibit nonlinear dynamics into an a-priori-unknown number of sub-groups.
1 code implementation • 12 Jul 2018 • Bahareh Tolooshams, Sourav Dey, Demba Ba
We demonstrate the ability of CRsAE to recover the underlying dictionary and characterize its sensitivity as a function of SNR.
no code implementations • 5 Jul 2018 • Demba Ba
A recent line of work shows that a deep neural network with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable.
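Under this view, each layer is a single-iteration approximation of one sparse coding step; a minimal sketch of the resulting forward pass (hypothetical names):

```python
import numpy as np

def forward(x, dictionaries, biases):
    """Cascade of one-step nonnegative sparse coding approximations:
    z_l = ReLU(D_l^T z_{l-1} - b_l), i.e., a feedforward ReLU network."""
    z = x
    for D, b in zip(dictionaries, biases):
        z = np.maximum(D.T @ z - b, 0.0)
    return z
```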
no code implementations • 18 May 2018 • Leon Chlon, Andrew Song, Sandya Subramanian, Hugo Soulat, John Tauber, Demba Ba, Michael Prerau
Electroencephalographic (EEG) monitoring of neural activity is widely used for sleep disorder diagnostics and research.
no code implementations • NeurIPS 2012 • Demba Ba, Behtash Babadi, Patrick Purdon, Emery Brown
We consider the problem of recovering a sequence of vectors, $(x_k)_{k=0}^K$, for which the increments $x_k-x_{k-1}$ are $S_k$-sparse (with $S_k$ typically smaller than $S_1$), based on linear measurements $(y_k = A_k x_k + e_k)_{k=1}^K$, where $A_k$ and $e_k$ denote the measurement matrix and noise, respectively.
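One natural convex program for this model, with $x_0$ given and $\lambda > 0$ (a sketch consistent with the setup, not necessarily the paper's exact formulation):

```latex
\min_{x_1,\dots,x_K}\; \sum_{k=1}^{K} \tfrac{1}{2}\,\lVert y_k - A_k x_k \rVert_2^2
\;+\; \lambda \sum_{k=1}^{K} \lVert x_k - x_{k-1} \rVert_1
```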