Search Results for author: Bahareh Tolooshams

Found 14 papers, 5 papers with code

Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models

no code implementations • 5 Jun 2023 • Alexander Lin, Bahareh Tolooshams, Yves Atchadé, Demba Ba

Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis.

Time Series · Time Series Analysis

Unrolled Compressed Blind-Deconvolution

no code implementations • 28 Sep 2022 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar

To reduce its computational and implementation cost, we propose a compression method that enables blind recovery from far fewer measurements than the full received signal in time.

Stable and Interpretable Unrolled Dictionary Learning

1 code implementation • 31 May 2021 • Bahareh Tolooshams, Demba Ba

The success of dictionary learning relies on access to a "good" initial estimate of the dictionary and the ability of the sparse coding step to provide an unbiased estimate of the code.

Dictionary Learning · Image Denoising +1
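The sparse coding step referenced in this abstract is typically solved with iterative shrinkage-thresholding (ISTA), the algorithm that unrolled dictionary-learning networks are built from. A minimal NumPy sketch with a random stand-in dictionary (the sizes, regularization weight, and iteration count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista(D, y, lam=0.01, n_iter=500):
    # Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by gradient steps of
    # size 1/L followed by soft-thresholding, with L = ||D||_2^2.
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]  # sparse ground-truth code
y = D @ x_true
x_hat = ista(D, y)                      # recovered code is sparse
```

Unrolling turns each iteration of this loop into one layer of a network whose weights (the dictionary) are then trained end to end.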

Gaussian Process Convolutional Dictionary Learning

no code implementations • 28 Mar 2021 • Andrew H. Song, Bahareh Tolooshams, Demba Ba

Convolutional dictionary learning (CDL), the problem of estimating shift-invariant templates from data, is typically conducted in the absence of a prior/structure on the templates.

Dictionary Learning · Gaussian Processes
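Shift-invariant generation, the forward model that CDL inverts, can be sketched as a sparse event train convolved with a template (the Gaussian-bump template and event locations below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth toy template (a Gaussian bump), peak value 1.0 at index 10.
template = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)

# Shift-invariant generation: the observed signal is the template
# convolved with a sparse train of events; CDL tries to recover the
# template from signals like this.
events = np.zeros(200)
events[[30, 95, 160]] = [1.0, -0.7, 1.3]
signal = np.convolve(events, template, mode="same")
```

A Gaussian-process prior, as in this paper, adds exactly the kind of smoothness structure the toy template above has by construction.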

On the convergence of group-sparse autoencoders

no code implementations • 13 Feb 2021 • Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, Demba Ba

Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models whose hidden layers are sparse.

Clustering
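At the operator level, group sparsity corresponds to a block soft-threshold that shrinks whole groups of coefficients by their ℓ2 norm; a minimal sketch (the groups and threshold below are illustrative, not from the paper):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    # Block shrinkage used in group-sparse models: each group is scaled
    # toward zero by its l2 norm, and zeroed out entirely if that norm
    # falls below the threshold lam.
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
y = group_soft_threshold(x, groups, lam=1.0)
# First group (norm 5) survives and is shrunk; second (norm ~0.14) is zeroed.
```

An autoencoder built from this operator keeps or discards entire groups of latent units at once, which is the structure whose convergence the paper analyzes.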

Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution

no code implementations • 22 Oct 2020 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar

We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
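The multichannel model can be sketched as several channels sharing one unknown filter, each driven by its own sparse source (the filter length, channel count, and sparsity level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Multichannel blind deconvolution: every channel shares one unknown
# filter h and has its own sparse source; only the channel outputs are
# observed, and both h and the sources must be recovered.
h = rng.standard_normal(5)              # shared unknown filter
sources = np.zeros((3, 100))            # 3 channels, sparse sources
for i in range(3):
    idx = rng.choice(100, size=4, replace=False)
    sources[i, idx] = rng.standard_normal(4)

channels = np.array([np.convolve(s, h, mode="full") for s in sources])
print(channels.shape)  # (3, 104)
```

The unfolded network in the paper learns to invert this map; the compressive variant observes only a subsampled version of each channel.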

Towards improving discriminative reconstruction via simultaneous dense and sparse coding

no code implementations • 16 Jun 2020 • Abiy Tasissa, Emmanouil Theodosis, Bahareh Tolooshams, Demba Ba

We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features.

Compressive Sensing · Dictionary Learning

RandNet: deep learning with compressed measurements of images

no code implementations • 25 Aug 2019 • Thomas Chang, Bahareh Tolooshams, Demba Ba

We introduce a class of neural networks, termed RandNet, for learning representations using compressed random measurements of data of interest, such as images.

Dictionary Learning
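The compressed-measurement idea behind RandNet can be illustrated with a fixed random Gaussian projection applied to every image before any learning takes place (the shapes and compression ratio below are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_measurements = 784, 100      # e.g. 28x28 images, ~8x compression
images = rng.random((32, n_pixels))      # stand-in batch of flattened images

# One fixed random sensing matrix is shared across the whole dataset;
# downstream learning sees only the short measurement vectors.
A = rng.standard_normal((n_measurements, n_pixels)) / np.sqrt(n_measurements)
compressed = images @ A.T
print(compressed.shape)  # (32, 100)
```

Storing and training on `compressed` instead of `images` is what makes the representation learning scale with the measurement count rather than the pixel count.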

Convolutional Dictionary Learning in Hierarchical Networks

no code implementations • 23 Jul 2019 • Javier Zazo, Bahareh Tolooshams, Demba Ba

Motivated by the empirically observed properties of scale and detail coefficients of images in the wavelet domain, we propose a hierarchical deep generative model of piecewise smooth signals that is a recursion across scales: the low pass scale coefficients at one layer are obtained by filtering the scale coefficients at the next layer, and adding a high pass detail innovation obtained by filtering a sparse vector.

Dictionary Learning
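The recursion across scales described above can be sketched directly: starting from coarse scale coefficients, each layer low-pass filters the coarser layer's coefficients and adds a high-pass filtered sparse innovation (the toy filters and sizes are illustrative assumptions):

```python
import numpy as np

def generate(scale_top, low_pass, high_pass, innovations):
    # Recursion across scales: the scale coefficients at one layer are a
    # filtered version of the coarser layer's coefficients, plus a
    # filtered sparse detail innovation.
    s = scale_top
    for v in innovations:                    # coarse -> fine
        s = np.convolve(s, low_pass) + np.convolve(v, high_pass)
    return s

rng = np.random.default_rng(0)
lp = np.array([0.5, 1.0, 0.5])               # toy low-pass filter
hp = np.array([-0.5, 1.0, -0.5])             # toy high-pass filter
top = rng.standard_normal(8)                 # coarsest scale coefficients

innovations = []
for sz in (8, 10, 12):                       # lengths match each scale
    v = np.zeros(sz)
    v[rng.integers(0, sz)] = 1.0             # sparse: a single spike
    innovations.append(v)

signal = generate(top, lp, hp, innovations)
print(signal.shape)  # (14,)
```

Each full convolution grows the signal by two samples here, mimicking how finer scales carry more coefficients in a wavelet-style hierarchy.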

Deep Residual Autoencoders for Expectation Maximization-inspired Dictionary Learning

1 code implementation • 18 Apr 2019 • Bahareh Tolooshams, Sourav Dey, Demba Ba

Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate Expectation-Maximization algorithm to develop autoencoders that enable the simultaneous training of the dictionary and regularization parameter (ReLU bias).

Dictionary Learning · Image Denoising
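One reading of the encoder implied by this construction is an unrolled sparse coding loop in which a ReLU with a trainable bias plays the role of the shrinkage/regularization parameter; a minimal NumPy sketch under that reading (sizes, values, and the nonnegativity restriction are illustrative assumptions):

```python
import numpy as np

def relu_encoder(D, y, bias, n_iter=300):
    # Unrolled nonnegative sparse coding: each iteration is a gradient
    # step followed by a ReLU whose bias acts as the regularization
    # parameter; in training, D and the bias are learned jointly.
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x + D.T @ (y - D @ x) / L - bias, 0.0)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(40)
x_true[[2, 11]] = [1.5, 0.8]          # nonnegative sparse code
y = D @ x_true
x_hat = relu_encoder(D, y, bias=0.01)
y_hat = D @ x_hat                     # decoder: apply the dictionary
```

The EM interpretation treats the code update (encoder) as the E-step and the dictionary/bias gradient update as the M-step.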

Scalable Convolutional Dictionary Learning with Constrained Recurrent Sparse Auto-encoders

1 code implementation • 12 Jul 2018 • Bahareh Tolooshams, Sourav Dey, Demba Ba

We demonstrate the ability of CRsAE to recover the underlying dictionary and characterize its sensitivity as a function of SNR.

Blind Source Separation · Dictionary Learning +1
