no code implementations • 2 Jan 2025 • Jiayun Wang, Oleksii Ostras, Masashi Sode, Bahareh Tolooshams, Zongyi Li, Kamyar Azizzadenesheli, Gianmarco Pinton, Anima Anandkumar
Lung ultrasound is a growing clinical modality for diagnosing and monitoring acute and chronic lung diseases, owing to its low cost and accessibility.
no code implementations • 5 Oct 2024 • Armeet Singh Jatyani, Jiayun Wang, Aditi Chandrashekar, Zihui Wu, Miguel Liu-Schiaffini, Bahareh Tolooshams, Anima Anandkumar
Our unified model adapts to varying measurement undersampling patterns and imaging resolutions, offering a single, versatile solution for flexible and reliable clinical MRI.
1 code implementation • 4 Oct 2024 • Rayhan Zirvi, Bahareh Tolooshams, Anima Anandkumar
DiffStateGrad is a module that can be added to a wide range of diffusion-based inverse solvers to keep the diffusion process close to the prior manifold and to filter out artifact-inducing components.
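For intuition, the key step can be read as projecting the measurement-guidance gradient onto a low-rank subspace computed from the current diffusion state. A minimal sketch of that idea, assuming an SVD-based projection on a 2-D state; the function name, fixed rank, and shapes are illustrative, not the paper's API:

```python
import torch

def project_gradient(grad, x_t, rank=20):
    """Project a measurement-guidance gradient onto a low-rank subspace
    of the current diffusion state x_t. Hedged sketch: the fixed `rank`,
    2-D inputs, and the function name are illustrative.

    grad, x_t: (H, W) tensors, e.g. one image channel.
    """
    # Basis from the diffusion state itself (top right-singular vectors).
    _, _, Vh = torch.linalg.svd(x_t, full_matrices=False)
    V = Vh[:rank].T                  # (W, rank), requires rank <= min(H, W)
    # Keep only the gradient components lying in that subspace.
    return grad @ V @ V.T
```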
no code implementations • 5 Sep 2024 • Freya Shah, Taylor L. Patti, Julius Berner, Bahareh Tolooshams, Jean Kossaifi, Anima Anandkumar
In this manuscript, we use FNOs to model the evolution of random quantum spin systems, chosen for their representative quantum dynamics and minimal symmetry.
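For readers unfamiliar with FNOs, the core building block is a Fourier layer: transform to frequency space, apply a learned complex multiplication to a truncated set of low modes, and transform back. A generic single-layer sketch (hyperparameters illustrative, not the paper's spin-system model):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One 1-D Fourier layer, the building block of an FNO."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # number of low Fourier modes kept (<= n//2 + 1)
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):          # x: (batch, channels, n)
        x_ft = torch.fft.rfft(x)   # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        # Learned complex multiplication on the retained low modes.
        out_ft[..., :self.modes] = torch.einsum(
            "bci,coi->boi", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space
```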
no code implementations • 5 Jun 2023 • Alexander Lin, Bahareh Tolooshams, Yves Atchadé, Demba Ba
Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis.
no code implementations • 28 Sep 2022 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar
To reduce its computational and implementation cost, we propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal.
no code implementations • 9 Dec 2021 • Bahareh Tolooshams, Kazuhito Koishida
Deep learning-based speech enhancement has shown unprecedented performance in recent years.
1 code implementation • 31 May 2021 • Bahareh Tolooshams, Demba Ba
The success of dictionary learning relies on access to a "good" initial estimate of the dictionary and the ability of the sparse coding step to provide an unbiased estimate of the code.
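For context, the sparse coding step is commonly solved by iterative soft thresholding, and it is precisely the shrinkage in that threshold that biases the code estimate. A textbook ISTA sketch, with illustrative parameter names:

```python
import numpy as np

def ista(x, D, lam, step, n_iter=100):
    """Sparse coding by ISTA: argmin_z 0.5*||x - D z||^2 + lam*||z||_1.
    Take step <= 1 / ||D||^2 for convergence."""
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        r = z + step * D.T @ (x - D @ z)                          # gradient step
        z = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # soft threshold
    return z
```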
no code implementations • 28 Mar 2021 • Andrew H. Song, Bahareh Tolooshams, Demba Ba
Convolutional dictionary learning (CDL), the problem of estimating shift-invariant templates from data, is typically conducted in the absence of a prior/structure on the templates.
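The generative model behind CDL is worth making concrete: a signal is a sum of short templates convolved with sparse activation codes, so each template can appear at any shift. A toy synthesis sketch (sizes and counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
templates = [rng.normal(size=11) for _ in range(3)]  # shift-invariant filters
codes = [np.zeros(500) for _ in range(3)]            # sparse activation trains
for z in codes:
    z[rng.choice(500, size=5, replace=False)] = rng.normal(size=5)
# Convolutional synthesis: the signal is a sum of filtered sparse codes.
x = sum(np.convolve(z, d, mode="same") for z, d in zip(codes, templates))
```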
no code implementations • 13 Feb 2021 • Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, Demba Ba
Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models whose hidden layers are sparse.
no code implementations • 22 Oct 2020 • Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar
We propose a learned-structured unfolding neural network for compressive sparse multichannel blind deconvolution.
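To unpack "unfolding": a fixed number of iterations of a sparse-recovery algorithm are rewritten as network layers whose operators and thresholds become trainable. A generic LISTA-style sketch, assuming a non-convolutional model for brevity; dimensions and initialization are illustrative:

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """Each unrolled iteration becomes a layer with learnable weights
    and threshold (generic sketch, not the paper's exact architecture)."""
    def __init__(self, m, n, n_layers=5):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n, m))   # learned analysis op
        self.S = nn.Parameter(torch.eye(n))               # learned mixing op
        self.theta = nn.Parameter(0.1 * torch.ones(n))    # learned threshold
        self.n_layers = n_layers

    def forward(self, y):                 # y: (batch, m) measurements
        z = torch.zeros(y.size(0), self.W.size(0), device=y.device)
        for _ in range(self.n_layers):    # unrolled iterations = layers
            r = y @ self.W.T + z @ self.S.T
            z = torch.sign(r) * torch.relu(r.abs() - self.theta)
        return z
```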
1 code implementation • 16 Jun 2020 • Abiy Tasissa, Emmanouil Theodosis, Bahareh Tolooshams, Demba Ba
We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features.
1 code implementation • 30 Jan 2020 • Bahareh Tolooshams, Ritwik Giri, Andrew H. Song, Umut Isik, Arvindh Krishnaswamy
Supervised deep learning has recently gained significant attention for speech enhancement.
Ranked #2 on Speech Enhancement on CHiME-3
no code implementations • 25 Aug 2019 • Thomas Chang, Bahareh Tolooshams, Demba Ba
We introduce a class of neural networks, termed RandNet, for learning representations using compressed random measurements of data of interest, such as images.
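Concretely, "compressed random measurements" means the network never sees the raw data, only its projection through a fixed random sensing matrix. A minimal sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 784, 200                            # ambient and compressed dims (illustrative)
A = rng.normal(size=(m, d)) / np.sqrt(m)   # fixed random sensing matrix
x = rng.normal(size=d)                     # stand-in for a vectorized image
y = A @ x                                  # the network trains on y, never on x
```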
no code implementations • 23 Jul 2019 • Javier Zazo, Bahareh Tolooshams, Demba Ba
Motivated by the empirically observed properties of scale and detail coefficients of images in the wavelet domain, we propose a hierarchical deep generative model of piecewise-smooth signals that is a recursion across scales: the low-pass scale coefficients at one layer are obtained by filtering the scale coefficients at the next layer and adding a high-pass detail innovation obtained by filtering a sparse vector.
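The recursion is easy to state in code. A hedged synthesis sketch, coarse to fine, assuming factor-2 upsampling and "same"-mode convolutions (the filters themselves are illustrative):

```python
import numpy as np

def synthesize(coarse, lowpass, highpass, innovations):
    """Filter the scale coefficients from the layer above and add a
    filtered sparse innovation, repeating across scales.

    innovations: one sparse vector per scale, each with the length of
    its (finer) scale.
    """
    s = coarse
    for v in innovations:
        s = np.repeat(s, 2)                              # next-finer grid
        s = (np.convolve(s, lowpass, mode="same")        # low-pass scale part
             + np.convolve(v, highpass, mode="same"))    # high-pass detail part
    return s
```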
1 code implementation • ICML 2020 • Bahareh Tolooshams, Andrew H. Song, Simona Temereanca, Demba Ba
We introduce a class of auto-encoder neural networks tailored to data from the natural exponential family (e.g., count data).
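For the count-data case, the decoder output can be read as the natural parameter of a Poisson likelihood, and training minimizes the corresponding negative log-likelihood. A minimal sketch with illustrative sizes (not the paper's exact architecture):

```python
import torch
import torch.nn as nn

n, k = 100, 20
encoder = nn.Sequential(nn.Linear(n, k), nn.ReLU())
decoder = nn.Linear(k, n)

x = torch.poisson(2.0 * torch.ones(8, n))              # toy count data
log_rate = decoder(encoder(x))                         # natural (log-link) parameter
loss = nn.PoissonNLLLoss(log_input=True)(log_rate, x)  # Poisson NLL on counts
loss.backward()
```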
1 code implementation • 18 Apr 2019 • Bahareh Tolooshams, Sourav Dey, Demba Ba
Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate Expectation-Maximization algorithm to develop autoencoders that enable the simultaneous training of the dictionary and regularization parameter (ReLU bias).
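A compact way to see the construction: the encoder unrolls a few proximal-gradient steps in which a learnable ReLU bias plays the role of the regularization parameter, and the decoder reuses the same dictionary, so backpropagation fits both at once. A hedged sketch in the spirit of this construction, with illustrative sizes and step count:

```python
import torch
import torch.nn as nn

class DictLearningAE(nn.Module):
    """Encoder = unrolled proximal-gradient steps with a learnable ReLU
    bias (the nonnegative soft threshold, bias ~ lambda * step);
    decoder = the shared dictionary."""
    def __init__(self, n, k, n_steps=10, step=0.1):
        super().__init__()
        self.D = nn.Parameter(0.1 * torch.randn(n, k))  # dictionary
        self.bias = nn.Parameter(torch.tensor(0.1))     # regularization parameter
        self.n_steps, self.step = n_steps, step

    def forward(self, x):                 # x: (batch, n); codes kept nonnegative
        z = torch.zeros(x.size(0), self.D.size(1), device=x.device)
        for _ in range(self.n_steps):
            r = z + self.step * (x - z @ self.D.T) @ self.D
            z = torch.relu(r - self.bias)  # ReLU bias acts as the threshold
        return z @ self.D.T                # decoder shares the dictionary
```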
1 code implementation • 12 Jul 2018 • Bahareh Tolooshams, Sourav Dey, Demba Ba
We demonstrate the ability of CRsAE to recover the underlying dictionary and characterize its sensitivity as a function of SNR.