Search Results for author: Morteza Mardani

Found 22 papers, 5 papers with code

Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

no code implementations • 17 May 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci

Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image-related tasks.

Inductive Bias

Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers

1 code implementation • 24 Nov 2021 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro

AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.

Operator learning • Representation Learning
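
To make the token-mixing idea above concrete, here is a minimal NumPy sketch of FFT-based mixing along the token dimension. It is only an illustration of the general mechanism, not the AFNO reference implementation; AFNO's block-diagonal weights, frequency-domain MLP, and adaptive soft-thresholding are omitted, and the weight matrix W below is a hypothetical stand-in.

import numpy as np

def fft_token_mix(x, W):
    # x: (num_tokens, channels) real tokens; W: (channels, channels) complex weights shared across frequencies
    x_freq = np.fft.rfft(x, axis=0)                      # global transform along the token dimension
    x_freq = x_freq @ W                                  # per-frequency channel mixing, weights shared across frequencies
    return np.fft.irfft(x_freq, n=x.shape[0], axis=0)    # back to token space

tokens, channels = 64, 16
x = np.random.randn(tokens, channels)
W = (np.random.randn(channels, channels) + 1j * np.random.randn(channels, channels)) / channels
y = fft_token_mix(x, W)   # same shape as x; because W is shared across frequencies,
                          # the same operator can be applied at any token count (resolution)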

Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators

no code implementations • ICLR 2022 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro

AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.

Operator learning • Representation Learning

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

1 code implementation • ICLR 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci

In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and for a variety of generators expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches, or can be represented as convex-concave games.

Image Generation
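
For context, the setting is the usual Wasserstein GAN minimax problem with a two-layer ReLU discriminator; in schematic notation (mine, not the paper's, with the Lipschitz/weight-norm constraint on the discriminator left implicit),

\min_{\theta}\ \max_{W,\alpha}\ \frac{1}{n}\sum_{i=1}^{n} D_{W,\alpha}(x_i)\;-\;\frac{1}{m}\sum_{j=1}^{m} D_{W,\alpha}\big(G_\theta(z_j)\big),
\qquad
D_{W,\alpha}(x)\;=\;\sum_{k}\alpha_k\,\big(w_k^{\top}x\big)_{+}.

The paper characterizes when this nonconvex-nonconcave problem admits an equivalent convex program or a convex-concave game, depending on the generator class.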

Convex Regularization Behind Neural Reconstruction

no code implementations • ICLR 2021 • Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John Pauly

Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems.

Denoising

Neural FFTs for Universal Texture Image Synthesis

no code implementations • NeurIPS 2020 • Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, Bryan Catanzaro

Conventional CNNs, recently adopted for synthesis, must be trained and tested on the same set of images and fail to generalize to unseen images.

Image Generation • Texture Synthesis

Risk Quantification in Deep MRI Reconstruction

no code implementations • 23 Oct 2020 • Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John M. Pauly

Reliable medical image recovery is crucial for accurate patient diagnoses, but little prior work has centered on quantifying uncertainty when using non-transparent deep learning approaches to reconstruct high-quality images from limited measured data.

MRI Reconstruction

Spectral Decomposition in Deep Networks for Segmentation of Dynamic Medical Images

no code implementations • 30 Sep 2020 • Edgar A. Rios Piedra, Morteza Mardani, Frank Ong, Ukash Nakarmi, Joseph Y. Cheng, Shreyas Vasanawala

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a multi-phase technique routinely used in clinical practice.

Wasserstein GANs for MR Imaging: from Paired to Unpaired Training

no code implementations • 15 Oct 2019 • Ke Lei, Morteza Mardani, John M. Pauly, Shreyas S. Vasanawala

The reconstruction networks consist of a generator that suppresses input image artifacts and a discriminator that uses a pool of (unpaired) labels to adjust the reconstruction quality.

Image Reconstruction

Degrees of Freedom Analysis of Unrolled Neural Networks

no code implementations • 10 Jun 2019 • Morteza Mardani, Qingyun Sun, Vardan Papyan, Shreyas Vasanawala, John Pauly, David Donoho

Leveraging Stein's Unbiased Risk Estimator (SURE), this paper analyzes the generalization risk, with its bias and variance components, for recurrent unrolled networks.

Image Restoration
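
For reference, the SURE identity underlying this kind of analysis is standard: for measurements y = x + n with n ~ N(0, σ²I) in R^d and a weakly differentiable estimator f(y),

\mathrm{SURE}(f,y) \;=\; \|y - f(y)\|_2^2 \;-\; d\,\sigma^2 \;+\; 2\sigma^2\,\nabla_y\!\cdot f(y),
\qquad
\mathbb{E}\big[\mathrm{SURE}(f,y)\big] \;=\; \mathbb{E}\,\big\|f(y)-x\big\|_2^2,

and the divergence term equals the estimator's (weak) degrees of freedom, \mathrm{df}(f) = \sigma^{-2}\sum_i \mathrm{Cov}\big(f_i(y), y_i\big) = \mathbb{E}\big[\nabla_y\!\cdot f(y)\big], which connects the SURE analysis to the degrees-of-freedom view in the title.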

Uncertainty Quantification in Deep MRI Reconstruction

no code implementations • 31 Jan 2019 • Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John Pauly

Reliable MRI is crucial for accurate interpretation in therapeutic and diagnostic tasks.

MRI Reconstruction

Neural Proximal Gradient Descent for Compressive Imaging

1 code implementation • NeurIPS 2018 • Morteza Mardani, Qingyun Sun, Shreyas Vasanawala, Vardan Papyan, Hatef Monajemi, John Pauly, David Donoho

Recovering high-resolution images from limited sensory data typically leads to a serious ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information.
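
The title names the algorithmic template: proximal gradient descent in which the proximal (denoising) step is replaced by a learned network. Below is a minimal NumPy sketch of the classical iteration for a linear inverse problem y = A x + noise, with soft-thresholding standing in for the learned proximal; it illustrates the recursion only, not the paper's trained model.

import numpy as np

def soft_threshold(v, lam):
    # sparsity-promoting proximal operator; a trained CNN plays this role in the paper
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_gradient(y, A, step=0.1, lam=0.05, iters=100):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                           # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)    # proximal step after the gradient step
    return x

A = np.random.randn(30, 100) / np.sqrt(30)   # underdetermined measurement operator (toy stand-in)
x_true = np.zeros(100); x_true[:5] = 1.0     # sparse ground-truth signal
y = A @ x_true + 0.01 * np.random.randn(30)
x_hat = proximal_gradient(y, A)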

Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery

no code implementations • 27 Nov 2017 • Morteza Mardani, Hatef Monajemi, Vardan Papyan, Shreyas Vasanawala, David Donoho, John Pauly

Building effective priors is, however, challenged by the low training and testing overhead dictated by real-time tasks, and by the need to retrieve visually "plausible" and physically "feasible" images with minimal hallucination.

Denoising • MRI Reconstruction

Deep Generative Adversarial Networks for Compressed Sensing Automates MRI

2 code implementations • 31 May 2017 • Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, Lei Xing

A multilayer convolutional neural network is then jointly trained on diagnostic-quality images to discriminate the projection quality.

MRI Reconstruction

Online Categorical Subspace Learning for Sketching Big Data with Misses

no code implementations • 27 Sep 2016 • Yanning Shen, Morteza Mardani, Georgios B. Giannakis

The deterministic Probit and Tobit models treat data as quantized values of an analog-valued process lying in a low-dimensional subspace, while the probabilistic Logit model relies on low dimensionality of the data log-likelihood ratios.

Quantization
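
To make the modeling assumptions concrete (standard forms in my notation, not the paper's), let u_i, v_j be low-dimensional factors and y_{ij} ∈ {±1} a categorical datum. The Probit model posits

y_{ij} \;=\; \operatorname{sign}\!\big(u_i^{\top} v_j + e_{ij}\big), \qquad e_{ij} \sim \mathcal{N}(0,\sigma^2),

with Tobit handling censored (clipped) observations of u_i^{\top} v_j + e_{ij}, while the Logit model places the low-rank structure on the log-odds,

\log \frac{\Pr(y_{ij} = +1)}{\Pr(y_{ij} = -1)} \;=\; u_i^{\top} v_j .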

Tracking Tensor Subspaces with Informative Random Sampling for Real-Time MR Imaging

no code implementations • 14 Sep 2016 • Morteza Mardani, Georgios B. Giannakis, Kamil Ugurbil

Alternating majorization-minimization is adopted to develop online algorithms that recursively update the reconstruction upon arrival of a new undersampled k-space frame.

Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors

no code implementations • 17 Apr 2014 • Morteza Mardani, Gonzalo Mateos, Georgios B. Giannakis

In this context, the present paper brings the benefits of rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from incomplete streaming data.

Imputation
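
As a rough illustration of the subspace-tracking-with-misses idea (a plain stochastic-gradient sketch in NumPy; the paper's exponentially weighted least-squares recursions, regularization, and tensor extensions are omitted), each incoming vector is fit on its observed entries, the subspace is nudged toward the residual, and the missing entries are imputed from the fit:

import numpy as np

def streaming_subspace_impute(stream, d, rank=5, step=0.1):
    # stream: iterable of (y, mask) pairs, y a length-d vector observed on boolean mask
    U = np.linalg.qr(np.random.randn(d, rank))[0]   # current subspace estimate
    imputed = []
    for y, mask in stream:
        q, *_ = np.linalg.lstsq(U[mask], y[mask], rcond=None)  # fit coefficients on observed entries
        resid = y[mask] - U[mask] @ q                          # residual on observed entries
        U[mask] += step * np.outer(resid, q)                   # stochastic-gradient subspace update
        imputed.append(U @ q)                                  # impute the full vector, including misses
    return U, np.array(imputed)

# toy usage: rank-3 streaming data with roughly 60% of entries observed per vector
d, r, T = 50, 3, 300
A, B = np.random.randn(d, r), np.random.randn(r, T)
data = [(A @ B[:, t], np.random.rand(d) < 0.6) for t in range(T)]
U_hat, Y_hat = streaming_subspace_impute(data, d, rank=r)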
