no code implementations • 20 Aug 2024 • Jaideep Pathak, Yair Cohen, Piyush Garg, Peter Harrington, Noah Brenowitz, Dale Durran, Morteza Mardani, Arash Vahdat, Shaoming Xu, Karthik Kashinath, Michael Pritchard
Storm-scale convection-allowing models (CAMs) are an important tool for predicting the evolution of thunderstorms and mesoscale convective systems that result in damaging extreme weather.
1 code implementation • 24 Jun 2024 • Nicolas Zilberstein, Morteza Mardani, Santiago Segarra
For constrained sampling, we focus on inverse problems in the latent space, which leads to an augmented variational formulation that strikes a good balance between compute, quality, and diversity.
no code implementations • 19 Jun 2024 • Peter Manshausen, Yair Cohen, Jaideep Pathak, Mike Pritchard, Piyush Garg, Morteza Mardani, Karthik Kashinath, Simon Byrne, Noah Brenowitz
Data assimilation of observational data into full atmospheric states is essential for weather forecast model initialization.
no code implementations • 14 May 2024 • Weili Nie, Sifei Liu, Morteza Mardani, Chao Liu, Benjamin Eckart, Arash Vahdat
To leverage the compositionality of large language models (LLMs), we introduce a new in-context learning approach to generate blob representations from text prompts.
no code implementations • 4 Apr 2024 • Jason Stock, Jaideep Pathak, Yair Cohen, Mike Pritchard, Piyush Garg, Dale Durran, Morteza Mardani, Noah Brenowitz
This work presents an autoregressive generative diffusion model (DiffObs) to predict the global evolution of daily precipitation, trained on a satellite observational product, and assessed with domain-specific diagnostics.
no code implementations • 8 Jan 2024 • Dejia Xu, Ye Yuan, Morteza Mardani, Sifei Liu, Jiaming Song, Zhangyang Wang, Arash Vahdat
To overcome these challenges, we introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image, eliminating the need for per-instance optimization.
2 code implementations • 3 Oct 2023 • Batu Ozturkler, Chao Liu, Benjamin Eckart, Morteza Mardani, Jiaming Song, Jan Kautz
However, diffusion models require careful tuning of inference hyperparameters on a validation set and are still sensitive to distribution shifts during testing.
no code implementations • 24 Sep 2023 • Morteza Mardani, Noah Brenowitz, Yair Cohen, Jaideep Pathak, Chieh-Yu Chen, Cheng-Chin Liu, Arash Vahdat, Mohammad Amin Nabian, Tao Ge, Akshay Subramaniam, Karthik Kashinath, Jan Kautz, Mike Pritchard
The model is trained to predict 2km data from a regional weather model over Taiwan, conditioned on a 25km global reanalysis.
2 code implementations • 5 Jun 2023 • Cagan Alkan, Morteza Mardani, Congyu Liao, Zhitao Li, Shreyas S. Vasanawala, John M. Pauly
Experiments on public 3D acquired MRI datasets show improved reconstruction quality of the proposed AutoSamp method over the prevailing variable density and variable density Poisson disc sampling for both compressed sensing and deep learning reconstructions.
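For context, the variable-density baseline mentioned above draws each k-space location with a probability that decays away from the center, where most of the signal energy lies. A minimal 1D sketch (the function name and the polynomial density law are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

def variable_density_mask(n, accel=4, decay=2.0, rng=None):
    """Random 1D variable-density k-space sampling mask.

    Sampling probability falls off polynomially with distance from the
    k-space center, so low frequencies are densely kept while high
    frequencies are undersampled to reach the target acceleration.
    """
    rng = np.random.default_rng(rng)
    r = np.abs(np.linspace(-1, 1, n))          # distance from k-space center
    p = (1.0 - r) ** decay                     # density peaks at the center
    p *= (n / accel) / p.sum()                 # scale to the target rate
    return rng.random(n) < np.clip(p, 0.0, 1.0)

# usage: roughly 4x-accelerated mask over 256 phase encodes
mask = variable_density_mask(256, accel=4, rng=0)
```

Learned approaches such as AutoSamp replace this hand-tuned density with sampling locations optimized end-to-end against the reconstructor.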
1 code implementation • 7 May 2023 • Morteza Mardani, Jiaming Song, Jan Kautz, Arash Vahdat
To cope with this challenge, we propose a variational approach that by design seeks to approximate the true posterior distribution.
no code implementations • 8 Aug 2022 • Thorsten Kurth, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, Animashree Anandkumar
Extreme weather amplified by climate change is causing increasingly devastating impacts across the globe.
1 code implementation • 18 Jul 2022 • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D Desai, Christopher M Sandino, Shreyas Vasanawala, John M Pauly, Morteza Mardani, Mert Pilanci
However, they require several iterations of a large neural network to handle high-dimensional imaging tasks such as 3D MRI.
no code implementations • 17 May 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image-related tasks.
5 code implementations • 22 Feb 2022 • Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, Animashree Anandkumar
FourCastNet accurately forecasts high-resolution, fast-timescale variables such as the surface wind speed, precipitation, and atmospheric water vapor.
3 code implementations • 24 Nov 2021 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro
AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.
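The continuous-global-convolution view of token mixing can be illustrated with a minimal FFT sketch: by the convolution theorem, a global circular convolution over the token grid is a pointwise product in frequency space. This is only an illustration of the principle (names and shapes are assumptions, not the AFNO implementation, which learns its frequency-domain mixing):

```python
import numpy as np

def spectral_token_mixing(x, w):
    """Mix tokens by pointwise multiplication in the Fourier domain.

    x: (H, W, C) token grid; w: (H, W//2+1, C) complex spectral weights.
    Multiplying Fourier coefficients elementwise realizes a global
    circular convolution over the whole token grid in O(N log N).
    """
    x_hat = np.fft.rfft2(x, axes=(0, 1))                  # to frequency domain
    x_hat = x_hat * w                                     # global mixing
    return np.fft.irfft2(x_hat, s=x.shape[:2], axes=(0, 1))

# usage: identity spectral weights leave the token grid unchanged
x = np.random.randn(8, 8, 3)
w = np.ones((8, 8 // 2 + 1, 3), dtype=complex)
y = spectral_token_mixing(x, w)
```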
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D Desai, John M. Pauly, Shreyas Vasanawala, Morteza Mardani, Mert Pilanci
Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction.
no code implementations • ICLR 2022 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro
AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.
1 code implementation • ICLR 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci
In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and for a variety of generators expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches, or can be represented as convex-concave games.
no code implementations • ICLR 2022 • Tolga Ergen, Arda Sahiner, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Batch Normalization (BN) is a commonly used technique to accelerate and stabilize training of deep neural networks.
no code implementations • ICLR 2021 • Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John Pauly
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems.
no code implementations • NeurIPS 2020 • Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, Bryan Catanzaro
Conventional CNNs, recently adopted for synthesis, require training and testing on the same set of images and fail to generalize to unseen images.
no code implementations • 23 Oct 2020 • Cagan Alkan, Morteza Mardani, Shreyas Vasanawala, John M. Pauly
Accelerating MRI scans requires optimal sampling of k-space data.
no code implementations • 23 Oct 2020 • Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John M. Pauly
Reliable medical image recovery is crucial for accurate patient diagnoses, but little prior work has centered on quantifying uncertainty when using non-transparent deep learning approaches to reconstruct high-quality images from limited measured data.
no code implementations • 30 Sep 2020 • Edgar A. Rios Piedra, Morteza Mardani, Frank Ong, Ukash Nakarmi, Joseph Y. Cheng, Shreyas Vasanawala
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a multi-phase technique routinely used in clinical practice.
no code implementations • 15 Oct 2019 • Ke Lei, Morteza Mardani, John M. Pauly, Shreyas S. Vasanawala
The reconstruction networks consist of a generator which suppresses the input image artifacts, and a discriminator using a pool of (unpaired) labels to adjust the reconstruction quality.
no code implementations • 10 Jun 2019 • Morteza Mardani, Qingyun Sun, Vardan Papyan, Shreyas Vasanawala, John Pauly, David Donoho
Leveraging Stein's Unbiased Risk Estimator (SURE), this paper analyzes the generalization risk, decomposed into its bias and variance components, for recurrent unrolled networks.
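For reference, SURE itself can be evaluated without access to the clean signal; a minimal Monte Carlo sketch using a single divergence probe (function name and probe scheme are illustrative assumptions):

```python
import numpy as np

def sure(f, y, sigma, eps=1e-4, rng=None):
    """Monte Carlo SURE estimate of the risk E||f(y) - x||^2.

    For y = x + noise with noise ~ N(0, sigma^2 I),
        SURE = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div f(y)
    is unbiased for the true risk and needs no ground truth x.
    The divergence is approximated with one random probe b:
        div f(y) ~= b . (f(y + eps*b) - f(y)) / eps.
    """
    rng = np.random.default_rng(rng)
    n = y.size
    b = rng.standard_normal(y.shape)
    div = b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps
    return float(np.sum((f(y) - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div)

# usage: SURE for a simple linear shrinkage denoiser f(y) = 0.8 * y
y = np.random.default_rng(0).standard_normal(100)
est = sure(lambda v: 0.8 * v, y, sigma=0.1, rng=1)
```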
1 code implementation • 19 Mar 2019 • Joseph Y. Cheng, Feiyu Chen, Christopher Sandino, Morteza Mardani, John M. Pauly, Shreyas S. Vasanawala
Data-driven learning provides a solution to address these challenges.
no code implementations • 31 Jan 2019 • Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John Pauly
Reliable MRI is crucial for accurate interpretation in therapeutic and diagnostic tasks.
1 code implementation • NeurIPS 2018 • Morteza Mardani, Qingyun Sun, Shreyas Vasanawala, Vardan Papyan, Hatef Monajemi, John Pauly, David Donoho
Recovering high-resolution images from limited sensory data typically leads to a serious ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information.
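One classical way to inject prior information into such an ill-posed inversion, and the template that learned unrolled solvers build on, is proximal gradient descent: alternate a gradient step on the data fit with a proximal step encoding the prior. A minimal sparse-recovery sketch (ISTA with an l1 prior; parameters are illustrative assumptions):

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Proximal gradient descent (ISTA) for the ill-posed problem y = A x.

    Each iteration takes a gradient step on ||A x - y||^2 / 2, then
    applies the proximal operator of the prior -- here soft-thresholding,
    the prox of the l1 norm. Learned variants replace the prox with a
    trained network.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                      # gradient of data fit
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x

# usage: recover a 3-sparse vector from 2x-undersampled Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.01, iters=500)
```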
no code implementations • 27 Nov 2017 • Morteza Mardani, Hatef Monajemi, Vardan Papyan, Shreyas Vasanawala, David Donoho, John Pauly
Building effective priors is, however, challenged by the low training and testing overhead dictated by real-time tasks, and by the need to retrieve visually "plausible" and physically "feasible" images with minimal hallucination.
2 code implementations • 31 May 2017 • Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, Lei Xing
A multilayer convolutional neural network is then jointly trained based on diagnostic quality images to discriminate the projection quality.
no code implementations • 27 Sep 2016 • Yanning Shen, Morteza Mardani, Georgios B. Giannakis
The deterministic Probit and Tobit models treat data as quantized values of an analog-valued process lying in a low-dimensional subspace, while the probabilistic Logit model relies on low dimensionality of the data log-likelihood ratios.
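As a concrete illustration of the Probit variant (notation here is mine, not the paper's): for an underlying matrix $X$ with $\operatorname{rank}(X)\le r$, each binary observation is a quantized, noisy view of a low-dimensional entry,
$$\Pr\left(Y_{ij}=1\right)=\Phi\!\left(X_{ij}/\sigma\right),$$
where $\Phi$ is the standard Gaussian CDF, so recovering $X$ from the quantized $Y$ amounts to low-rank estimation under the probit likelihood.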
no code implementations • 14 Sep 2016 • Morteza Mardani, Georgios B. Giannakis, Kamil Ugurbil
Alternating majorization-minimization is adopted to develop online algorithms that recursively update the reconstruction upon arrival of a new undersampled $k$-space frame.
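The flavor of such an online scheme can be sketched with a generic streaming subspace tracker: for each incoming undersampled frame, solve for its low-dimensional code against the current basis, then take a descent step on the basis, never revisiting past frames. All names, step sizes, and the masking model below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def stream_subspace(frames, masks, rank=2, mu=0.1, lr=0.05):
    """Online alternating minimization for streaming, undersampled frames.

    Per frame y_t observed on a boolean mask m_t:
      (i)  ridge-solve the code q_t for the current basis L, then
      (ii) take a gradient step on L using the residual at the samples.
    Returns the reconstruction L @ q_t for every frame.
    """
    d = frames.shape[1]
    L = np.linalg.qr(np.random.default_rng(1).standard_normal((d, rank)))[0]
    recon = []
    for y, m in zip(frames, masks):
        Lm = L[m]                                   # basis rows at sampled entries
        q = np.linalg.solve(Lm.T @ Lm + mu * np.eye(rank), Lm.T @ y[m])
        r = np.zeros(d)
        r[m] = Lm @ q - y[m]                        # residual on the samples
        L = L - lr * np.outer(r, q)                 # descent step on the basis
        recon.append(L @ q)
    return np.array(recon)

# usage: 30 streaming frames from a rank-2 process, ~60% of entries observed
rng = np.random.default_rng(0)
U = rng.standard_normal((40, 2))
frames = np.array([U @ rng.standard_normal(2) for _ in range(30)])
masks = np.array([rng.random(40) < 0.6 for _ in range(30)])
recon = stream_subspace(frames, masks)
```

Each frame is processed once with cost independent of the stream length, which is what makes the recursive update suitable for real-time reconstruction.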
no code implementations • 17 Apr 2014 • Morteza Mardani, Gonzalo Mateos, Georgios B. Giannakis
In this context, the present paper extends the benefits of rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from \emph{incomplete streaming} data.