2 code implementations • 3 Oct 2023 • Batu Ozturkler, Chao Liu, Benjamin Eckart, Morteza Mardani, Jiaming Song, Jan Kautz
However, diffusion models require careful tuning of inference hyperparameters on a validation set and are still sensitive to distribution shifts during testing.
no code implementations • 10 May 2023 • Julio A. Oscanoa, Frank Ong, Siddharth S. Iyer, Zhitao Li, Christopher M. Sandino, Batu Ozturkler, Daniel B. Ennis, Mert Pilanci, Shreyas S. Vasanawala
Results: First, we performed ablation experiments to validate the sketching matrix design on both Cartesian and non-Cartesian datasets.
no code implementations • 17 Oct 2022 • Dave Van Veen, Rogier van der Sluijs, Batu Ozturkler, Arjun Desai, Christian Bluethgen, Robert D. Boutin, Marc H. Willis, Gordon Wetzstein, David Lindell, Shreyas Vasanawala, John Pauly, Akshay S. Chaudhari
We propose using a coordinate network decoder for the task of super-resolution in MRI.
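A minimal sketch of a coordinate-network decoder of the kind referenced above, assuming a standard Fourier-feature MLP that maps normalized (x, y) coordinates to pixel intensities; the layer widths, the sinusoidal encoding, and the class name are illustrative choices, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CoordinateDecoder(nn.Module):
    """Maps normalized (x, y) coordinates in [-1, 1] to a pixel intensity."""

    def __init__(self, hidden: int = 256, n_freqs: int = 8):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 4 * n_freqs  # sin and cos for each of the 2 coordinates
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def positional_encoding(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2); Fourier features help an MLP represent high frequencies.
        freqs = 2.0 ** torch.arange(self.n_freqs, device=coords.device)
        angles = coords[..., None] * freqs * torch.pi        # (N, 2, n_freqs)
        feats = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return feats.flatten(start_dim=1)                    # (N, 4 * n_freqs)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.positional_encoding(coords))
```

Because the decoder is queried at continuous coordinates rather than on a fixed grid, it can be evaluated on a denser coordinate grid than the one it was trained on, which is what makes coordinate networks a natural fit for super-resolution.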
no code implementations • 4 Oct 2022 • Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic
Our results suggest that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs.
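A toy sketch of that split, with a hypothetical `llm_logprob(prompt, answer)` placeholder standing in for an actual model call; the "Think" step queries the model once per item, while the "Sum" step aggregates the returned log-probabilities entirely outside the model:

```python
import math
from typing import Callable, Dict, List

def thinksum_style_score(
    items: List[str],
    candidates: List[str],
    llm_logprob: Callable[[str, str], float],  # hypothetical: log P(answer | prompt)
) -> Dict[str, float]:
    """Think: one LLM query per (item, candidate) pair.
    Sum:  add the log-probabilities outside the model, so the final
          inference never hinges on a single prompt."""
    scores = {}
    for cand in candidates:
        # Summing per-item log-probs = log of the product of independent probabilities.
        scores[cand] = sum(llm_logprob(item, cand) for item in items)
    return scores

# Purely illustrative stand-in for a real model call.
def fake_logprob(item: str, cand: str) -> float:
    plausible = {("apple", "fruit"), ("pear", "fruit")}
    return math.log(0.9) if (item, cand) in plausible else math.log(0.05)

print(thinksum_style_score(["apple", "pear"], ["fruit", "vehicle"], fake_logprob))
```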
1 code implementation • 18 Jul 2022 • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D Desai, Christopher M Sandino, Shreyas Vasanawala, John M Pauly, Morteza Mardani, Mert Pilanci
However, they require several iterations of a large neural network to handle high-dimensional imaging tasks such as 3D MRI.
no code implementations • 17 May 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image-related tasks.
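For reference, the standard scaled dot-product self-attention that such analyses start from is (standard textbook form, not the paper's specific parameterization):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V,
\qquad Q = XW_Q,\quad K = XW_K,\quad V = XW_V
```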
no code implementations • NeurIPS 2021 Workshop on Deep Learning and Inverse Problems • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D Desai, John M. Pauly, Shreyas Vasanawala, Morteza Mardani, Mert Pilanci
Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction.
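As a reference point, a common unrolled model-based update alternates a learned denoiser D_theta with a data-consistency step against the measured k-space y under the forward operator A; this is the generic form of such methods, not necessarily the exact update used in this work:

```latex
x^{k+1} = \arg\min_{x}\; \lVert A x - y \rVert_2^2 + \lambda \lVert x - D_{\theta}(x^{k}) \rVert_2^2
        = \left(A^{H}A + \lambda I\right)^{-1}\!\left(A^{H} y + \lambda\, D_{\theta}(x^{k})\right)
```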
1 code implementation • ICLR 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci
In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and, for a variety of generators, expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches or represented as convex-concave games.
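The starting point is the standard Kantorovich-Rubinstein dual form of the Wasserstein GAN objective, a min-max problem over the generator G and a 1-Lipschitz discriminator D (standard formulation, independent of the convex reformulation developed in the paper):

```latex
\min_{G}\;\max_{\lVert D\rVert_{L}\le 1}\;
\mathbb{E}_{x\sim p_{\mathrm{data}}}\big[D(x)\big] \;-\; \mathbb{E}_{z\sim p_{z}}\big[D\!\big(G(z)\big)\big]
```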
no code implementations • ICLR 2022 • Tolga Ergen, Arda Sahiner, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Batch Normalization (BN) is a commonly used technique to accelerate and stabilize training of deep neural networks.
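For reference, BN normalizes each activation over a mini-batch B of size m and then applies a learned affine map with parameters gamma and beta (standard definition, not the paper's convex reformulation):

```latex
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}},
\qquad y_i = \gamma\, \hat{x}_i + \beta,
\qquad \mu_{\mathcal{B}} = \frac{1}{m}\sum_{i=1}^{m} x_i,
\qquad \sigma_{\mathcal{B}}^{2} = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_{\mathcal{B}})^{2}
```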
no code implementations • ICLR 2021 • Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John Pauly
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems.