Search Results for author: Zalan Fabian

Found 13 papers, 3 papers with code

Serpent: Scalable and Efficient Image Restoration via Multi-scale Structured State Space Models

no code implementations • 26 Mar 2024 • Mohammad Shahab Sepehri, Zalan Fabian, Mahdi Soltanolkotabi

The landscape of computational building blocks for efficient image restoration architectures is dominated by a combination of convolutional processing and various attention mechanisms.
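
As a hedged illustration of the alternative building block named in the title, here is a minimal numpy sketch of the discrete linear state space recurrence that structured state space (S4-style) blocks are built on; the diagonal state matrix and all shapes are illustrative assumptions, not the Serpent architecture:

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Minimal discrete linear state space recurrence:
        x[k] = A @ x[k-1] + B @ u[k]
        y[k] = C @ x[k]
    applied over a 1-D token sequence u of shape (L, d_in)."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B @ u_k
        ys.append(C @ x)
    return np.stack(ys)

rng = np.random.default_rng(0)
d_in, d_out, d_state, L = 3, 3, 8, 16
A = np.diag(rng.uniform(0.5, 0.99, d_state))  # stable diagonal state matrix
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
u = rng.normal(size=(L, d_in))                # toy "pixel" sequence
print(ssm_scan(u, A, B, C).shape)             # (16, 3)
```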

Image Restoration

Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models

no code implementations • 12 Sep 2023 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi

Our framework acts as a wrapper that can be combined with any latent diffusion-based baseline solver, imbuing it with sample-adaptivity and acceleration.
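
As a hedged sketch of what sample-adaptivity could look like, the toy function below maps an estimated degradation severity to a reverse-diffusion step budget; the severity score, step range, and linear mapping are all assumptions, not the paper's estimator:

```python
import numpy as np

def adaptive_num_steps(severity, min_steps=10, max_steps=100):
    """Map an estimated degradation severity in [0, 1] to a
    reverse-diffusion step budget: mildly degraded samples get few
    steps, severely degraded ones get many."""
    severity = float(np.clip(severity, 0.0, 1.0))
    return int(round(min_steps + severity * (max_steps - min_steps)))

# Hypothetical severity scores for three different measurements.
for s in (0.1, 0.5, 0.9):
    print(f"severity {s} -> {adaptive_num_steps(s)} steps")
```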

Computational Efficiency

mL-BFGS: A Momentum-based L-BFGS for Distributed Large-Scale Neural Network Optimization

no code implementations • 25 Jul 2023 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

Quasi-Newton methods still face significant challenges in training large-scale neural networks due to the added compute cost of Hessian-related computations and instability issues in stochastic training.
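
A minimal numpy sketch of a momentum-stabilized L-BFGS loop in the spirit the abstract describes: curvature pairs are built from momentum-averaged gradients to damp stochastic noise, then fed to the standard two-loop recursion. The hyperparameters, curvature check, and toy objective are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from collections import deque

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns approx H^{-1} @ grad."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)               # initial Hessian scaling
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def ml_bfgs_sketch(grad_fn, w, lr=0.1, beta=0.9, mem=10, steps=100):
    """Curvature pairs (s, y) use momentum-smoothed gradients m, which
    are far less noisy than raw stochastic gradients."""
    s_list, y_list = deque(maxlen=mem), deque(maxlen=mem)
    m = np.zeros_like(w)
    w_prev = m_prev = None
    for _ in range(steps):
        m = beta * m + (1 - beta) * grad_fn(w)   # momentum-smoothed gradient
        if w_prev is not None:
            s, y = w - w_prev, m - m_prev
            if s @ y > 1e-10:                    # cautious curvature check
                s_list.append(s); y_list.append(y)
        d = two_loop(m, list(s_list), list(y_list))
        w_prev, m_prev = w.copy(), m.copy()
        w = w - lr * d
    return w

# Toy quadratic with noisy gradients: minimize 0.5 * w^T diag(1..5) w.
rng = np.random.default_rng(0)
diag = np.arange(1.0, 6.0)
grad_fn = lambda w: diag * w + 0.01 * rng.normal(size=w.shape)
print(ml_bfgs_sketch(grad_fn, np.ones(5)))
```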

Stochastic Optimization

DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency

no code implementations • 25 Mar 2023 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi

In this work, we propose a novel framework for inverse problem solving: we assume that the observation comes from a stochastic degradation process that gradually degrades and adds noise to the original clean image.
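
A hedged sketch of a stochastic degradation process of this kind, using progressive Gaussian blur plus growing Gaussian noise as a stand-in forward model; the specific schedule is an assumption, since the framework admits general degradations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x0, t, T, max_blur=3.0, max_noise=0.3, rng=None):
    """Hypothetical forward process: at time t the clean image x0 is
    progressively blurred (degradation) and corrupted with Gaussian
    noise; t = 0 returns x0, t = T the fully degraded observation."""
    rng = rng or np.random.default_rng()
    frac = t / T
    x_t = gaussian_filter(x0, sigma=frac * max_blur)          # degrade
    x_t = x_t + frac * max_noise * rng.normal(size=x0.shape)  # add noise
    return x_t

x0 = np.zeros((32, 32))
x0[8:24, 8:24] = 1.0                  # toy clean image: a white square
observation = degrade(x0, t=10, T=10)
```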

Denoising • Image Restoration

HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction

2 code implementations • 15 Mar 2022 • Zalan Fabian, Berk Tınaz, Mahdi Soltanolkotabi

These models split input images into non-overlapping patches, embed the patches into lower-dimensional tokens, and utilize a self-attention mechanism that does not suffer from the aforementioned weaknesses of convolutional architectures.
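
The basic mechanism the abstract describes, patchify, embed, self-attend, can be sketched in a few lines of numpy; this is a single-head toy version, not HUMUS-Net itself, which uses windowed attention inside an unrolled multi-scale network:

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W) image into non-overlapping p x p patches,
    each flattened into a vector of length p * p."""
    H, W = img.shape
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p * p)

def self_attention(tokens, rng):
    """Single-head self-attention over a token sequence of shape (N, d)."""
    N, d = tokens.shape
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ V

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
patches = patchify(img, p=4)                  # 16 patches of dimension 16
tokens = patches @ rng.normal(size=(16, 8))   # embed into lower-dim tokens
print(self_attention(tokens, rng).shape)      # (16, 8)
```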

Ranked #1 on MRI Reconstruction on fastMRI Knee 8x (using extra training data)

Anatomy • MRI Reconstruction

SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks

no code implementations • 29 Sep 2021 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse at every iteration (e.g., KFAC); 2) convergence instability due to stochastic training (e.g., L-BFGS).
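
A hedged sketch of the two remedies the abstract implies: update curvature information only periodically (light) and build it from momentum-averaged gradients (stable). The collected pairs would feed a standard two-loop recursion as in the mL-BFGS sketch above; the period, momentum, and toy objective are illustrative assumptions:

```python
import numpy as np

def smoothed_curvature_pair(w, w_prev, m, m_prev, eps=1e-10):
    """Curvature pair built from momentum-averaged gradients m rather
    than raw stochastic gradients; reject pairs with bad curvature."""
    s, y = w - w_prev, m - m_prev
    return (s, y) if s @ y > eps else None

rng = np.random.default_rng(0)
diag = np.array([1.0, 4.0, 9.0])       # toy quadratic objective
w, m = np.ones(3), np.zeros(3)
w_prev = m_prev = None
beta, lr, period = 0.9, 0.05, 5        # refresh curvature every `period` steps
pairs = []
for step in range(1, 51):
    g = diag * w + 0.05 * rng.normal(size=3)   # noisy stochastic gradient
    m = beta * m + (1 - beta) * g
    if step % period == 0:                     # "light": infrequent updates
        if w_prev is not None:
            pair = smoothed_curvature_pair(w, w_prev, m, m_prev)
            if pair is not None:
                pairs.append(pair)
        w_prev, m_prev = w.copy(), m.copy()
    w -= lr * m                                # momentum steps in between
print(len(pairs), "stable curvature pairs collected")
```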

Second-order methods

Data augmentation for deep learning based accelerated MRI reconstruction

no code implementations • 1 Jan 2021 • Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi

Inspired by the success of data augmentation (DA) in classification problems, we propose a DA pipeline for image reconstruction tasks arising in medical imaging and explore its effectiveness at reducing the training data required in a variety of settings.
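
A hedged sketch of the key constraint in augmenting reconstruction data: transform the clean target, then re-simulate the measurement so the training pair stays physically consistent. The masked 2-D Fourier forward model below is a toy stand-in for an actual MRI acquisition:

```python
import numpy as np

def augment_pair(target, mask, rng):
    """Transform the clean target image, then re-simulate the measurement
    so the (measurement, target) training pair stays consistent."""
    aug = np.rot90(target, rng.integers(4))   # random 90-degree rotation
    if rng.random() < 0.5:
        aug = np.fliplr(aug)                  # random horizontal flip
    kspace = np.fft.fft2(aug) * mask          # undersampled "acquisition"
    return kspace, aug

rng = np.random.default_rng(0)
target = rng.normal(size=(64, 64))
mask = (rng.random((64, 64)) < 0.3).astype(float)  # toy sampling mask
measurement, aug_target = augment_pair(target, mask, rng)
```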

Data Augmentation • Image Restoration +1

Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks

2 code implementations • NeurIPS 2020 • Seyed Mohammadreza Mousavi Kalan, Zalan Fabian, A. Salman Avestimehr, Mahdi Soltanolkotabi

In this approach, a model trained on a source task, where plenty of labeled training data is available, is used as the starting point for training a model on a related target task with only a few labeled training examples.
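
The transfer setting the abstract describes can be sketched as warm-started training: initialize at the source-task weights and fine-tune on the few target samples. This toy least-squares example is illustrative only; the paper's contribution is minimax lower bounds for this setting, not an algorithm:

```python
import numpy as np

def fine_tune(w_source, X_tgt, y_tgt, lr=0.1, steps=200):
    """Warm-started training: start from the source-task weights and
    run gradient descent on the few labeled target samples."""
    w = w_source.copy()
    n = len(y_tgt)
    for _ in range(steps):
        grad = X_tgt.T @ (X_tgt @ w - y_tgt) / n   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
d, n_tgt = 10, 5                                 # only 5 target labels
w_src = rng.normal(size=d)                       # learned on the source task
w_tgt_true = w_src + 0.1 * rng.normal(size=d)    # related target task
X = rng.normal(size=(n_tgt, d))
y = X @ w_tgt_true
# The warm start keeps the error small despite very few target labels.
print(np.linalg.norm(fine_tune(w_src, X, y) - w_tgt_true))
```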

Transfer Learning

Generalization Guarantees for Neural Nets via Harnessing the Low-rankness of Jacobian

no code implementations • 25 Sep 2019 • Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi

We show that learning is fast over the information space: one can quickly train a model to zero training loss that also generalizes well.

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian

no code implementations • 12 Jun 2019 • Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi

We show that learning is fast over the information space: one can quickly train a model to zero training loss that also generalizes well.
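
A hedged numpy sketch of the object behind the "information space": the Jacobian of a one-hidden-layer network's predictions with respect to its hidden weights, whose top singular directions span the information space and whose spectral decay reflects low effective rank. The network size and random data are illustrative assumptions:

```python
import numpy as np

def jacobian_spectrum(X, W, v):
    """One-hidden-layer net f(x) = v^T relu(W x). Build the Jacobian of
    the predictions w.r.t. the hidden weights W and return its singular
    values; a few dominant ones span the information space."""
    n, d = X.shape
    k = W.shape[0]
    J = np.zeros((n, k * d))
    for i, x in enumerate(X):
        act = (W @ x > 0).astype(float)        # relu derivative
        J[i] = np.outer(v * act, x).ravel()    # d f(x_i) / d W
    return np.linalg.svd(J, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
W = rng.normal(size=(30, 20))
v = rng.normal(size=30)
svals = jacobian_spectrum(X, W, v)
print(svals[:5] / svals[0])   # rapid decay indicates low effective rank
```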
