Search Results for author: Maryam Mehri Dehnavi

Found 4 papers, 2 papers with code

MKOR: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates

1 code implementation • NeurIPS 2023 • Mohammad Mozaffari, Sikan Li, Zhao Zhang, Maryam Mehri Dehnavi

This work proposes a Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 updates, called MKOR, that improves the training time and convergence properties of deep neural networks (DNNs).

Second-order methods
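The rank-1 idea in MKOR's title can be illustrated with the Sherman–Morrison identity: when a matrix (such as a Kronecker factor) receives a rank-1 update, its inverse can be refreshed in O(n²) instead of the O(n³) of a full re-inversion. A minimal NumPy sketch of that identity follows — this is the general algebra such optimizers build on, not the authors' exact update rule, and the decay factor `beta` is an assumed illustrative parameter:

```python
import numpy as np

def sherman_morrison_update(A_inv, v, beta=0.95):
    """Return the inverse of (beta * A + v v^T), given A_inv = A^{-1}.

    Sherman-Morrison: (B + v v^T)^{-1}
        = B^{-1} - (B^{-1} v)(B^{-1} v)^T / (1 + v^T B^{-1} v),
    an O(n^2) update instead of an O(n^3) re-inversion.
    """
    B_inv = A_inv / beta              # inverse of B = beta * A
    Bv = B_inv @ v
    return B_inv - np.outer(Bv, Bv) / (1.0 + v @ Bv)

# Sanity check against a direct inversion on a small SPD matrix.
rng = np.random.default_rng(0)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T                           # make A symmetric positive definite
v = rng.standard_normal(n)
A_inv_new = sherman_morrison_update(np.linalg.inv(A), v)
direct = np.linalg.inv(0.95 * A + np.outer(v, v))
assert np.allclose(A_inv_new, direct)
```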

TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion

1 code implementation • 7 Jun 2021 • Saeed Soori, Bugra Can, Baourun Mu, Mert Gürbüzbalaban, Maryam Mehri Dehnavi

This work proposes a time-efficient Natural Gradient Descent method, called TENGraD, with linear convergence guarantees.

Image Classification
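The "exact Fisher-block inversion" in TENGraD's title refers to inverting each per-layer Fisher block exactly rather than approximately; via the Woodbury identity this only requires solving an m × m system when the batch size m is smaller than the layer's parameter count. A hedged NumPy sketch of that algebra — the block form F = JᵀJ/m + λI and the names `J`, `damping` are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def natural_gradient_block(J, g, damping=1e-3):
    """Exact natural-gradient direction F^{-1} g for one Fisher block.

    F = J^T J / m + damping * I  (damped empirical Fisher of one layer).
    Woodbury identity:
        F^{-1} g = (g - J^T (damping*m*I + J J^T)^{-1} J g) / damping,
    so only an m x m system is solved when the batch size m is small.
    """
    m, _ = J.shape
    S = damping * m * np.eye(m) + J @ J.T   # small m x m system
    return (g - J.T @ np.linalg.solve(S, J @ g)) / damping

# Sanity check against forming and solving the full n x n block.
rng = np.random.default_rng(1)
m, n = 8, 32
J = rng.standard_normal((m, n))
g = rng.standard_normal(n)
lam = 1e-3
F = J.T @ J / m + lam * np.eye(n)
assert np.allclose(natural_gradient_block(J, g, lam), np.linalg.solve(F, g))
```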

ASYNC: A Cloud Engine with Asynchrony and History for Distributed Machine Learning

no code implementations • 19 Jul 2019 • Saeed Soori, Bugra Can, Mert Gürbüzbalaban, Maryam Mehri Dehnavi

ASYNC is a framework that supports the implementation of asynchrony and history for optimization methods on distributed computing platforms.

BIG-bench Machine Learning • Distributed Computing
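As a rough illustration of the "asynchrony" the snippet refers to, the loop below simulates a server applying gradients that workers computed at stale parameter snapshots, with a staleness-damped step size. This is a toy model of asynchronous SGD, not ASYNC's actual API, and the damping rule is an assumed choice:

```python
import numpy as np

def async_sgd_sim(grad_fn, x0, lr=0.1, delay=2, steps=400):
    """Toy simulation of asynchronous SGD with stale gradients.

    Each step applies a gradient evaluated at the parameter snapshot
    from `delay` server steps ago, and damps the step size by
    1 / (1 + delay) -- an illustrative stabilization, not ASYNC's rule.
    """
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for t in range(steps):
        stale = history[max(0, t - delay)]   # worker's lagged snapshot
        x = x - lr / (1 + delay) * grad_fn(stale)
        history.append(x.copy())
    return x

# Minimizing f(x) = 0.5 * x^2 (gradient = x) still converges despite staleness.
x_final = async_sgd_sim(lambda x: x, np.array([5.0]))
assert abs(x_final[0]) < 1e-3
```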

Avoiding Communication in Proximal Methods for Convex Optimization Problems

no code implementations • 24 Oct 2017 • Saeed Soori, Aditya Devarakonda, James Demmel, Mert Gürbüzbalaban, Maryam Mehri Dehnavi

We formulate the algorithm for two different optimization methods on the Lasso problem and show that the latency cost is reduced by a factor of k while bandwidth and floating-point operation costs remain the same.
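The factor-of-k latency saving comes from a k-step reorganization: instead of one latency-bound communication round per iteration, the method gathers everything k iterations will need (e.g. a k × k Gram block) in a single round and then performs the k updates locally. The toy coordinate-descent Lasso solver below shows the pattern — an assumption-laden illustration of the general k-step idea, not the authors' exact algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ca_cd_lasso(A, b, lam, k=4, n_outer=50, seed=0):
    """Toy k-step coordinate descent for min 0.5*||Ax-b||^2 + lam*||x||_1.

    Each outer loop picks k coordinates and fetches their k x k Gram
    block plus k residual inner products in ONE round (instead of k
    rounds of one inner product each); the k coordinate updates then
    run locally, correcting each inner product from the Gram block.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()              # residual r = b - A x
    col_sq = (A ** 2).sum(axis=0)
    rng = np.random.default_rng(seed)
    for _ in range(n_outer):
        idx = rng.choice(n, size=k, replace=False)
        G = A[:, idx].T @ A[:, idx]         # one "communication round":
        p = A[:, idx].T @ r                 # Gram block + k products
        deltas = np.zeros(k)
        for j in range(k):                  # k purely local updates
            i = idx[j]
            q = p[j] - G[j, :j] @ deltas[:j]   # current A[:,i]^T r
            xi = soft_threshold(x[i] + q / col_sq[i], lam / col_sq[i])
            deltas[j] = xi - x[i]
            x[i] = xi
        r -= A[:, idx] @ deltas             # sync residual once per round
    return x

rng = np.random.default_rng(42)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
x = ca_cd_lasso(A, b, lam)
assert obj(x) < obj(np.zeros(20))           # objective decreased
```

The bandwidth and flop counts are essentially unchanged, matching the snippet's claim: the same inner products are computed, just batched into one message per k updates.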
