no code implementations • 19 Feb 2022 • Bugra Can, Mert Gurbuzbalaban, Necdet Serhat Aybat
In this work, we consider strongly convex strongly concave (SCSC) saddle point (SP) problems $\min_{x\in\mathbb{R}^{d_x}}\max_{y\in\mathbb{R}^{d_y}}f(x, y)$ where $f$ is $L$-smooth, $f(\cdot, y)$ is $\mu$-strongly convex for every $y$, and $f(x, \cdot)$ is $\mu$-strongly concave for every $x$.
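To make the SCSC problem class concrete, here is a minimal sketch that runs plain simultaneous gradient descent-ascent on a simple SCSC quadratic; this is not the paper's method, and the coupling matrix `B`, the dimensions, and the step size are illustrative assumptions.

```python
import numpy as np

# Minimal SCSC example: f(x, y) = (mu/2)||x||^2 + x^T B y - (mu/2)||y||^2
# is mu-strongly convex in x and mu-strongly concave in y, with unique
# saddle point (0, 0). Plain simultaneous gradient descent-ascent (GDA)
# is shown only to illustrate the problem class, not the paper's algorithm.
rng = np.random.default_rng(0)
dx, dy, mu = 5, 3, 1.0
B = rng.standard_normal((dx, dy))   # bilinear coupling (assumed)

x, y = rng.standard_normal(dx), rng.standard_normal(dy)
eta = 0.02                          # small step for stable convergence
for _ in range(2000):
    gx = mu * x + B @ y             # grad_x f: descend in x
    gy = B.T @ x - mu * y           # grad_y f: ascend in y
    x, y = x - eta * gx, y + eta * gy

print(np.linalg.norm(x), np.linalg.norm(y))  # both tend to 0, the saddle point
```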
1 code implementation • 7 Jun 2021 • Saeed Soori, Bugra Can, Baourun Mu, Mert Gürbüzbalaban, Maryam Mehri Dehnavi
This work proposes a time-efficient Natural Gradient Descent method, called TENGraD, with linear convergence guarantees.
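TENGraD's efficiency tricks are not reproduced here; the hedged sketch below only shows the generic damped natural-gradient update that NGD methods compute. The helper name, the empirical-Fisher construction from per-sample gradients, and the damping value are all illustrative assumptions, not TENGraD's API.

```python
import numpy as np

# Generic damped natural gradient step: theta <- theta - eta * F^{-1} g,
# where F is an empirical Fisher matrix built from per-sample gradients.
# This naive O(d^3) solve is exactly the cost methods like TENGraD avoid;
# the function and its arguments are hypothetical.
def natural_gradient_step(theta, grad, J, eta=0.1, damping=1e-3):
    """J: (num_samples, d) matrix whose rows are per-sample gradients."""
    F = J.T @ J / J.shape[0] + damping * np.eye(theta.size)  # damped empirical Fisher
    return theta - eta * np.linalg.solve(F, grad)
```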
no code implementations • NeurIPS 2020 • Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gürbüzbalaban, Stefanie Jegelka, Hongzhou Lin
We introduce a framework for designing primal methods in the decentralized optimization setting, where local functions are smooth and strongly convex.
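One representative primal method in this setting is decentralized gradient descent (DGD), sketched below; it is not necessarily among the methods the framework designs, and the uniform mixing matrix, local quadratics, and step size are assumptions for illustration.

```python
import numpy as np

# Decentralized gradient descent (DGD): n agents each hold a local smooth,
# strongly convex objective f_i(x) = 0.5 * a_i * (x - b_i)^2 and alternate
# averaging with neighbors (mixing matrix W) and a local gradient step.
n = 4
W = np.full((n, n), 1.0 / n)        # doubly stochastic uniform mixing (assumed)
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 2.0, 3.0])

x = np.zeros(n)                     # one scalar iterate per agent
eta = 0.05
for _ in range(500):
    grads = a * (x - b)             # local gradients
    x = W @ x - eta * grads         # mix with neighbors, then descend

# Agents cluster near the global minimizer sum(a*b)/sum(a) = 2.0, up to the
# O(eta) bias typical of constant-step DGD.
print(x)
```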
no code implementations • 19 Jul 2019 • Saeed Soori, Bugra Can, Mert Gurbuzbalaban, Maryam Mehri Dehnavi
ASYNC is a framework that supports the implementation of asynchrony and history for optimization methods on distributed computing platforms.
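As a rough illustration of the effect asynchrony introduces (gradients computed at stale iterates), here is a toy sequential simulation; it is not the ASYNC API, and the delay model and function name are assumptions.

```python
from collections import deque

# Toy model of asynchrony: the update at step k applies a gradient that was
# computed `delay` iterations earlier, as when a slow worker reports late.
def sgd_with_delay(grad, x0, eta=0.1, delay=2, steps=100):
    history = deque([x0] * (delay + 1), maxlen=delay + 1)
    x = x0
    for _ in range(steps):
        stale_x = history[0]            # oldest stored iterate
        x = x - eta * grad(stale_x)     # descend using the stale gradient
        history.append(x)
    return x

# Example on f(x) = 0.5 * x^2 (gradient x): still converges for small eta.
print(sgd_with_delay(lambda x: x, x0=5.0))
```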
no code implementations • 22 Jan 2019 • Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu
In the special case of strongly convex quadratic objectives, we show accelerated linear rates in the $p$-Wasserstein metric for any $p\geq 1$, with improved sensitivity to noise, for both Nesterov's accelerated gradient (AG) and the heavy-ball method (HB), through a non-asymptotic analysis under some additional assumptions on the noise structure.
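The sketch below runs the two iterations just mentioned, AG and HB, with additive gradient noise on a strongly convex quadratic; the textbook parameter choices and the noise level are assumptions, not the tuned parameters from the paper's analysis.

```python
import numpy as np

# AG and HB with additive gradient noise on f(x) = 0.5 x^T A x (mu=1, L=10).
# Standard textbook step/momentum parameters (assumed), not the paper's tuning.
rng = np.random.default_rng(1)
A = np.diag([1.0, 10.0])
mu, L, sigma = 1.0, 10.0, 0.01

def noisy_grad(x):
    return A @ x + sigma * rng.standard_normal(x.shape)

alpha = 1.0 / L
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2  # HB momentum
theta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))        # AG momentum

x_hb = x_prev = np.array([5.0, 5.0])
x_ag = y = np.array([5.0, 5.0])
for _ in range(200):
    # Heavy ball: gradient step plus momentum on the previous displacement.
    x_hb, x_prev = x_hb - alpha * noisy_grad(x_hb) + beta * (x_hb - x_prev), x_hb
    # Nesterov AG: gradient step at the extrapolated point y.
    x_new = y - alpha * noisy_grad(y)
    y = x_new + theta * (x_new - x_ag)
    x_ag = x_new

# Both iterates decay linearly until they settle at a noise floor set by sigma.
print(np.linalg.norm(x_hb), np.linalg.norm(x_ag))
```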