no code implementations • 30 Jan 2023 • Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang
We study a game between autobidding algorithms that compete in an online advertising platform.
no code implementations • 28 Dec 2022 • Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang
Offline reinforcement learning (RL) aims to find an optimal policy for sequential decision-making using a pre-collected dataset, without further interaction with the environment.
no code implementations • 23 Oct 2022 • Sarath Pattathil, Kaiqing Zhang, Asuman Ozdaglar
We also generalize the results to certain function approximation settings.
no code implementations • 9 Jun 2022 • Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang
Minimax optimization has served as the backbone of many machine learning (ML) problems.
no code implementations • NeurIPS 2020 • Noah Golowich, Sarath Pattathil, Constantinos Daskalakis
We also show that the $O(1/\sqrt{T})$ rate is tight for all $p$-SCLI algorithms, which includes OG as a special case.
no code implementations • 13 Feb 2020 • Alireza Fallah, Asuman Ozdaglar, Sarath Pattathil
Next, we propose a multistage variant of stochastic GDA (M-GDA) that runs in multiple stages with a particular learning rate decay schedule and converges to the exact solution of the minimax problem.
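The multistage idea can be sketched as follows. This is a toy illustration, not the paper's construction: the quadratic objective, the noise model, the stage lengths, and the halving schedule are all illustrative assumptions, and `noisy_grad` / `multistage_sgda` are hypothetical names.

```python
import random

# Toy strongly-convex-strongly-concave problem:
# f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, whose saddle point is (0, 0).
# Gradients are observed with additive Gaussian noise.

def noisy_grad(x, y, sigma=0.1):
    gx = x + y + random.gauss(0.0, sigma)   # df/dx plus noise
    gy = x - y + random.gauss(0.0, sigma)   # df/dy plus noise
    return gx, gy

def multistage_sgda(x, y, eta0=0.2, stages=5, steps_per_stage=2000):
    # Run stochastic GDA in stages; halve the learning rate after
    # each stage so later stages average out the gradient noise.
    eta = eta0
    for _ in range(stages):
        for _ in range(steps_per_stage):
            gx, gy = noisy_grad(x, y)
            x -= eta * gx        # descent on x
            y += eta * gy        # ascent on y
        eta *= 0.5               # decay between stages
    return x, y
```

Freezing the learning rate within a stage lets the iterates contract toward the saddle point, while the between-stage decay shrinks the noise floor each stage.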
no code implementations • 31 Jan 2020 • Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar
In this paper we study the smooth convex-concave saddle point problem.
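For background (this is the conventional formulation of the problem class, not a restatement of the paper's exact assumptions):

```latex
\min_{x \in \mathbb{R}^m} \; \max_{y \in \mathbb{R}^n} \; f(x, y)
```

where $f$ is smooth, convex in $x$ for every fixed $y$, and concave in $y$ for every fixed $x$.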
no code implementations • 31 Oct 2019 • Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng
In this paper, we focus on solving a class of constrained non-convex non-concave saddle point problems in a decentralized manner by a group of nodes in a network.
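The decentralized pattern can be sketched as follows. This is only an illustration of the general mix-then-step template on a toy strongly-convex-strongly-concave objective, not the paper's constrained non-convex non-concave setting; the ring topology, mixing weights, and step size are all assumptions.

```python
import numpy as np

n = 4
# Ring-graph mixing matrix: each node keeps half its own value and
# takes a quarter from each of its two neighbors (doubly stochastic).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

eta = 0.1
rng = np.random.default_rng(0)
x = rng.standard_normal(n)   # each node's local copy of the min variable
y = rng.standard_normal(n)   # each node's local copy of the max variable

for _ in range(500):
    # Local gradients of f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 at each node,
    # then a consensus step (average with neighbors) combined with a
    # local gradient descent-ascent step.
    gx, gy = x + y, x - y
    x = W @ x - eta * gx
    y = W @ y + eta * gy

# All nodes reach consensus at the saddle point (0, 0).
```

The mixing step drives the nodes toward agreement while the local GDA step drives the consensus value toward the saddle point.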
no code implementations • 3 Jun 2019 • Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil
To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method.
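A toy illustration of the proximal point method these interpretations approximate (not the paper's analysis): for the bilinear problem $f(x, y) = xy$ with gradient operator $F(x, y) = (y, -x)$, the implicit step $z_{k+1} = z_k - \eta F(z_{k+1})$ can be solved in closed form by inverting $I + \eta A$.

```python
import numpy as np

# F(z) = A @ z for the bilinear problem f(x, y) = x * y.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
eta = 0.1

# Implicit proximal step z_next = z - eta * A @ z_next, solved exactly:
# z_next = (I + eta * A)^{-1} @ z.
prox_step = np.linalg.inv(np.eye(2) + eta * A)

z = np.array([1.0, 1.0])
for _ in range(2000):
    z = prox_step @ z
# z approaches the saddle point (0, 0).
```

EG and OGDA replace the unavailable gradient at the future iterate with an explicit approximation of it, which is what makes them tractable beyond cases with a closed-form proximal step.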
no code implementations • 24 Jan 2019 • Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil
In this paper, we consider solving saddle point problems using two variants of the Gradient Descent-Ascent algorithm: the Extra-gradient (EG) and the Optimistic Gradient Descent-Ascent (OGDA) methods.
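The two update rules can be sketched on a toy bilinear problem, $f(x, y) = xy$, on which plain gradient descent-ascent cycles while both EG and OGDA converge to the saddle point $(0, 0)$; the step size and iteration count below are illustrative.

```python
def grad(x, y):
    # f(x, y) = x * y, so df/dx = y and df/dy = x.
    return y, x

def extragradient(x, y, eta=0.1, steps=2000):
    # EG: step to a midpoint, then update using the gradient
    # evaluated at that midpoint (two gradient evaluations per step).
    for _ in range(steps):
        gx, gy = grad(x, y)
        x_mid, y_mid = x - eta * gx, y + eta * gy
        gx_mid, gy_mid = grad(x_mid, y_mid)
        x, y = x - eta * gx_mid, y + eta * gy_mid
    return x, y

def ogda(x, y, eta=0.1, steps=2000):
    # OGDA: one gradient evaluation per step, with an optimistic
    # correction that subtracts the previous gradient.
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
    return x, y

# Both methods drive the iterates toward the saddle point (0, 0).
x_eg, y_eg = extragradient(1.0, 1.0)
x_og, y_og = ogda(1.0, 1.0)
```

EG pays an extra gradient evaluation per iteration for its lookahead, while OGDA reuses the previous gradient to approximate the same correction.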