no code implementations • 25 Mar 2024 • Nicolo Dal Fabbro, Arman Adibi, H. Vincent Poor, Sanjeev R. Kulkarni, Aritra Mitra, George J. Pappas
We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
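As a rough illustration of this setting (not the paper's algorithm), the sketch below simulates $N$ agents running local SA steps in parallel and periodically averaging their iterates at a central server. The mean-tracking update rule, step sizes, and communication period are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim = 8, 4                         # number of agents, parameter dimension
theta_star = rng.normal(size=dim)     # unknown target of the SA recursion

def sa_step(theta, step):
    """One local SA step: move toward a noisy observation of theta_star."""
    obs = theta_star + rng.normal(scale=1.0, size=dim)
    return theta + step * (obs - theta)

thetas = [np.zeros(dim) for _ in range(N)]
T, sync_every = 500, 10               # horizon and communication period (assumed)
for t in range(1, T + 1):
    step = 1.0 / t
    thetas = [sa_step(th, step) for th in thetas]
    if t % sync_every == 0:           # agents upload iterates; the server
        avg = np.mean(thetas, axis=0) # averages and broadcasts the consensus
        thetas = [avg.copy() for _ in range(N)]

print("error:", np.linalg.norm(np.mean(thetas, axis=0) - theta_star))
```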
no code implementations • 19 Feb 2024 • Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra
Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling.
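A minimal sketch of an SA recursion with delayed updates under Markovian (non-i.i.d.) sampling: samples come from a two-state Markov chain, and each update is applied using an iterate and a sample that are $\tau$ steps stale. The chain, the fixed delay, and the mean-tracking update are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Two-state Markov chain with transition matrix P (illustrative).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
values = np.array([0.0, 1.0])          # observation emitted in each state
pi = np.array([2 / 3, 1 / 3])          # stationary distribution of P
target = pi @ values                   # quantity the SA recursion tracks

tau = 5                                # fixed update delay (assumed)
theta, state = 0.0, 0
buffer = deque()                       # stores (iterate, sample) pairs
for t in range(1, 5000):
    state = rng.choice(2, p=P[state])  # Markovian (non-i.i.d.) sampling
    buffer.append((theta, values[state]))
    if len(buffer) > tau:              # apply an update computed tau steps ago
        old_theta, old_sample = buffer.popleft()
        theta = theta + (1.0 / t) * (old_sample - old_theta)

print(f"theta = {theta:.3f}, target = {target:.3f}")
```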
no code implementations • 15 Oct 2023 • Eric Lei, Arman Adibi, Hamed Hassani
One class of these problems involves objective functions that depend on neural networks but optimization variables that are discrete.
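One standard workaround for this hybrid structure (sketched below under assumptions not taken from the paper) is to relax the discrete selection to continuous membership probabilities, optimize them by gradient ascent through the network, and round at the end. The tiny random network, penalty weight, and top-$k$ rounding rule are all illustrative.

```python
import torch

torch.manual_seed(0)
n, k = 10, 3                          # ground-set size, cardinality budget

# Illustrative "neural" objective: a fixed random two-layer net scoring a
# 0/1 indicator vector of the chosen subset.
net = torch.nn.Sequential(torch.nn.Linear(n, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 1))
for p in net.parameters():
    p.requires_grad_(False)

logits = torch.zeros(n, requires_grad=True)   # continuous relaxation
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    probs = torch.sigmoid(logits)             # soft membership in [0, 1]^n
    # Softly enforce the budget sum(probs) <= k with a penalty (assumed).
    loss = -net(probs).squeeze() + 10.0 * torch.relu(probs.sum() - k) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

subset = torch.topk(torch.sigmoid(logits), k).indices   # round to a set
print("chosen items:", sorted(subset.tolist()))
```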
no code implementations • 13 Jul 2023 • Arman Adibi, Aritra Mitra, Hamed Hassani
Motivated by this gap, we examine the performance of standard min-max optimization algorithms with delayed gradient updates.
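A minimal sketch of gradient descent-ascent with delayed gradients: both players update using gradients evaluated at $\tau$-steps-old iterates. The strongly-convex-strongly-concave objective, delay, and step size below are illustrative assumptions, not the paper's setup.

```python
from collections import deque

# Saddle-point problem min_x max_y f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# strongly-convex--strongly-concave with unique saddle point (0, 0).
tau, eta = 3, 0.05                     # delay and step size (assumed)
x, y = 1.0, 1.0
hist = deque([(x, y)] * tau)           # buffer of stale iterates
for _ in range(2000):
    xs, ys = hist.popleft()            # tau-delayed iterate
    gx, gy = xs + ys, xs - ys          # grad_x f and grad_y f at the stale point
    x, y = x - eta * gx, y + eta * gy  # descent in x, ascent in y
    hist.append((x, y))

print(f"x = {x:.4f}, y = {y:.4f}")     # near the saddle (0, 0) for small eta
```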
no code implementations • 6 Jun 2022 • Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani
We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret.
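One standard collaborative scheme for this setting (a sketch under assumptions, not necessarily the paper's algorithm) has each agent run LinUCB-style updates on its local data and periodically upload its new sufficient statistics to the server, which merges and redistributes them. The arm set, noise level, exploration constant, and sync period below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, K, T, sync = 4, 5, 20, 400, 20   # agents, dimension, arms, horizon, period
theta_star = rng.normal(size=d); theta_star /= np.linalg.norm(theta_star)
arms = rng.normal(size=(K, d)); arms /= np.linalg.norm(arms, axis=1, keepdims=True)
best = (arms @ theta_star).max()

A_g, b_g = np.eye(d), np.zeros(d)           # statistics held by the server
dA = [np.zeros((d, d)) for _ in range(M)]   # per-agent deltas since last sync
db = [np.zeros(d) for _ in range(M)]
regret = 0.0
for t in range(1, T + 1):
    for m in range(M):
        A, b = A_g + dA[m], b_g + db[m]     # local view: global + own new data
        theta_hat = np.linalg.solve(A, b)
        Ainv = np.linalg.inv(A)
        ucb = arms @ theta_hat + 0.5 * np.sqrt(
            np.einsum('kd,dl,kl->k', arms, Ainv, arms))
        a = int(np.argmax(ucb))
        r = arms[a] @ theta_star + 0.1 * rng.normal()
        dA[m] += np.outer(arms[a], arms[a])
        db[m] += r * arms[a]
        regret += best - arms[a] @ theta_star
    if t % sync == 0:                       # agents upload deltas; server merges
        A_g += sum(dA); b_g += sum(db)
        dA = [np.zeros((d, d)) for _ in range(M)]
        db = [np.zeros(d) for _ in range(M)]

print(f"cumulative group regret: {regret:.2f}")
```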
no code implementations • 7 Apr 2022 • Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani
Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning.
no code implementations • 1 Nov 2021 • Arman Adibi, Aryan Mokhtari, Hamed Hassani
Prior literature has thus far mainly focused on studying such problems in the continuous domain; e.g., convex-concave minimax optimization is now understood to a significant extent.
1 code implementation • NeurIPS 2020 • Arman Adibi, Aryan Mokhtari, Hamed Hassani
Motivated by this terminology, we propose a novel meta-learning framework in the discrete domain where each task is equivalent to maximizing a set function under a cardinality constraint.
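Each task in this framework reduces to maximizing a set function under a cardinality constraint, $\max_{|S| \le k} f(S)$. Below is a minimal sketch of the classical greedy subroutine solving several such tasks independently; the facility-location objectives are illustrative, and the meta-level that shares structure across tasks is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, n_tasks = 30, 5, 3                  # ground-set size, budget, #tasks

def greedy(f, n, k):
    """Classical greedy for max_{|S| <= k} f(S); (1 - 1/e)-optimal when f
    is monotone submodular."""
    S = []
    for _ in range(k):
        gains = [(f(S + [e]) - f(S), e) for e in range(n) if e not in S]
        gain, e = max(gains)
        if gain <= 0:
            break
        S.append(e)
    return S

for task in range(n_tasks):
    # Illustrative task: facility-location objective on a random similarity
    # matrix (monotone submodular).
    W = rng.random((n, n))
    f = lambda S, W=W: W[:, S].max(axis=1).sum() if S else 0.0
    print(f"task {task}: S = {sorted(greedy(f, n, k))}")
```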
no code implementations • 30 Sep 2019 • Alexander Robey, Arman Adibi, Brent Schlotfeldt, George J. Pappas, Hamed Hassani
Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message-passing algorithm that achieves the tight $(1-1/e)$ approximation of the optimal global solution using only local computation and communication.
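Below is a simplified, single-machine simulation of the idea behind this kind of algorithm, not CDCG itself: agents hold local monotone DR-submodular objectives, pool their gradients (message passing is replaced here by exact averaging), and take Frank-Wolfe-style continuous-greedy steps over a cardinality polytope. The concave-over-modular objectives and exact averaging are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, k, T = 5, 12, 4, 50              # agents, dimension, budget, FW steps

# Each agent i holds a local monotone DR-submodular objective
# f_i(x) = sum_j W[i, j] * sqrt(x_j) (concave-over-modular, illustrative);
# the global objective is the average of the f_i.
W = rng.random((N, n))

def grad_i(i, x):
    return W[i] * 0.5 / np.sqrt(np.maximum(x, 1e-9))

x = np.zeros(n)
for t in range(T):
    # "Message passing": here, exact averaging of the agents' local gradients.
    g = np.mean([grad_i(i, x) for i in range(N)], axis=0)
    # Linear maximization over {v in [0,1]^n : sum(v) <= k}: top-k coordinates.
    v = np.zeros(n)
    v[np.argsort(g)[-k:]] = 1.0
    x += v / T                          # continuous-greedy (Frank-Wolfe) step

print("fractional solution per coordinate:", np.round(x, 2))
print("rounded set:", sorted(np.argsort(x)[-k:].tolist()))
```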