Search Results for author: Arman Adibi

Found 9 papers, 1 paper with code

DASA: Delay-Adaptive Multi-Agent Stochastic Approximation

no code implementations25 Mar 2024 Nicolo Dal Fabbro, Arman Adibi, H. Vincent Poor, Sanjeev R. Kulkarni, Aritra Mitra, George J. Pappas

We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
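The parallel-SA setting can be sketched on a toy quadratic objective; the objective, step size, and agent count below are all illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents each observe noisy gradients of a common
# target theta_star and send their updates to a central server.
N, steps, lr = 4, 2000, 0.05
theta_star = 3.0
theta = 0.0  # server's shared iterate

for t in range(steps):
    # Each agent computes a noisy gradient of 0.5 * (theta - theta_star)**2
    grads = [(theta - theta_star) + rng.normal(0.0, 1.0) for _ in range(N)]
    # The server averages the N parallel updates (noise variance shrinks by 1/N)
    theta -= lr * float(np.mean(grads))
```

Averaging over agents is what yields the linear speedup in $N$ studied in this line of work.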

Tasks: Avg, Q-Learning, +1 more

Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling

no code implementations19 Feb 2024 Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra

Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling.

Tasks: Avg, Multi-agent Reinforcement Learning, +1 more

Score-Based Methods for Discrete Optimization in Deep Learning

no code implementations15 Oct 2023 Eric Lei, Arman Adibi, Hamed Hassani

One class of these problems involves objective functions which depend on neural networks, but optimization variables which are discrete.

Min-Max Optimization under Delays

no code implementations13 Jul 2023 Arman Adibi, Aritra Mitra, Hamed Hassani

Motivated by this gap, we examine the performance of standard min-max optimization algorithms with delayed gradient updates.
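A minimal sketch of min-max optimization under delays, assuming a toy strongly-convex-strongly-concave objective and a fixed delay; this is an illustrative gradient descent-ascent loop, not the paper's algorithm:

```python
from collections import deque

# Toy objective: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, so
# grad_x = x + y (descend) and grad_y = x - y (ascend).
def delayed_gda(tau=5, lr=0.05, steps=500, x=1.0, y=-1.0):
    buf = deque()  # in-flight (grad_x, grad_y) pairs
    for _ in range(steps):
        buf.append((x + y, x - y))  # gradient computed at the current iterate
        if len(buf) > tau:
            gx, gy = buf.popleft()  # apply the tau-step-old gradient
            x -= lr * gx            # descent step on x
            y += lr * gy            # ascent step on y
    return x, y

x, y = delayed_gda()
```

For small enough step sizes relative to the delay, the iterates still approach the saddle point at the origin; large delays with aggressive step sizes can destabilize the dynamics, which is the regime such analyses characterize.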

Tasks: Adversarial Robustness, Stochastic Optimization

Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds

no code implementations6 Jun 2022 Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani

We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret.

Distributed Statistical Min-Max Learning in the Presence of Byzantine Agents

no code implementations7 Apr 2022 Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani

Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning.

Minimax Optimization: The Case of Convex-Submodular

no code implementations1 Nov 2021 Arman Adibi, Aryan Mokhtari, Hamed Hassani

Prior literature has thus far mainly focused on studying such problems in the continuous domain, e.g., convex-concave minimax optimization is now understood to a significant extent.

Submodular Meta-Learning

1 code implementation NeurIPS 2020 Arman Adibi, Aryan Mokhtari, Hamed Hassani

Motivated by this terminology, we propose a novel meta-learning framework in the discrete domain where each task is equivalent to maximizing a set function under a cardinality constraint.
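The per-task subproblem, maximizing a monotone submodular set function under a cardinality constraint, is classically handled by the greedy rule, which achieves a $1 - 1/e$ approximation. A sketch with a made-up coverage function (the data below is illustrative, not from the paper):

```python
# Classic greedy for monotone submodular maximization under |S| <= k.
def greedy(ground, f, k):
    S = set()
    for _ in range(k):
        # Pick the element with the largest marginal gain f(S + e) - f(S)
        e = max(ground - S, key=lambda e: f(S | {e}) - f(S))
        S.add(e)
    return S

# Coverage function: f(S) = size of the union of the chosen sets
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
S = greedy(set(sets), f, k=2)
```

Here greedy first picks set 0 (gain 3) and then any of the remaining sets (each with gain 1), covering 4 elements in total.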

Tasks: Meta-Learning

Optimal Algorithms for Submodular Maximization with Distributed Constraints

no code implementations30 Sep 2019 Alexander Robey, Arman Adibi, Brent Schlotfeldt, George J. Pappas, Hamed Hassani

Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication.
