no code implementations • 25 Mar 2024 • Nicolò Dal Fabbro, Arman Adibi, H. Vincent Poor, Sanjeev R. Kulkarni, Aritra Mitra, George J. Pappas

We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
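A minimal sketch of the kind of setting described above: each agent runs local SA iterations toward a common fixed point (here, estimating a mean from noisy samples), and a server periodically averages their iterates. The function names, step size, and round structure are illustrative assumptions, not the paper's algorithm.

```python
import random

def local_sa_step(x, sample, alpha=0.1):
    # One stochastic-approximation update toward the fixed point (here, a mean).
    return x + alpha * (sample - x)

def parallel_sa(num_agents=8, rounds=50, local_steps=10, true_mean=3.0, seed=0):
    rng = random.Random(seed)
    x = 0.0  # server iterate, broadcast to all agents at the start of each round
    for _ in range(rounds):
        local_iterates = []
        for _ in range(num_agents):
            xi = x
            for _ in range(local_steps):
                sample = true_mean + rng.gauss(0, 1)  # noisy observation
                xi = local_sa_step(xi, sample)
            local_iterates.append(xi)
        x = sum(local_iterates) / num_agents  # server averages agents' iterates
    return x
```

Averaging over agents reduces the variance of the noise roughly by a factor of $N$, which is the intuition behind the linear speedup such schemes aim for.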

no code implementations • 4 Mar 2024 • Aritra Mitra

We ask: \textit{Is it possible to retain the simplicity of a projection-based analysis without actually performing a projection step in the algorithm?}

no code implementations • 19 Feb 2024 • Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra

Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling.

no code implementations • 27 Jan 2024 • Chenyu Zhang, Han Wang, Aritra Mitra, James Anderson

In response, we introduce FedSARSA, a novel federated on-policy reinforcement learning scheme, equipped with linear function approximation, to address these challenges and provide a comprehensive finite-time error analysis.

no code implementations • 2 Jan 2024 • Aritra Mitra, Lintao Ye, Vijay Gupta

Toward answering this question, we study a setting where a worker agent transmits quantized policy gradients (of the LQR cost) to a server over a noiseless channel with a finite bit-rate.
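To illustrate the finite bit-rate constraint in the setting above, here is a minimal sketch of gradient descent where only a uniformly quantized gradient reaches the server. The quadratic objective, bit budget, and parameter names are illustrative assumptions, not the paper's LQR formulation.

```python
def quantize(v, bits, v_max):
    """Uniform scalar quantizer with 2**bits levels on [-v_max, v_max]."""
    levels = 2 ** bits - 1
    clipped = max(-v_max, min(v_max, v))
    step = 2 * v_max / levels
    return round((clipped + v_max) / step) * step - v_max

def quantized_descent(x0=0.0, bits=8, v_max=10.0, lr=0.1, steps=200):
    """Gradient descent on f(x) = (x - 2)^2 where the server only ever sees
    a bits-bit quantized version of the worker's gradient."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * (x - 2.0)                  # exact local gradient
        x -= lr * quantize(grad, bits, v_max)   # finite bit-rate channel
    return x
```

The iterate settles within the quantizer's resolution of the minimizer, which is the flavor of accuracy-versus-bit-rate trade-off such analyses quantify.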

no code implementations • 13 Jul 2023 • Arman Adibi, Aritra Mitra, Hamed Hassani

Motivated by this gap, we examine the performance of standard min-max optimization algorithms with delayed gradient updates.

no code implementations • 14 May 2023 • Nicolò Dal Fabbro, Aritra Mitra, George J. Pappas

Federated learning (FL) has recently gained much attention due to its effectiveness in speeding up supervised learning tasks under communication and privacy constraints.

no code implementations • 4 Feb 2023 • Han Wang, Aritra Mitra, Hamed Hassani, George J. Pappas, James Anderson

We initiate the study of federated reinforcement learning under environmental heterogeneity by considering a policy evaluation problem.

no code implementations • 3 Jan 2023 • Aritra Mitra, George J. Pappas, Hamed Hassani

These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays.
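One standard example of the robustness mentioned above is top-$k$ sparsification with error feedback, where dropped gradient coordinates are remembered and re-injected later. The sketch below, on a simple quadratic, is illustrative and not tied to any particular paper's scheme.

```python
def topk_sparsify(vec, k):
    """Keep only the k largest-magnitude coordinates; zero out the rest."""
    keep = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def ef_sgd(target, steps=500, lr=0.1, k=1):
    """Gradient descent on sum_i (x_i - target_i)^2 where each update is
    top-k sparsified, with error feedback accumulating what was dropped."""
    x = [0.0] * len(target)
    err = [0.0] * len(target)  # memory of coordinates dropped so far
    for _ in range(steps):
        grad = [2 * (xi - ti) for xi, ti in zip(x, target)]
        acc = [e + lr * g for e, g in zip(err, grad)]
        sent = topk_sparsify(acc, k)             # only k coordinates transmitted
        err = [a - s for a, s in zip(acc, sent)]  # remember the remainder
        x = [xi - s for xi, s in zip(x, sent)]
    return x
```

Even though only one coordinate is communicated per step, the error memory ensures every coordinate eventually converges, illustrating why SGD tolerates such structured perturbations.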

no code implementations • 6 Jun 2022 • Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani

We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret.

no code implementations • 25 May 2022 • Mohammad Pirani, Aritra Mitra, Shreyas Sundaram

As networked control systems grow in scale and the interactions between subsystems become more sophisticated, questions about the resilience of such networks become increasingly important.


no code implementations • 7 Apr 2022 • Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani

Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning.

no code implementations • 2 Mar 2022 • Aritra Mitra, Hamed Hassani, George J. Pappas

Specifically, in our setup, an agent interacting with an environment transmits encoded estimates of an unknown model parameter to a server over a communication channel of finite capacity.

no code implementations • 13 Sep 2021 • Aritra Mitra, Hamed Hassani, George Pappas

We study a federated variant of the best-arm identification problem in stochastic multi-armed bandits: a set of clients, each of whom can sample only a subset of the arms, collaborate via a server to identify the best arm (i.e., the arm with the highest mean reward) with prescribed confidence.
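A minimal sketch of the setup above, under simplifying assumptions (uniform exploration, Gaussian rewards, a fixed sampling budget): each client pulls arms only from its own subset, and the server pools the sufficient statistics to pick the empirically best arm. This is not the paper's algorithm, which comes with confidence guarantees.

```python
import random

def federated_best_arm(arm_means, client_arms, rounds=2000, seed=1):
    """Each client samples uniformly within its playable subset; the server
    aggregates counts and reward sums to estimate every arm's mean."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    sums = [0.0] * len(arm_means)
    for _ in range(rounds):
        for arms in client_arms:              # each client's playable subset
            a = rng.choice(arms)
            reward = arm_means[a] + rng.gauss(0, 1)
            counts[a] += 1                    # server pools sufficient stats
            sums[a] += reward
    means = [s / c if c else float("-inf") for s, c in zip(sums, counts)]
    return max(range(len(arm_means)), key=lambda a: means[a])
```

Note that no single client needs access to all arms; the arm identified as best may be one that some clients can never pull themselves.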

no code implementations • NeurIPS 2021 • Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani

We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model.

no code implementations • 6 Jan 2021 • Yanwen Mao, Aritra Mitra, Shreyas Sundaram, Paulo Tabuada

To better understand this, we show that when the $\mathbf{A}$ matrix of the linear system has unitary geometric multiplicity, the gap disappears, i.e., eigenvalue observability coincides with sparse observability, and there exists a polynomial-time algorithm to reconstruct the state provided the state can be reconstructed.

no code implementations • 21 Nov 2020 • Lintao Ye, Aritra Mitra, Shreyas Sundaram

We then show that the data source selection problem can be transformed into an instance of the submodular set covering problem studied in the literature, and provide a standard greedy algorithm to solve the data source selection problem with provable performance guarantees.
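The standard greedy algorithm for weighted set cover referred to above can be sketched as follows: repeatedly pick the set with the best ratio of newly covered elements to cost. The data-source selection problem in the paper reduces to an instance of this template; the function and variable names here are illustrative.

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy weighted set cover: at each step, choose the set maximizing
    (newly covered elements) / cost. This is the classic rule behind the
    logarithmic-factor approximation guarantee for submodular set covering."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(
            (i for i in range(len(sets)) if uncovered & set(sets[i])),
            key=lambda i: len(uncovered & set(sets[i])) / costs[i],
            default=None,
        )
        if best is None:       # remaining elements cannot be covered
            break
        chosen.append(best)
        uncovered -= set(sets[best])
    return chosen
```

For example, with universe {1, 2, 3, 4}, sets [{1, 2}, {3, 4}, {1, 2, 3, 4}] and costs [1, 1, 3], the greedy rule picks the two cheap sets rather than the one expensive set.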

no code implementations • 2 Apr 2020 • Shreyas Sundaram, Aritra Mitra

We consider the problem of distributed hypothesis testing (or social learning) where a network of agents seeks to identify the true state of the world from a finite set of hypotheses, based on a series of stochastic signals that each agent receives.
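A common update rule in this literature, sketched below for illustration, is log-linear non-Bayesian learning: each agent performs a local Bayesian update on its private signal, then geometrically averages its neighbors' beliefs. This is a generic instance of such rules, not necessarily the one analyzed in the paper.

```python
import math

def social_learning_step(beliefs, likelihoods, signals, neighbors):
    """One round of log-linear social learning. beliefs[i][h] is agent i's
    belief in hypothesis h; likelihoods[i][h][s] is P(signal s | h) for
    agent i; neighbors[i] lists agent i's neighbors (including itself)."""
    n, num_hyp = len(beliefs), len(beliefs[0])
    # Step 1: local Bayesian update on each agent's private signal.
    bayes = []
    for i in range(n):
        b = [beliefs[i][h] * likelihoods[i][h][signals[i]] for h in range(num_hyp)]
        z = sum(b)
        bayes.append([x / z for x in b])
    # Step 2: geometric averaging over the communication graph.
    new_beliefs = []
    for i in range(n):
        logb = [sum(math.log(bayes[j][h]) for j in neighbors[i]) / len(neighbors[i])
                for h in range(num_hyp)]
        e = [math.exp(v) for v in logb]
        z = sum(e)
        new_beliefs.append([x / z for x in e])
    return new_beliefs
```

Averaging in the log domain lets an agent whose own signals cannot distinguish two hypotheses still rule one out by leaning on an informative neighbor.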

no code implementations • 2 Apr 2020 • Aritra Mitra, John A. Richards, Saurabh Bagchi, Shreyas Sundaram

We prove that our rule guarantees convergence to the true state exponentially fast almost surely despite sparse communication, and that it has the potential to significantly reduce information flow from uninformative agents to informative agents.

no code implementations • 4 Sep 2019 • Aritra Mitra, John A. Richards, Shreyas Sundaram

We introduce a simple time-triggered protocol to achieve communication-efficient non-Bayesian learning over a network.

no code implementations • 5 Jul 2019 • Aritra Mitra, John A. Richards, Shreyas Sundaram

We study a setting where a group of agents, each receiving partially informative private signals, seek to collaboratively learn the true underlying state of the world (from a finite set of hypotheses) that generates their joint observation profiles.

no code implementations • 14 Mar 2019 • Aritra Mitra, John A. Richards, Shreyas Sundaram

Under minimal requirements on the signal structures of the agents and the underlying communication graph, we establish consistency of the proposed belief update rule, i.e., we show that the actual beliefs of the agents asymptotically concentrate on the true state almost surely.

