no code implementations • 25 Jul 2022 • Martin Figura, Yixuan Lin, Ji Liu, Vijay Gupta
In decentralized cooperative multi-agent reinforcement learning, agents can aggregate information from one another to learn policies that maximize a team-average objective function.
Multi-agent Reinforcement Learning • Reinforcement Learning (RL)
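A minimal sketch of the idea this abstract describes — agents aggregate information from neighbors, then take local steps toward a team-average objective. The two-agent toy problem, mixing matrix `W`, and step size are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Toy problem (not from the paper): each agent knows only its own
# quadratic objective f_i(theta) = -(theta - c_i)**2; the team-average
# objective is maximized at the midpoint of c_1 and c_2, i.e. 3.0.
c = np.array([1.0, 5.0])
theta = np.array([0.0, 10.0])         # per-agent parameter copies
W = np.array([[0.5, 0.5],             # doubly stochastic mixing weights
              [0.5, 0.5]])

for _ in range(300):
    grads = -2.0 * (theta - c)        # each agent's LOCAL gradient only
    theta = W @ theta + 0.05 * grads  # aggregate neighbors, then ascend

print(theta)  # both entries close to 3.0 (a constant step size
              # leaves a small residual gap between the agents)
```

Neither agent can locate 3.0 from its own objective alone; the consensus step is what steers both copies toward the team-average maximizer.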
no code implementations • 26 Mar 2022 • Jingxuan Zhu, Yixuan Lin, Alvaro Velasquez, Ji Liu
This paper considers a resilient high-dimensional constrained consensus problem and studies a resilient distributed algorithm for complete graphs.
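One standard way (not necessarily the paper's exact rule) to get resilient constrained consensus on a complete graph is a coordinate-wise trimmed mean followed by projection onto the constraint set. The 2-D box constraint, agent count, and adversarial value below are all hypothetical:

```python
import numpy as np

def trimmed_mean_step(own, received, H):
    """Resilient update on a complete graph: per coordinate, discard the
    H largest and H smallest of the pooled values before averaging, so up
    to H arbitrary (faulty) values cannot drag the estimate away."""
    vals = np.sort(np.vstack([received, own[None, :]]), axis=0)
    kept = vals[H:vals.shape[0] - H]       # drop extremes coordinate-wise
    return kept.mean(axis=0)

def project_box(x, lo, hi):
    """Projection onto a box — a stand-in for general constraint sets."""
    return np.clip(x, lo, hi)

# Four honest agents in 2-D, one adversary broadcasting extreme values.
honest = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0]])
byz = np.array([100.0, -100.0])
for _ in range(5):
    new = []
    for i in range(4):
        others = np.vstack([honest[np.arange(4) != i], byz[None, :]])
        new.append(project_box(trimmed_mean_step(honest[i], others, 1), 0.0, 2.0))
    honest = np.array(new)

print(honest)  # honest agents agree on a point inside the box [0, 2]^2
```

Because the graph is complete, every honest agent pools the same multiset of values, so agreement is reached quickly; the adversary's extremes are always among the trimmed coordinates.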
no code implementations • NeurIPS 2021 • Yixuan Lin, Vijay Gupta, Ji Liu
The convergence of consensus-based stochastic approximation algorithms is well understood when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation); much less is known when the interconnection matrix is merely stochastic.
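A quick numerical illustration (my own, not from the paper) of why this distinction matters: with a merely row-stochastic `W`, repeated mixing still reaches consensus, but on a weighted average determined by `W`'s left Perron eigenvector rather than the uniform team average that a doubly stochastic matrix would give:

```python
import numpy as np

# Row-stochastic but NOT doubly stochastic (columns don't sum to 1).
W = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

x = np.array([0.0, 3.0, 6.0])   # agents' initial values
z = x.copy()
for _ in range(500):
    z = W @ z                   # consensus iteration

# The left Perron eigenvector pi (pi @ W = pi, entries summing to 1)
# determines the limit.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(z, pi @ x)  # consensus value equals pi @ x, not x.mean()
```

Here the limit is a non-uniform weighted average of the initial values, which is exactly the complication the stochastic (as opposed to doubly stochastic) case introduces.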
1 code implementation • 12 Nov 2021 • Martin Figura, Yixuan Lin, Ji Liu, Vijay Gupta
We show that in the presence of Byzantine agents, whose estimation and communication strategies are completely arbitrary, the estimates of the cooperative agents converge to a bounded consensus value with probability one, provided that there are at most $H$ Byzantine agents in the neighborhood of each cooperative agent and the network is $(2H+1)$-robust.
Multi-agent Reinforcement Learning • Reinforcement Learning • +1
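The "$H$ Byzantine neighbors / $(2H+1)$-robust network" condition in the abstract is the one used by W-MSR-style resilient consensus rules. A scalar toy sketch of that filtering step (the paper's actual update may differ; the topology, values, and Byzantine strategy here are hypothetical):

```python
def wmsr_update(own, neighbor_vals, H):
    """W-MSR-style step: ignore up to H neighbor values above one's own
    and up to H below, then average what remains with one's own value.
    With at most H Byzantine neighbors, every extreme malicious value is
    either discarded or dominated by an honest value."""
    above = sorted((v for v in neighbor_vals if v > own), reverse=True)[H:]
    below = sorted(v for v in neighbor_vals if v < own)[H:]
    equal = [v for v in neighbor_vals if v == own]
    kept = above + below + equal + [own]
    return sum(kept) / len(kept)

# Five cooperative agents on a complete graph plus one Byzantine agent
# that always broadcasts 1000.0; H = 1 suffices here.
coop = [0.0, 1.0, 2.0, 3.0, 4.0]
for _ in range(100):
    new = []
    for i, own in enumerate(coop):
        nbrs = [v for j, v in enumerate(coop) if j != i] + [1000.0]
        new.append(wmsr_update(own, nbrs, H=1))
    coop = new

print(coop)  # cooperative agents agree on a value inside [0, 4]
```

The bounded-consensus guarantee in the abstract mirrors what the toy run shows: the cooperative estimates agree, and the agreed value stays within the range of the honest initial values despite the arbitrary Byzantine input.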
no code implementations • 6 Jul 2019 • Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Başar, Romeil Sandhu, Ji Liu
This paper considers a distributed reinforcement learning problem in which multiple agents in a network aim to cooperatively maximize the globally averaged return through communication with only local neighbors.
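The communication primitive underlying "globally averaged return through local communication" can be illustrated by gossip averaging: each agent sees only its own return, yet neighbor-only mixing with doubly stochastic weights drives every estimate to the global average. The ring topology and weights below are illustrative assumptions:

```python
import numpy as np

# Ring of 4 agents; each talks only to its two neighbors.
# Doubly stochastic mixing weights (hypothetical choice).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

local_returns = np.array([1.0, 4.0, 2.0, 5.0])  # each agent's own return
est = local_returns.copy()
for _ in range(200):
    est = W @ est      # purely local neighbor communication

print(est)  # every agent's estimate approaches the global average 3.0
```

No agent ever exchanges information with a non-neighbor, yet all four estimates converge to the team average — the quantity each agent needs in order to optimize the globally averaged return.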