no code implementations • NeurIPS 2023 • Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma
In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications.
no code implementations • 5 Mar 2023 • Zhuqing Liu, Xin Zhang, Songtao Lu, Jia Liu
Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning.
no code implementations • 5 Dec 2022 • Peiwen Qiu, Yining Li, Zhuqing Liu, Prashant Khanduri, Jia Liu, Ness B. Shroff, Elizabeth Serena Bentley, Kurt Turck
Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks.
no code implementations • 2 Oct 2022 • Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu
To lower the communication complexity of federated min-max learning, a natural approach is to use infrequent communication (through multiple local updates), as in conventional federated learning.
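The idea of infrequent communication described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: each client runs several local gradient descent-ascent steps on a toy quadratic min-max objective before the server averages the iterates once per round. All function names and the toy objective are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): federated min-max learning
# with infrequent communication. Each client i holds a toy objective
# f_i(x, y) = 0.5*a_i*x^2 - 0.5*b_i*y^2 + c_i*x*y and runs K local
# gradient descent (in x) / ascent (in y) steps between communications.

def local_gda(x, y, a, b, c, K, lr):
    """K local gradient descent-ascent steps on one client's objective."""
    for _ in range(K):
        gx = a * x + c * y      # df/dx
        gy = -b * y + c * x     # df/dy
        x -= lr * gx            # descent on the min variable x
        y += lr * gy            # ascent on the max variable y
    return x, y

def federated_minmax(a, b, c, rounds=200, K=5, lr=0.05):
    """Server loop: broadcast, K local steps per client, then average."""
    x, y = 1.0, 1.0
    n = len(a)
    for _ in range(rounds):
        # each client starts from the shared iterate and updates locally
        locals_ = [local_gda(x, y, a[i], b[i], c[i], K, lr)
                   for i in range(n)]
        # infrequent communication: average only once per round
        x = sum(p[0] for p in locals_) / n
        y = sum(p[1] for p in locals_) / n
    return x, y

a, b, c = [1.0, 2.0, 1.5], [1.0, 1.5, 2.0], [0.2, 0.1, 0.3]
x, y = federated_minmax(a, b, c)
# the saddle point of the averaged toy objective is (0, 0)
```

Communicating once per K local steps cuts the number of communication rounds by roughly a factor of K compared with averaging after every gradient step, which is the motivation for the approach.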
no code implementations • 17 Aug 2022 • Zhuqing Liu, Xin Zhang, Jia Liu
To increase the training speed of distributed learning, recent years have witnessed a significant amount of interest in developing both synchronous and asynchronous distributed stochastic variance-reduced optimization methods.
no code implementations • 17 Aug 2022 • Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu
Moreover, whether linear speedup in convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.
no code implementations • 27 Jul 2022 • Zhuqing Liu, Xin Zhang, Prashant Khanduri, Songtao Lu, Jia Liu
Our main contributions in this paper are two-fold: i) We first propose a deterministic algorithm called INTERACT (inner-gradient-descent-outer-tracked-gradient) that requires the sample complexity of $\mathcal{O}(n \epsilon^{-1})$ and communication complexity of $\mathcal{O}(\epsilon^{-1})$ to solve the bilevel optimization problem, where $n$ and $\epsilon > 0$ are the number of samples at each agent and the desired stationarity gap, respectively.
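The inner/outer structure named in the algorithm (inner gradient descent on the lower-level problem, then an outer hypergradient step) can be illustrated with a single-agent sketch. This is not the decentralized INTERACT algorithm itself; the toy objectives, the closed-form implicit derivative, and all step sizes are illustrative assumptions chosen so the solution is checkable by hand.

```python
# Illustrative single-agent sketch of the inner/outer bilevel structure
# (not INTERACT). Lower level: y*(x) = argmin_y 0.5*(y - 2x)^2 = 2x.
# Upper level: F(x) = f(x, y*(x)) with f(x, y) = 0.5*(y - 1)^2 + 0.5*x^2,
# so F(x) = 0.5*(2x - 1)^2 + 0.5*x^2, minimized at x = 2/5.

def inner_gd(x, y, steps=50, lr=0.2):
    """Inner gradient descent on the lower-level objective."""
    for _ in range(steps):
        y -= lr * (y - 2.0 * x)          # dg/dy
    return y

def outer_step(x, y, lr=0.05):
    """One outer update using the (here, closed-form) hypergradient."""
    dy_dx = 2.0                           # implicit derivative of y*(x)
    grad = x + (y - 1.0) * dy_dx          # dF/dx = df/dx + df/dy * dy*/dx
    return x - lr * grad

x, y = 0.0, 0.0
for _ in range(300):
    y = inner_gd(x, y)    # approximately solve the lower-level problem
    x = outer_step(x, y)  # outer gradient step on the upper-level variable
# x converges to the minimizer 2/5 = 0.4
```

In the deterministic setting each outer iteration evaluates full gradients, which mirrors the $\mathcal{O}(n\epsilon^{-1})$ sample complexity: every outer step touches all $n$ samples, and $\mathcal{O}(\epsilon^{-1})$ outer steps reach an $\epsilon$-stationary point.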
no code implementations • 11 Jul 2022 • Luning Bi, Yunlong Wang, Fan Zhang, Zhuqing Liu, Yong Cai, Emily Zhao
In the past decade, with the development of big data technology, an increasing amount of patient information has been stored as electronic health records (EHRs).
no code implementations • NeurIPS 2021 • Xin Zhang, Zhuqing Liu, Jia Liu, Zhengyuan Zhu, Songtao Lu
To our knowledge, this paper is the first work that achieves both $\mathcal{O}(\epsilon^{-2})$ sample complexity and $\mathcal{O}(\epsilon^{-2})$ communication complexity in decentralized policy evaluation for cooperative MARL.
no code implementations • 11 Sep 2018 • Zhuqing Liu, Liyuanjun Lai, Lin Zhang
A simulation workflow is a top-level model for the design and control of a simulation process.