Search Results for author: Zhuqing Liu

Found 10 papers, 0 papers with code

An empirical learning-based validation procedure for simulation workflow

no code implementations · 11 Sep 2018 · Zhuqing Liu, Liyuanjun Lai, Lin Zhang

A simulation workflow is a top-level model for the design and control of a simulation process.

Taming Communication and Sample Complexities in Decentralized Policy Evaluation for Cooperative Multi-Agent Reinforcement Learning

no code implementations · NeurIPS 2021 · Xin Zhang, Zhuqing Liu, Jia Liu, Zhengyuan Zhu, Songtao Lu

To our knowledge, this paper is the first work that achieves both $\mathcal{O}(\epsilon^{-2})$ sample complexity and $\mathcal{O}(\epsilon^{-2})$ communication complexity in decentralized policy evaluation for cooperative MARL.

Multi-agent Reinforcement Learning · Reinforcement Learning (RL) · +1
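
The paper is listed without code, so the following is only a minimal sketch of the general setting it studies: per-agent TD(0) policy evaluation with linear value approximation, followed by parameter averaging with network neighbors. The ring topology, random features, and step sizes are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Minimal sketch of decentralized policy evaluation: each agent runs a
# local TD(0) update with linear value approximation, then averages its
# parameters with its ring neighbors. Illustrative only -- topology,
# features, and step sizes are assumptions, not the paper's algorithm.
N, d, gamma, alpha = 4, 8, 0.95, 0.05
rng = np.random.default_rng(0)
w = rng.normal(size=(N, d))                     # per-agent parameters

def mix(w):
    """One consensus step: average with the two ring neighbors."""
    return (np.roll(w, 1, axis=0) + w + np.roll(w, -1, axis=0)) / 3.0

for t in range(200):
    for i in range(N):
        phi = rng.normal(size=d)                # current state features
        phi_next = rng.normal(size=d)           # next state features
        r = rng.normal()                        # local reward sample
        delta = r + gamma * phi_next @ w[i] - phi @ w[i]   # TD error
        w[i] = w[i] + alpha * delta * phi       # local TD(0) update
    w = mix(w)                                  # communication round
```

Each `mix` call is one communication round; balancing how often it runs against how fast the TD errors shrink is exactly the communication/sample trade-off the paper analyzes.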

FD-GATDR: A Federated-Decentralized-Learning Graph Attention Network for Doctor Recommendation Using EHR

no code implementations · 11 Jul 2022 · Luning Bi, Yunlong Wang, Fan Zhang, Zhuqing Liu, Yong Cai, Emily Zhao

In the past decade, with the development of big data technology, an increasing amount of patient information has been stored as electronic health records (EHRs).

Graph Attention · Recommendation Systems

INTERACT: Achieving Low Sample and Communication Complexities in Decentralized Bilevel Learning over Networks

no code implementations · 27 Jul 2022 · Zhuqing Liu, Xin Zhang, Prashant Khanduri, Songtao Lu, Jia Liu

Our main contributions in this paper are two-fold: i) We first propose a deterministic algorithm called INTERACT (inner-gradient-descent-outer-tracked-gradient) that requires a sample complexity of $\mathcal{O}(n \epsilon^{-1})$ and a communication complexity of $\mathcal{O}(\epsilon^{-1})$ to solve the bilevel optimization problem, where $n$ and $\epsilon > 0$ are the number of samples at each agent and the desired stationarity gap, respectively (a toy sketch of the inner-outer structure follows below).

Bilevel Optimization · Meta-Learning · +1
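
To make the inner-gradient-descent/outer-step structure concrete, here is a single-agent toy sketch on a quadratic bilevel problem. The decentralized gradient tracking that gives INTERACT its complexity guarantees is omitted, and the problem data and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy single-agent sketch of the inner-GD / outer-step structure of
# bilevel optimization (not INTERACT itself):
#   inner: y*(x) = argmin_y 0.5 * ||y - A @ x||^2   =>  y*(x) = A @ x
#   outer: min_x 0.5 * ||y*(x) - b||^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
x, y = np.zeros(3), np.zeros(5)
eta_in, eta_out = 0.5, 0.1

for t in range(100):
    for _ in range(10):                    # inner gradient descent
        y = y - eta_in * (y - A @ x)
    # Outer update with the hypergradient; dy*/dx = A for this problem.
    x = x - eta_out * A.T @ (y - b)
```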

SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters

no code implementations · 17 Aug 2022 · Zhuqing Liu, Xin Zhang, Jia Liu

To increase the training speed of distributed learning, recent years have seen significant interest in developing both synchronous and asynchronous distributed stochastic variance-reduced optimization methods.
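
As background for the "path-integrated" part of the title, here is a single-machine sketch of a SARAH/SPIDER-style recursive variance-reduced estimator on least squares. It shows the estimator family such methods build on, not the semi-asynchronous distributed algorithm itself; sizes and step sizes are assumptions.

```python
import numpy as np

# Path-integrated (SARAH/SPIDER-style) variance-reduced gradient
# estimator on least squares -- single-machine illustration only.
rng = np.random.default_rng(2)
n, d = 256, 10
X, yv = rng.normal(size=(n, d)), rng.normal(size=n)
w = np.zeros(d)
eta, q, batch = 0.1, 16, 8

def grad(idx, w):
    Xi, yi = X[idx], yv[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

v = grad(np.arange(n), w)                        # full gradient anchor
for t in range(200):
    w_new = w - eta * v
    if (t + 1) % q == 0:
        v = grad(np.arange(n), w_new)            # periodic full refresh
    else:
        idx = rng.choice(n, size=batch, replace=False)
        v = grad(idx, w_new) - grad(idx, w) + v  # path-integrated update
    w = w_new
```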

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data

no code implementations · 17 Aug 2022 · Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu

Moreover, whether a linear convergence speedup is achievable under fully decentralized FL with data heterogeneity remains an open question.

Federated Learning · Open-Ended Question Answering
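
For readers unfamiliar with the setting, the sketch below shows what "fully decentralized FL with heterogeneous data" means: each client takes several local steps on its own data, then mixes parameters with neighbors through a doubly stochastic matrix. It illustrates the setting only; NET-FLEET's recursive gradient-correction updates are not reproduced, and all problem data are assumptions.

```python
import numpy as np

# Fully decentralized FL round: K local gradient steps per client on
# heterogeneous local data, then gossip averaging with a doubly
# stochastic mixing matrix W (ring topology assumed for illustration).
rng = np.random.default_rng(3)
N, d, K, lr = 5, 6, 3, 0.1
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25
targets = rng.normal(size=(N, d))        # heterogeneous local optima
theta = np.zeros((N, d))

for rnd in range(50):
    for i in range(N):
        for _ in range(K):               # local updates per client
            theta[i] -= lr * (theta[i] - targets[i])
    theta = W @ theta                    # one gossip mixing round
```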

SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning

no code implementations · 2 Oct 2022 · Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu

To lower the communication complexity of federated min-max learning, a natural approach is to use infrequent communication (through multiple local updates), as in conventional federated learning; a minimal sketch of this local-update idea follows below.

Federated Learning
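
The sketch applies the local-update idea to min-max learning: each client runs K local gradient-descent-ascent (GDA) steps, then the server averages. It shows only the infrequent-communication mechanism, not SAGDA's stochastic-averaging estimators; the per-client objectives are illustrative assumptions.

```python
import numpy as np

# Federated min-max with infrequent communication: K local GDA steps
# per client on f_i(x, y) = 0.5*||x - a_i||^2 - 0.5*||y - b_i||^2,
# then server averaging. Illustrative sketch, not SAGDA itself.
rng = np.random.default_rng(4)
M, d, K, lr = 4, 3, 5, 0.1
a, b = rng.normal(size=(M, d)), rng.normal(size=(M, d))
x, y = np.zeros(d), np.zeros(d)

for rnd in range(40):
    xs, ys = np.tile(x, (M, 1)), np.tile(y, (M, 1))
    for i in range(M):
        for _ in range(K):
            xs[i] -= lr * (xs[i] - a[i])        # descent in x
            ys[i] += lr * (b[i] - ys[i])        # ascent in y
    x, y = xs.mean(axis=0), ys.mean(axis=0)     # server averaging
```

Raising K cuts communication rounds per gradient step, which is exactly the lever behind the $\mathcal{O}(\epsilon^{-2})$ communication complexity the title refers to.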

DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization

no code implementations · 5 Dec 2022 · Peiwen Qiu, Yining Li, Zhuqing Liu, Prashant Khanduri, Jia Liu, Ness B. Shroff, Elizabeth Serena Bentley, Kurt Turck

Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks.

Bilevel Optimization · Meta-Learning · +1

PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities

no code implementations · 5 Mar 2023 · Zhuqing Liu, Xin Zhang, Songtao Lu, Jia Liu

Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning.

Fairness · Multi-agent Reinforcement Learning
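
Domain constraints in min-max learning are commonly enforced by projecting each iterate back onto the feasible set. Below is a minimal single-agent projected-GDA sketch with box constraints on a strongly-convex-strongly-concave toy objective; it is background for the constrained setting, not PRECISION's decentralized algorithm.

```python
import numpy as np

# Projected gradient descent-ascent on
#   f(x, y) = 0.5*||x||^2 + x @ y - 0.5*||y||^2
# with box constraints (illustrative assumptions throughout).
def proj_box(v, lo=-1.0, hi=1.0):
    return np.clip(v, lo, hi)

rng = np.random.default_rng(5)
x, y = rng.normal(size=3), rng.normal(size=3)
lr = 0.05
for t in range(300):
    gx, gy = x + y, x - y                  # grad_x f and grad_y f
    x = proj_box(x - lr * gx)              # projected descent step
    y = proj_box(y + lr * gy)              # projected ascent step
```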

Federated Multi-Objective Learning

no code implementations · NeurIPS 2023 · Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma

In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications.

Federated Learning · Multi-Task Learning
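
A common centralized MOO primitive is an MGDA-style step: move along the minimum-norm convex combination of the per-objective gradients, which has a closed form for two objectives. The sketch below is illustrative background only; the paper's federated MOO algorithms are not reproduced here, and the two quadratic objectives are assumptions.

```python
import numpy as np

# MGDA-style step for two objectives f_k(x) = 0.5*||x - a_k||^2:
# step along the min-norm convex combination of the two gradients,
# which decreases both objectives when neither gradient dominates.
rng = np.random.default_rng(6)
a1, a2 = rng.normal(size=4), rng.normal(size=4)
x, lr = np.zeros(4), 0.2

for t in range(100):
    g1, g2 = x - a1, x - a2                # per-objective gradients
    denom = np.sum((g1 - g2) ** 2) + 1e-12
    lam = np.clip(np.sum((g2 - g1) * g2) / denom, 0.0, 1.0)
    d = lam * g1 + (1.0 - lam) * g2        # min-norm common direction
    x -= lr * d
```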
