1 code implementation • 1 Apr 2024 • Xiaoze Liu, Feijie Wu, Tianyang Xu, Zhuo Chen, Yichi Zhang, Xiaoqian Wang, Jing Gao
In this paper, we propose GraphEval to evaluate an LLM's performance using a substantially large test dataset.
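A minimal sketch of what such large-scale evaluation amounts to: score a model's true/false verdicts over a labeled test set. The `query_llm` stub below is hypothetical, not GraphEval's actual interface.

```python
# Minimal sketch of large-scale true/false evaluation of an LLM.
# `query_llm` is a hypothetical stand-in; a real run would call a model API.
def query_llm(statement: str) -> bool:
    """Hypothetical LLM call: returns the model's true/false verdict."""
    return len(statement) % 2 == 0  # placeholder logic for the sketch

def evaluate(test_set: list[tuple[str, bool]]) -> float:
    """Accuracy of the model's verdicts over a (large) labeled test set."""
    correct = sum(query_llm(s) == label for s, label in test_set)
    return correct / len(test_set)

if __name__ == "__main__":
    toy_set = [("Paris is the capital of France.", True),
               ("The Moon is larger than Earth.", False)]
    print(f"accuracy = {evaluate(toy_set):.2f}")
```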
no code implementations • 28 Sep 2023 • Tianci Liu, Haoyu Wang, Feijie Wu, Hengtong Zhang, Pan Li, Lu Su, Jing Gao
Fair machine learning seeks to mitigate model prediction bias against certain demographic subgroups, such as the elderly and women.
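As a concrete, hedged illustration of the kind of bias being measured, the sketch below computes the demographic parity gap, one common group-fairness criterion; the paper's own criterion may differ.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two subgroups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1])   # binary model predictions
group  = np.array([0, 0, 0, 1, 1, 1])   # subgroup membership, e.g. by age or sex
print(demographic_parity_gap(y_pred, group))  # 0.0 would mean parity
```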
no code implementations • 3 Dec 2022 • Shiqi He, Qifan Yan, Feijie Wu, Lanjun Wang, Mathias Lécuyer, Ivan Beschastnikh
Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy.
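A minimal FedAvg-style sketch of the FL pattern described here, assuming a toy least-squares objective: raw data never leaves a client, and the server only averages locally trained weights. This illustrates FL in general, not this paper's specific method.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One local SGD step on a least-squares loss (toy stand-in for training)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_w, client_data):
    """Each client trains on its own data; the server averages the results."""
    updates = [local_update(global_w.copy(), d) for d in client_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)   # only weights, never data, are exchanged
```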
1 code implementation • 13 Jun 2022 • Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
In partial client participation, the absence of updates from inactive clients makes the model aggregation more likely to deviate from the aggregation that full client participation would produce.
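The deviation in question can be shown in a few lines: the mean update over a sampled subset of clients generally differs from the full-participation mean. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
client_updates = rng.normal(size=(100, 5))        # 100 clients, 5-dim updates

full = client_updates.mean(axis=0)                # full participation
subset = rng.choice(100, size=10, replace=False)  # 10% partial participation
partial = client_updates[subset].mean(axis=0)

print("aggregation deviation:", np.linalg.norm(partial - full))
```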
no code implementations • 14 Apr 2022 • Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang
Traditional one-bit compressed stochastic gradient descent cannot be directly employed in multi-hop all-reduce, a widely adopted distributed training paradigm in network-intensive high-performance computing systems such as public clouds.
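A sketch of why this fails, assuming the classic sign-plus-scale compressor: sign compression does not commute with addition, so compressing at every hop of an all-reduce is not equivalent to compressing the final sum.

```python
import numpy as np

def one_bit(v):
    """Classic 1-bit compression: sign vector scaled by the mean magnitude."""
    return np.sign(v) * np.abs(v).mean()

rng = np.random.default_rng(1)
g1, g2, g3 = rng.normal(size=(3, 4))     # gradients arriving over three hops

hop_by_hop = one_bit(one_bit(one_bit(g1) + g2) + g3)  # compress at each hop
once = one_bit(g1 + g2 + g3)                          # compress the final sum

print(hop_by_hop)
print(once)   # generally different: per-hop compression compounds the error
```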
1 code implementation • 17 Dec 2021 • Feijie Wu, Song Guo, Haozhao Wang, Zhihao Qu, Haobo Zhang, Jie Zhang, Ziming Liu
In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism arises when participants fully utilize their heterogeneous computational resources and thus complete different numbers of local training steps between aggregations.
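A toy sketch of step asynchronism under the stated assumption that step counts track device speed: faster clients take more local steps, so a naive average of the resulting updates is skewed toward them.

```python
import numpy as np

def local_train(w, steps, lr=0.1):
    """Toy local training: each step pulls w toward a client's optimum."""
    target = np.ones_like(w)             # stand-in for the client's optimum
    for _ in range(steps):
        w = w - lr * (w - target)
    return w

global_w = np.zeros(3)
local_steps = [1, 5, 50]                 # heterogeneous device speeds
updates = [local_train(global_w.copy(), s) for s in local_steps]
print(np.mean(updates, axis=0))          # reflects the fast client far more
```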
1 code implementation • NeurIPS 2021 • Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu
To deal with such model constraints, we exploit the potential of heterogeneous model settings and propose a novel training framework that employs personalized models for different clients.
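A hedged sketch of one common way clients with different architectures can still share knowledge: since weight averaging is impossible across heterogeneous models, they exchange soft predictions on a shared reference set, a distillation-style pattern. This is a generic illustration, not the paper's exact training framework.

```python
import numpy as np

def soft_predictions(model, ref_inputs):
    """Per-client outputs on shared reference inputs; `model` is any callable."""
    return np.stack([model(x) for x in ref_inputs])

def consensus(logits_per_client):
    """Aggregate heterogeneous clients by averaging their soft predictions."""
    return np.mean(logits_per_client, axis=0)

ref = [np.array([0.5, -1.0]), np.array([2.0, 0.1])]
small_model = lambda x: x * 0.5          # stand-ins for different architectures
big_model   = lambda x: np.tanh(x) * 2.0
target = consensus([soft_predictions(small_model, ref),
                    soft_predictions(big_model, ref)])
# Each client would then distill `target` into its own model locally.
```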