Search Results for author: Feijie Wu

Found 7 papers, 4 papers with code

Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs

1 code implementation • 1 Apr 2024 • Xiaoze Liu, Feijie Wu, Tianyang Xu, Zhuo Chen, Yichi Zhang, Xiaoqian Wang, Jing Gao

In this paper, we propose GraphEval to evaluate an LLM's factuality using a substantially large test dataset built from a large-scale knowledge graph; a minimal probing sketch follows below.

Knowledge Graphs
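
To make the idea concrete, here is a minimal sketch of knowledge-graph-based factuality probing: verbalize KG triples as declarative statements and ask the model to judge them. The statement template and the `ask_llm` wrapper are hypothetical placeholders, not GraphEval's actual pipeline.

```python
# Verbalize knowledge-graph triples and ask an LLM to judge them.
# `ask_llm(prompt) -> bool` is a hypothetical model wrapper.

def triple_to_statement(head: str, relation: str, tail: str) -> str:
    """Render a KG triple as a declarative statement."""
    return f"{head} {relation.replace('_', ' ')} {tail}."

def factuality_accuracy(triples, labels, ask_llm) -> float:
    """Fraction of true/false judgments the model gets right."""
    correct = 0
    for (h, r, t), label in zip(triples, labels):
        prompt = "Answer true or false: " + triple_to_statement(h, r, t)
        correct += int(ask_llm(prompt) == label)
    return correct / len(triples)
```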

Towards Poisoning Fair Representations

no code implementations • 28 Sep 2023 • Tianci Liu, Haoyu Wang, Feijie Wu, Hengtong Zhang, Pan Li, Lu Su, Jing Gao

Fair machine learning seeks to mitigate model prediction bias against certain demographic subgroups, such as the elderly and women; a sketch of one common group-fairness metric follows below.

Bilevel Optimization • Data Poisoning • +2
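
As background for the snippet above, here is a minimal sketch of one common group-fairness metric, the demographic parity gap. It illustrates the kind of group bias fair representation learning targets; it is not the paper's poisoning attack.

```python
# Demographic parity gap: difference in positive prediction rates
# between two demographic groups. Purely illustrative.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Example: predictions skewed toward group 1.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5
```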

GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning

no code implementations • 3 Dec 2022 • Shiqi He, Qifan Yan, Feijie Wu, Lanjun Wang, Mathias Lécuyer, Ivan Beschastnikh

Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy.

Federated Learning • Model Compression

Anchor Sampling for Federated Learning with Partial Client Participation

1 code implementation • 13 Jun 2022 • Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao

Under partial client participation, the missing updates from inactive clients make the model aggregation more likely to deviate from the aggregation under full client participation; the toy sketch below illustrates this gap.

Federated Learning
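
A toy illustration of the deviation the paper targets, assuming plain FedAvg-style averaging: sampling only a subset of clients makes the round's average drift from the full-participation average. Illustrative only; the paper's anchor sampling strategy is not shown.

```python
# Compare full-participation aggregation with a partial-participation round.
import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(loc=i, size=4) for i in range(10)]  # 10 clients

full_avg = np.mean(client_updates, axis=0)           # full participation
chosen = rng.choice(10, size=3, replace=False)       # only 3 clients respond
partial_avg = np.mean([client_updates[i] for i in chosen], axis=0)

print(np.linalg.norm(partial_avg - full_avg))        # aggregation deviation
```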

Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression

no code implementations • 14 Apr 2022 • Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang

Traditional one-bit compressed stochastic gradient descent cannot be directly employed in multi-hop all-reduce, a widely adopted distributed training paradigm in network-intensive high-performance computing systems such as public clouds.
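
For context, here is a minimal sketch of one-bit gradient compression with majority-vote aggregation, in the spirit of signSGD; the multi-hop all-reduce synchronization framework is the paper's contribution and is not reproduced here.

```python
# One-bit gradient compression with element-wise majority voting.
import numpy as np

def compress(grad):
    """One bit per coordinate: keep only the sign of the gradient."""
    return np.sign(grad)

def majority_vote(sign_vectors):
    """Element-wise majority over workers' sign vectors."""
    return np.sign(np.sum(sign_vectors, axis=0))

workers = [np.random.default_rng(s).normal(size=5) for s in range(3)]
direction = majority_vote([compress(g) for g in workers])
print(direction)  # update direction, applied with a small learning rate
```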

From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization

1 code implementation • 17 Dec 2021 • Feijie Wu, Song Guo, Haozhao Wang, Zhihao Qu, Haobo Zhang, Jie Zhang, Ziming Liu

In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism arises when participants fully utilize their heterogeneous computational resources and consequently run different numbers of local training steps per round.
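
A toy sketch of step asynchronism, under the assumption that a client's cumulative update scales roughly with its number of local steps. Dividing by the step count is a naive calibration shown only for intuition; it is not the paper's method.

```python
# Clients run different numbers of local steps, so their raw updates
# have different scales and skew a naive average.
import numpy as np

rng = np.random.default_rng(1)
local_steps = [1, 5, 20]            # heterogeneous per-round compute budgets

# Pretend each client's cumulative update grows with its step count.
updates = [k * rng.normal(0.1, 0.01, size=3) for k in local_steps]

naive = np.mean(updates, axis=0)    # dominated by the fastest client
calibrated = np.mean([u / k for u, k in zip(updates, local_steps)], axis=0)
print(naive, calibrated)
```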

Parameterized Knowledge Transfer for Personalized Federated Learning

1 code implementation • NeurIPS 2021 • Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu

To deal with such model constraints, we exploit the potential of heterogeneous model settings and propose a novel training framework that employs personalized models for different clients; a simplified sketch follows below.

Personalized Federated Learning • Transfer Learning
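
A simplified sketch of per-client weighted knowledge aggregation: each client's distillation target mixes all clients' soft predictions on shared data with learnable coefficients, which is one way heterogeneous model architectures can still share knowledge. Names and shapes are illustrative, not the paper's exact formulation.

```python
# Mix clients' soft predictions with per-client learnable coefficients
# to form personalized distillation targets.
import numpy as np

n_clients, n_classes = 4, 3
rng = np.random.default_rng(2)

# Soft predictions of each client's (possibly different) model on shared data.
soft_preds = rng.dirichlet(np.ones(n_classes), size=n_clients)

# Learnable per-client mixing coefficients; each row sums to 1.
coef = rng.dirichlet(np.ones(n_clients), size=n_clients)

# Personalized distillation target for each client.
targets = coef @ soft_preds
print(targets.shape)  # (4, 3): one target distribution per client
```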
