Search Results for author: Chaoyang He

Found 38 papers, 20 papers with code

LLM Multi-Agent Systems: Challenges and Open Problems

no code implementations • 5 Feb 2024 Shanshan Han, Qifan Zhang, Yuhang Yao, Weizhao Jin, Zhaozhuo Xu, Chaoyang He

This paper explores existing works of multi-agent systems and identifies challenges that remain inadequately addressed.


Kick Bad Guys Out! Zero-Knowledge-Proof-Based Anomaly Detection in Federated Learning

no code implementations • 6 Oct 2023 Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr, Chaoyang He

Federated Learning (FL) systems are vulnerable to adversarial attacks, where malicious clients submit poisoned models to prevent the global model from converging or plant backdoors to induce the global model to misclassify some samples.

Anomaly Detection Federated Learning

FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training

1 code implementation • 3 Mar 2023 Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaohuai Shi, Yonggang Zhang, Yuxin Wang, Alex Qiaozhong Liang, Salman Avestimehr, Chaoyang He

It improves the training efficiency, remarkably relaxes the requirements on the hardware, and supports efficient large-scale FL experiments with stateful clients by: (1) sequential training clients on devices; (2) decomposing original aggregation into local and global aggregation on devices and server respectively; (3) scheduling tasks to mitigate straggler problems and enhance computing utility; (4) distributed client state manager to support various FL algorithms.

Federated Learning Scheduling
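The decomposed aggregation in item (2) above, a local stage on each device followed by a global stage on the server, can be sketched with a toy numeric example. The function names and the uniform/weighted averaging choices are illustrative assumptions, not the FedML Parrot API.

```python
# Illustrative sketch (not the FedML Parrot API) of decomposing aggregation
# into a local stage on each device and a global stage on the server.

def local_aggregate(updates):
    # Average the sequentially trained clients hosted on one device.
    return sum(updates) / len(updates)

def global_aggregate(device_aggregates, device_counts):
    # Weight each device's local aggregate by how many clients it hosted,
    # so the result equals the plain average over all clients.
    total = sum(device_counts)
    return sum(a * c for a, c in zip(device_aggregates, device_counts)) / total

# Device A hosts 3 clients, device B hosts 1 client.
a = local_aggregate([1.0, 2.0, 3.0])              # 2.0
b = local_aggregate([6.0])                         # 6.0
global_update = global_aggregate([a, b], [3, 1])   # (2*3 + 6*1) / 4 = 3.0
```

Weighting by the number of hosted clients is what makes the two-stage result identical to averaging all client updates directly, which is why the decomposition does not change the learned model.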

Proof-of-Contribution-Based Design for Collaborative Machine Learning on Blockchain

no code implementations • 27 Feb 2023 Baturalp Buyukates, Chaoyang He, Shanshan Han, Zhiyong Fang, Yupeng Zhang, Jieyi Long, Ali Farahanchi, Salman Avestimehr

Our goal is to design a data marketplace for such decentralized collaborative/federated learning applications that simultaneously provides i) proof-of-contribution based reward allocation so that the trainers are compensated based on their contributions to the trained model; ii) privacy-preserving decentralized model training by avoiding any data movement from data owners; iii) robustness against malicious parties (e.g., trainers aiming to poison the model); iv) verifiability in the sense that the integrity, i.e., correctness, of all computations in the data market protocol, including contribution assessment and outlier detection, is verifiable through zero-knowledge proofs; and v) efficient and universal design.

Federated Learning Outlier Detection +1
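Item i) above, proof-of-contribution based reward allocation, can be illustrated with a minimal proportional split. This is a hedged sketch only; the paper's actual protocol additionally makes the assessment verifiable via zero-knowledge proofs, which this toy code does not attempt.

```python
# Toy proportional reward split (illustrative, not the paper's protocol):
# each trainer is paid in proportion to its assessed contribution score.

def allocate_rewards(total_reward, contributions):
    total = sum(contributions)
    return [total_reward * c / total for c in contributions]

# Three trainers with assessed contribution scores 1, 3, and 6
# split a reward pool of 100 tokens.
rewards = allocate_rewards(100.0, [1.0, 3.0, 6.0])  # [10.0, 30.0, 60.0]
```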

Federated Analytics: A survey

no code implementations • 2 Feb 2023 Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Shanshan Han, Shantanu Sharma, Chaoyang He, Sharad Mehrotra, Salman Avestimehr

Federated analytics (FA) is a privacy-preserving framework for computing data analytics over multiple remote parties (e.g., mobile devices) or silo-ed institutional entities (e.g., hospitals, banks) without sharing the data among parties.

Federated Learning Privacy Preserving

SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing

no code implementations • 10 Dec 2022 Chaoyang He, Shuai Zheng, Aston Zhang, George Karypis, Trishul Chilimbi, Mahdi Soltanolkotabi, Salman Avestimehr

Mixture-of-Experts (MoE) parallelism is a recent advancement that scales up the model size at constant computational cost.
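As a rough illustration of why MoE keeps per-token compute constant as the number of experts grows, here is a minimal top-1 routing sketch. It is not SMILE's bi-level routing; all names and the plain-list score format are hypothetical.

```python
# Minimal top-1 MoE routing sketch (illustrative, not SMILE's router):
# each token is dispatched to the single highest-scoring expert, so the
# compute per token stays constant regardless of how many experts exist.

def route_top1(gate_scores):
    """gate_scores: per-token lists of per-expert scores -> expert index per token."""
    return [max(range(len(scores)), key=lambda i: scores[i]) for scores in gate_scores]

scores = [
    [0.1, 0.7, 0.2],  # token 0 -> expert 1
    [0.6, 0.3, 0.1],  # token 1 -> expert 0
]
assignments = route_top1(scores)  # [1, 0]
```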

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits

no code implementations • 11 Jan 2022 Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr

We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.

Federated Learning
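A minimal sketch of the partial-averaging idea, assuming a dict-of-parameters model and a caller-chosen shared subset; both are illustrative assumptions, not the paper's construction or its guarantees.

```python
# Hedged sketch of partial model averaging: only a chosen subset of
# parameters is synchronized across clients each round, while the
# remaining parameters stay local. Names are illustrative only.

def partial_average(local_model, client_models, shared_keys):
    averaged = dict(local_model)  # keep non-shared parameters local
    for k in shared_keys:
        averaged[k] = sum(m[k] for m in client_models) / len(client_models)
    return averaged

clients = [{"shared": 2.0, "private": 9.0}, {"shared": 4.0, "private": 1.0}]
new_model = partial_average(clients[0], clients, shared_keys=["shared"])
# new_model == {"shared": 3.0, "private": 9.0}
```

Synchronizing only part of the model reduces communication and softens the discrepancy between a client's local parameters and the averaged ones, which is the issue the abstract highlights.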

SPIDER: Searching Personalized Neural Architecture for Federated Learning

no code implementations • 27 Dec 2021 Erum Mushtaq, Chaoyang He, Jie Ding, Salman Avestimehr

However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL.

Federated Learning Neural Architecture Search

AutoCTS: Automated Correlated Time Series Forecasting -- Extended Version

no code implementations • 21 Dec 2021 Xinle Wu, Dalin Zhang, Chenjuan Guo, Chaoyang He, Bin Yang, Christian S. Jensen

Specifically, we design both a micro and a macro search space to model possible architectures of ST-blocks and the connections among heterogeneous ST-blocks, and we provide a search strategy that is able to jointly explore the search spaces to identify optimal forecasting models.

Correlated Time Series Forecasting Time Series

FedCV: A Federated Learning Framework for Diverse Computer Vision Tasks

1 code implementation • 22 Nov 2021 Chaoyang He, Alay Dilipbhai Shah, Zhenheng Tang, Di Fan, Adarshan Naiynar Sivashunmugam, Keerti Bhogaraju, Mita Shimpi, Li Shen, Xiaowen Chu, Mahdi Soltanolkotabi, Salman Avestimehr

To bridge the gap and facilitate the development of FL for computer vision tasks, in this work, we propose a federated learning library and benchmarking framework, named FedCV, to evaluate FL on the three most representative computer vision tasks: image classification, image segmentation, and object detection.

Benchmarking Federated Learning +5

Federated Learning for Internet of Things: Applications, Challenges, and Opportunities

no code implementations • 15 Nov 2021 Tuo Zhang, Lei Gao, Chaoyang He, Mi Zhang, Bhaskar Krishnamachari, Salman Avestimehr

In this paper, we will discuss the opportunities and challenges of FL in IoT platforms, as well as how it can enable diverse IoT applications.

Federated Learning

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

1 code implementation NeurIPS 2021 Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin

Systematic evaluations of accuracy, training speed, and memory footprint are conducted, in which the proposed MEST framework consistently outperforms representative SOTA works.

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning

no code implementations • 19 Oct 2021 Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr

In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters.

Federated Learning
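The baseline the abstract contrasts with, periodic averaging of the full model parameters across clients, can be sketched as follows. Names are illustrative and uniform client weights are assumed; this is not the paper's layer-wise adaptive scheme.

```python
# Sketch of the common full-model averaging baseline (FedAvg-style):
# every parameter of every client model is averaged each round.

def average_models(client_models):
    """Element-wise average of client parameter dicts (uniform weights)."""
    n = len(client_models)
    return {k: sum(m[k] for m in client_models) / n for k in client_models[0]}

clients = [
    {"layer1": 1.0, "layer2": 4.0},
    {"layer1": 3.0, "layer2": 0.0},
]
global_model = average_models(clients)
# global_model == {"layer1": 2.0, "layer2": 2.0}
```

The paper's point of departure is that averaging every layer at every round is wasteful; a layer-wise scheme would instead decide per layer how often to synchronize.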

SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision

no code implementations • 6 Oct 2021 Chaoyang He, Zhengyu Yang, Erum Mushtaq, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

In this paper we propose self-supervised federated learning (SSFL), a unified self-supervised and personalized federated learning framework, and a series of algorithms under this framework which work towards addressing these challenges.

Personalized Federated Learning Self-Supervised Learning

FairFed: Enabling Group Fairness in Federated Learning

no code implementations • 2 Oct 2021 Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, Salman Avestimehr

Training ML models which are fair across different demographic groups is of critical importance due to the increased integration of ML in crucial decision-making scenarios such as healthcare and recruitment.

Decision Making Fairness +1

LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning

no code implementations • 29 Sep 2021 Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr

We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.

Federated Learning
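As background for the secure-aggregation setting above, the core idea behind mask-based schemes can be sketched with toy pairwise masks that cancel in the sum, so the server learns only the aggregate and not any individual update. This is a simplified illustration of the general masking idea, not LightSecAgg's actual lightweight mask-reconstruction design.

```python
# Toy additive-masking sketch of secure aggregation (illustrative only):
# each pair of clients shares a random mask that one adds and the other
# subtracts, so the masks cancel in the sum.

import random

def mask_updates(updates, seed=0):
    rng = random.Random(seed)
    n = len(updates)
    masks = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.random()
            masks[i][j] = r       # client i adds r
            masks[j][i] = -r      # client j subtracts r
    return [u + sum(masks[i]) for i, u in enumerate(updates)]

updates = [1.0, 2.0, 3.0]
masked = mask_updates(updates)
# Individual masked values differ from the raw updates,
# but the aggregate is preserved: sum(masked) == sum(updates).
```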

FedNAS: Federated Deep Learning via Neural Architecture Search

no code implementations • 29 Sep 2021 Chaoyang He, Erum Mushtaq, Jie Ding, Salman Avestimehr

Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment.

Federated Learning Meta-Learning +1

OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning

no code implementations • 12 Jul 2021 Jiacheng Liang, Songze Li, Bochuan Cao, Wensi Jiang, Chaoyang He

Utilizing OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owners, and receive compensation for data contribution.

BIG-bench Machine Learning

Federated Learning for Internet of Things: A Federated Learning Framework for On-device Anomaly Data Detection

1 code implementation • 15 Jun 2021 Tuo Zhang, Chaoyang He, Tianhao Ma, Lei Gao, Mark Ma, Salman Avestimehr

In this paper, to further push forward this direction with a comprehensive study in both algorithm and system design, we build the FedIoT platform, which contains the FedDetect algorithm for on-device anomaly data detection and a system design for realistic evaluation of federated learning on IoT devices.

Anomaly Detection Federated Learning +2

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

1 code implementation • 4 Jun 2021 Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr

This work proposes SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and absence of a central server for the first time in the literature.

BIG-bench Machine Learning Federated Learning +3

Differentiable Neural Architecture Search for Extremely Lightweight Image Super-Resolution

1 code implementation • 9 May 2021 Han Huang, Li Shen, Chaoyang He, Weisheng Dong, Wei Liu

Specifically, the cell-level search space is designed based on an information distillation mechanism, focusing on the combinations of lightweight operations and aiming to build a more lightweight and accurate SR structure.

Image Super-Resolution Neural Architecture Search +2

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation Findings (NAACL) 2022 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Benchmarking Federated Learning +5

FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks

1 code implementation • 14 Apr 2021 Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr

FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.

Federated Learning Molecular Property Prediction

PipeTransformer: Automated Elastic Pipelining for Distributed Training of Transformers

1 code implementation • 5 Feb 2021 Chaoyang He, Shen Li, Mahdi Soltanolkotabi, Salman Avestimehr

PipeTransformer automatically adjusts the pipelining and data parallelism by identifying and freezing some layers during the training, and instead allocates resources for training of the remaining active layers.

Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search

1 code implementation • 18 Apr 2020 Chaoyang He, Murali Annavaram, Salman Avestimehr

Federated Learning (FL) has been proved to be an effective learning framework when data cannot be centralized due to privacy, communication costs, and regulatory restrictions.

Federated Learning Neural Architecture Search

MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation

1 code implementation CVPR 2020 Chaoyang He, Haishan Ye, Li Shen, Tong Zhang

To remedy this, the paper proposes MiLeNAS, a mixed-level reformulation for NAS that can be optimized efficiently and reliably.

Bilevel Optimization Neural Architecture Search +1

Central Server Free Federated Learning over Single-sided Trust Social Networks

1 code implementation • 11 Oct 2019 Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu

However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable).

Federated Learning

Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling

no code implementations • 4 Jul 2019 Zi Long, Lianzhi Tan, Shengping Zhou, Chaoyang He, Xin Liu

Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage.


Efficient Spatial Anti-Aliasing Rendering for Line Joins on Vector Maps

no code implementations • 27 Jun 2019 Chaoyang He, Ming Li

The spatial anti-aliasing technique for line joins (intersections of road segments) on vector maps is crucial to both visual experience and system performance.

Graphics Computational Geometry

Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs

1 code implementation • 27 Jun 2019 Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Junzhou Huang, Xiang Ren, Cyrus Shahabi

Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains.

Recommendation Systems Representation Learning
