Search Results for author: Chaoyang He

Found 28 papers, 15 papers with code

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits

no code implementations • 11 Jan 2022 • Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr

We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.

Federated Learning
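The idea named in the title, averaging only part of the model parameters in each round, can be sketched as follows. The rotating-partition scheme and the function name `partial_average` below are illustrative assumptions for exposition, not the algorithm from the paper.

```python
import numpy as np

def partial_average(client_models, round_idx, num_partitions=4):
    """Average only one slice of the parameter vector per round.

    client_models: list of flat parameter vectors (1-D numpy arrays),
    one per client. The slice averaged in a given round rotates with
    round_idx, so every parameter is synchronized once every
    num_partitions rounds. This partition scheme is illustrative only.
    """
    dim = client_models[0].size
    bounds = np.linspace(0, dim, num_partitions + 1, dtype=int)
    p = round_idx % num_partitions
    lo, hi = bounds[p], bounds[p + 1]
    avg_slice = np.mean([m[lo:hi] for m in client_models], axis=0)
    # Each client keeps its own parameters outside the averaged slice.
    updated = []
    for m in client_models:
        new_m = m.copy()
        new_m[lo:hi] = avg_slice
        updated.append(new_m)
    return updated
```

Compared with averaging the full model, only `1/num_partitions` of the parameters are communicated per round, at the cost of the remaining parameters staying temporarily out of sync.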

SPIDER: Searching Personalized Neural Architecture for Federated Learning

no code implementations • 27 Dec 2021 • Erum Mushtaq, Chaoyang He, Jie Ding, Salman Avestimehr

However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL.

Federated Learning • Neural Architecture Search

AutoCTS: Automated Correlated Time Series Forecasting -- Extended Version

no code implementations • 21 Dec 2021 • Xinle Wu, Dalin Zhang, Chenjuan Guo, Chaoyang He, Bin Yang, Christian S. Jensen

Specifically, we design both a micro and a macro search space to model possible architectures of ST-blocks and the connections among heterogeneous ST-blocks, and we provide a search strategy that is able to jointly explore the search spaces to identify optimal forecasting models.

Time Series • Time Series Forecasting

FedCV: A Federated Learning Framework for Diverse Computer Vision Tasks

1 code implementation • 22 Nov 2021 • Chaoyang He, Alay Dilipbhai Shah, Zhenheng Tang, Di Fan, Adarshan Naiynar Sivashunmugam, Keerti Bhogaraju, Mita Shimpi, Li Shen, Xiaowen Chu, Mahdi Soltanolkotabi, Salman Avestimehr

To bridge the gap and facilitate the development of FL for computer vision tasks, in this work, we propose a federated learning library and benchmarking framework, named FedCV, to evaluate FL on the three most representative computer vision tasks: image classification, image segmentation, and object detection.

Federated Learning • Image Classification +2

Federated Learning for Internet of Things: Applications, Challenges, and Opportunities

no code implementations • 15 Nov 2021 • Tuo Zhang, Lei Gao, Chaoyang He, Mi Zhang, Bhaskar Krishnamachari, Salman Avestimehr

In this paper, we will discuss the opportunities and challenges of FL in IoT platforms, as well as how it can enable diverse IoT applications.

Federated Learning

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

1 code implementation • NeurIPS 2021 • Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin

Systematic evaluations of accuracy, training speed, and memory footprint are conducted, and the proposed MEST framework consistently outperforms representative SOTA works.

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning

no code implementations • 19 Oct 2021 • Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr

In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters.

Federated Learning
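The periodic full-parameter averaging mentioned in the snippet above is the standard FedAvg-style baseline; a minimal sketch, with a hypothetical function name and the common sample-count weighting assumed:

```python
import numpy as np

def fedavg_round(client_params, client_sizes):
    """One synchronous round of full-model averaging.

    client_params: list of flat parameter vectors (1-D numpy arrays),
                   one per client, all the same length.
    client_sizes:  number of local training samples per client, used
                   as averaging weights (FedAvg-style weighting).
    Returns the weighted average the server would broadcast back.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)        # shape (num_clients, dim)
    return (weights[:, None] * stacked).sum(axis=0)
```

Layer-wise adaptive schemes like the one proposed in this paper refine this baseline by deciding per layer how often to synchronize, rather than always averaging the full vector.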

SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision

no code implementations • 6 Oct 2021 • Chaoyang He, Zhengyu Yang, Erum Mushtaq, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

In this paper we propose self-supervised federated learning (SSFL), a unified self-supervised and personalized federated learning framework, and a series of algorithms under this framework which work towards addressing these challenges.

Personalized Federated Learning • Self-Supervised Learning

FairFed: Enabling Group Fairness in Federated Learning

no code implementations • 2 Oct 2021 • Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, Salman Avestimehr

Motivated by the importance and challenges of group fairness in federated learning, in this work, we propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method, which aims to provide fair model performance across different sensitive groups (e.g., racial or gender groups) while maintaining high utility.

Decision Making • Fairness +1

FedNAS: Federated Deep Learning via Neural Architecture Search

no code implementations • 29 Sep 2021 • Chaoyang He, Erum Mushtaq, Jie Ding, Salman Avestimehr

Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment.

Federated Learning • Meta-Learning +1

LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning

no code implementations • 29 Sep 2021 • Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr

We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.

Federated Learning

OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning

no code implementations • 12 Jul 2021 • Jiacheng Liang, Songze Li, Bochuan Cao, Wensi Jiang, Chaoyang He

Utilizing OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owners, and receive compensation for data contribution.

Federated Learning for Internet of Things: A Federated Learning Framework for On-device Anomaly Data Detection

1 code implementation • 15 Jun 2021 • Tuo Zhang, Chaoyang He, Tianhao Ma, Lei Gao, Mark Ma, Salman Avestimehr

In this paper, to further push forward this direction with a comprehensive study in both algorithm and system design, we build FedIoT platform that contains FedDetect algorithm for on-device anomaly data detection and a system design for realistic evaluation of federated learning on IoT devices.

Anomaly Detection • Federated Learning +1

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

1 code implementation • 4 Jun 2021 • Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr

This work proposes SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and absence of a central server for the first time in the literature.

Federated Learning • Molecular Property Prediction +1

Lightweight Image Super-Resolution with Hierarchical and Differentiable Neural Architecture Search

1 code implementation • 9 May 2021 • Han Huang, Li Shen, Chaoyang He, Weisheng Dong, HaoZhi Huang, Guangming Shi

Specifically, the cell-level search space is designed based on an information distillation mechanism, focusing on the combinations of lightweight operations and aiming to build a more lightweight and accurate SR structure.

Image Super-Resolution • Neural Architecture Search +1

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation • 18 Apr 2021 • Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Federated Learning • Language Modelling +2

FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks

1 code implementation • 14 Apr 2021 • Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr

FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.

Federated Learning • Molecular Property Prediction

PipeTransformer: Automated Elastic Pipelining for Distributed Training of Transformers

no code implementations • 5 Feb 2021 • Chaoyang He, Shen Li, Mahdi Soltanolkotabi, Salman Avestimehr

PipeTransformer automatically adjusts the pipelining and data parallelism by identifying and freezing some layers during the training, and instead allocates resources for training of the remaining active layers.
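The freeze-and-reallocate behavior described above can be illustrated with a small sketch. The gradient-norm convergence criterion and both function names below are placeholders for exposition, not PipeTransformer's actual freeze algorithm.

```python
def update_frozen_layers(layer_grad_norms, threshold=1e-3):
    """Return the number of leading layers to freeze.

    Layers are frozen progressively from the bottom of the network;
    here a layer is treated as converged when its gradient norm falls
    below a threshold (a placeholder criterion for illustration).
    Freezing is kept prefix-contiguous: we stop at the first layer
    that is still actively training.
    """
    frozen = 0
    for norm in layer_grad_norms:
        if norm < threshold:
            frozen += 1
        else:
            break
    return frozen

def reallocate_pipeline(num_layers, frozen, num_gpus):
    """Shrink the pipeline to cover only the active (unfrozen) layers,
    mimicking how freed resources can then widen data parallelism."""
    active = num_layers - frozen
    stages = min(num_gpus, max(1, active))
    return {"active_layers": active, "pipeline_stages": stages}
```

As more layers freeze, the pipeline covers fewer stages and the GPUs released from those stages become available for additional data-parallel replicas of the active layers.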

Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search

1 code implementation • 18 Apr 2020 • Chaoyang He, Murali Annavaram, Salman Avestimehr

Federated Learning (FL) has proven to be an effective learning framework when data cannot be centralized due to privacy, communication costs, and regulatory restrictions.

Federated Learning • Neural Architecture Search

MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation

1 code implementation • CVPR 2020 • Chaoyang He, Haishan Ye, Li Shen, Tong Zhang

To remedy this, the paper proposes MiLeNAS, a mixed-level reformulation for NAS that can be optimized efficiently and reliably.

Bilevel Optimization • Neural Architecture Search

Central Server Free Federated Learning over Single-sided Trust Social Networks

1 code implementation • 11 Oct 2019 • Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu

However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable).

Federated Learning

Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling

no code implementations • 4 Jul 2019 • Zi Long, Lianzhi Tan, Shengping Zhou, Chaoyang He, Xin Liu

Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage.

Efficient Spatial Anti-Aliasing Rendering for Line Joins on Vector Maps

no code implementations • 27 Jun 2019 • Chaoyang He, Ming Li

The spatial anti-aliasing technique for line joins (intersections of the road segments) on vector maps is exclusively crucial to visual experience and system performance.

Graphics • Computational Geometry

Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs

1 code implementation • 27 Jun 2019 • Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Junzhou Huang, Xiang Ren, Cyrus Shahabi

Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains.

Recommendation Systems • Representation Learning
