Search Results for author: Fan Lai

Found 10 papers, 4 papers with code

Learn To be Efficient: Build Structured Sparsity in Large Language Models

no code implementations · 9 Feb 2024 · Haizhong Zheng, Xiaoyan Bai, Beidi Chen, Fan Lai, Atul Prakash

The emergence of activation sparsity in LLMs provides a natural approach to reduce inference cost by involving only a subset of the parameters in each forward pass.

Text Generation
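The activation-sparsity idea above can be illustrated with a toy sketch. This is a generic demonstration of the technique (skipping the weights of inactive ReLU units), not the paper's method; all sizes and weights below are made up:

```python
import numpy as np

# Illustrative sketch of exploiting ReLU activation sparsity in a 2-layer
# MLP: hidden units whose pre-activation is <= 0 contribute nothing, so
# the second matmul can skip their weight columns entirely.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 8, 32, 4
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))
x = rng.standard_normal(d_in)

h = np.maximum(W1 @ x, 0.0)           # ReLU hidden activations
active = np.nonzero(h)[0]             # indices of active neurons
y_sparse = W2[:, active] @ h[active]  # compute with active columns only
y_dense = W2 @ h                      # full dense computation

assert np.allclose(y_sparse, y_dense)
print(f"{len(active)}/{d_hidden} neurons active")
```

The sparse and dense paths agree exactly; the saving comes from touching only the `len(active)` columns of `W2`.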

Venn: Resource Management Across Federated Learning Jobs

no code implementations · 13 Dec 2023 · Jiachen Liu, Fan Lai, Ding Ding, Yiwen Zhang, Mosharaf Chowdhury

Scheduling edge resources among multiple FL jobs is different from GPU scheduling for cloud ML because of the ephemeral nature and planetary scale of participating devices as well as the overlapping resource requirements of diverse FL jobs.

Federated Learning · Management +1

Auxo: Efficient Federated Learning via Scalable Client Clustering

no code implementations · 29 Oct 2022 · Jiachen Liu, Fan Lai, Yinwei Dai, Aditya Akella, Harsha Madhyastha, Mosharaf Chowdhury

In this paper, we explore an additional layer of complexity to mitigate such heterogeneity by grouping clients with statistically similar data distributions (cohorts).

Clustering Federated Learning
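A minimal sketch of the cohort idea above, assuming each client can be summarized by a normalized label histogram. This is a naive k-means stand-in for grouping statistically similar clients, not Auxo's actual clustering algorithm:

```python
import numpy as np

# Hypothetical sketch: group FL clients into cohorts by the similarity of
# their label distributions. All data below is synthetic.
rng = np.random.default_rng(1)
n_clients, n_classes, k = 12, 5, 3

# Each client is summarized by a normalized label histogram.
hists = rng.dirichlet(np.ones(n_classes) * 0.3, size=n_clients)

# Naive k-means over the histograms.
centers = hists[rng.choice(n_clients, k, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(hists[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)          # nearest-center cohort assignment
    for c in range(k):
        if np.any(labels == c):
            centers[c] = hists[labels == c].mean(axis=0)

print("cohort sizes:", np.bincount(labels, minlength=k))
```

Clients in the same cohort then train together, so each cohort sees more homogeneous data than the full population.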

Coverage-centric Coreset Selection for High Pruning Rates

1 code implementation · 28 Oct 2022 · Haizhong Zheng, Rui Liu, Fan Lai, Atul Prakash

We then propose a novel one-shot coreset selection method, Coverage-centric Coreset Selection (CCS), that jointly considers overall data coverage upon a distribution as well as the importance of each example.

Vocal Bursts Intensity Prediction
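A hedged sketch of coverage-aware one-shot selection in the spirit of the abstract above: stratify examples by an importance score and sample evenly across strata, rather than greedily keeping only the highest-scoring examples. The binning and budget split here are illustrative assumptions, not the exact CCS procedure, and the per-example score is assumed to be given:

```python
import numpy as np

# Toy one-shot coreset selection balancing coverage with importance:
# partition examples into score strata and draw the same number from each.
rng = np.random.default_rng(2)
scores = rng.random(1000)    # per-example importance (assumed precomputed)
budget, n_bins = 100, 10

# Quantile edges give roughly equal-sized strata over the score range.
edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1)[1:-1])
bins = np.digitize(scores, edges)

coreset = []
for b in range(n_bins):
    idx = np.nonzero(bins == b)[0]
    take = min(len(idx), budget // n_bins)
    coreset.extend(rng.choice(idx, take, replace=False))
coreset = np.array(coreset)

print("selected", len(coreset), "examples across", len(set(bins[coreset])), "strata")
```

At high pruning rates, this even spread keeps easy and hard regions of the data distribution represented, which a pure top-score selection would not.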

Swan: A Neural Engine for Efficient DNN Training on Smartphone SoCs

no code implementations · 9 Jun 2022 · Sanjay Sri Vallabh Singapuram, Fan Lai, Chuheng Hu, Mosharaf Chowdhury

The need to train DNN models on end-user devices (e.g., smartphones) is growing with the demand to improve data privacy and reduce communication overheads.

Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing

no code implementations · 17 Jan 2022 · Yiding Wang, Decang Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury

To explore this, we first introduce the notion of training plasticity to quantify the training progress of internal DNN layers.

Quantization
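A toy sketch of convergence-guided layer freezing, assuming a layer's "plasticity" can be proxied by its relative weight change between steps. The metric, threshold, and synthetic updates below are illustrative assumptions, not Egeria's definition:

```python
import numpy as np

# Hypothetical plasticity-guided freezing: stop updating layers whose
# relative weight change has flattened, saving backward-pass work.
rng = np.random.default_rng(5)
n_layers, threshold = 4, 1e-3

weights = [rng.standard_normal((8, 8)) for _ in range(n_layers)]
frozen = [False] * n_layers

for step in range(50):
    for i in range(n_layers):
        if frozen[i]:
            continue                               # skip converged layers
        grad = rng.standard_normal((8, 8)) * (0.5 ** step)  # shrinking pseudo-gradients
        new = weights[i] - 0.1 * grad
        plasticity = np.linalg.norm(new - weights[i]) / np.linalg.norm(weights[i])
        if plasticity < threshold:
            frozen[i] = True                       # layer looks converged; freeze it
        weights[i] = new

print("frozen layers:", frozen)
```

Because the synthetic updates decay, every layer's plasticity eventually drops below the threshold and training work for it stops.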

Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning

1 code implementation · 21 Jul 2021 · Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury

In this paper, we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL).

Federated Learning
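A minimal sketch of prediction-time model ensembling in an FL setting, assuming K server-side models have already been trained over federated rounds. The toy linear models and averaging rule below are assumptions for illustration; Fed-ensemble's actual training and aggregation scheme is more involved:

```python
import numpy as np

# Toy ensemble prediction: K models vote by averaging their output
# distributions, which typically generalizes better than any single model.
rng = np.random.default_rng(3)
K, n_classes, d = 5, 4, 6
x = rng.standard_normal(d)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

models = [rng.standard_normal((n_classes, d)) for _ in range(K)]  # stand-in trained models
probs = np.mean([softmax(W @ x) for W in models], axis=0)

print("ensemble prediction:", probs.argmax())
```

Averaging K valid distributions yields another valid distribution, so the ensemble output can be used anywhere a single model's softmax would be.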

FedScale: Benchmarking Model and System Performance of Federated Learning at Scale

3 code implementations · 24 May 2021 · Fan Lai, Yinwei Dai, Sanjay S. Singapuram, Jiachen Liu, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury

We present FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research.

Benchmarking · Federated Learning +6

Oort: Efficient Federated Learning via Guided Participant Selection

1 code implementation · 12 Oct 2020 · Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury

In this paper, we propose Oort to improve the performance of federated training and testing with guided participant selection.

Federated Learning
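An illustrative sketch of utility-guided participant selection, inspired by (but not identical to) Oort: rank clients by a product of statistical utility and a system-speed penalty, then pick the top k. All quantities below are synthetic stand-ins:

```python
import numpy as np

# Hypothetical guided participant selection: prefer clients whose data is
# useful (high statistical utility) AND who can finish before the deadline.
rng = np.random.default_rng(4)
n_clients, k, deadline = 50, 10, 1.0

stat_utility = rng.random(n_clients)             # e.g., proxy from recent training loss
duration = rng.uniform(0.2, 3.0, n_clients)      # estimated round completion time (s)
penalty = np.minimum(1.0, deadline / duration)   # downweight slower-than-deadline clients
utility = stat_utility * penalty

selected = np.argsort(utility)[-k:]              # top-k clients by combined utility
print("selected clients:", sorted(selected.tolist()))
```

In practice a scheme like this also needs an exploration term so that never-sampled clients still get utility estimates; that is omitted here for brevity.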
