Search Results for author: Salman Avestimehr

Found 43 papers, 10 papers with code

ActPerFL: Active Personalized Federated Learning

no code implementations FL4NLP (ACL) 2022 Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang

Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients’ training.

Personalized Federated Learning

Secure Federated Clustering

no code implementations 31 May 2022 Songze Li, Sizai Hou, Baturalp Buyukates, Salman Avestimehr

We consider a foundational unsupervised learning task of $k$-means data clustering, in a federated learning (FL) setting consisting of a central server and many distributed clients.

Federated Learning
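The federated $k$-means setting above can be pictured with a minimal, non-secure sketch: each client assigns its points to the nearest centroid and shares only per-cluster sums and counts, which the server aggregates into new centroids. The secure-aggregation layer that is the paper's actual contribution is omitted here, and all function names are illustrative.

```python
import numpy as np

def local_stats(points, centroids):
    """Client side: assign local points to the nearest centroid and
    report only per-cluster sums and counts (never the raw data)."""
    k = len(centroids)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        mask = assign == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def server_update(all_sums, all_counts, centroids):
    """Server side: aggregate the clients' statistics into new centroids,
    leaving a centroid untouched if no point was assigned to it."""
    total_sums = sum(all_sums)
    total_counts = sum(all_counts)
    new = centroids.copy()
    nonempty = total_counts > 0
    new[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return new

# Two clients whose data together form two well-separated clusters.
rng = np.random.default_rng(0)
c1 = rng.normal([0.0, 0.0], 0.1, (20, 2))
c2 = rng.normal([5.0, 5.0], 0.1, (20, 2))
clients = [np.vstack([c1[:10], c2[:10]]), np.vstack([c1[10:], c2[10:]])]
centroids = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(5):
    stats = [local_stats(p, centroids) for p in clients]
    centroids = server_update([s for s, _ in stats], [c for _, c in stats], centroids)
```

After a few rounds the centroids converge to the two cluster means, even though no client ever revealed an individual data point.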

Toward a Geometrical Understanding of Self-supervised Contrastive Learning

no code implementations 13 May 2022 Romain Cosentino, Anirvan Sengupta, Salman Avestimehr, Mahdi Soltanolkotabi, Antonio Ortega, Ted Willke, Mariano Tepper

When used for transfer learning, the projector is discarded since empirical results show that its representation generalizes more poorly than the encoder's.

Contrastive Learning Data Augmentation +2

Federated Learning with Noisy User Feedback

no code implementations 6 May 2022 Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky, Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta

Federated learning (FL) has recently emerged as a method for training ML models on edge devices using sensitive user data and is seen as a way to mitigate concerns over data privacy.

Federated Learning Text Classification

Self-Aware Personalized Federated Learning

no code implementations 17 Apr 2022 Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang

In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned.

Personalized Federated Learning

Learnings from Federated Learning in the Real world

no code implementations 8 Feb 2022 Christophe Dupuy, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, Salman Avestimehr

In this study, we evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.

Federated Learning Natural Language Understanding

FedSpace: An Efficient Federated Learning Framework at Satellites and Ground Stations

no code implementations 2 Feb 2022 Jinhyun So, Kevin Hsieh, Behnaz Arzani, Shadi Noghabi, Salman Avestimehr, Ranveer Chandra

To address these challenges, we leverage Federated Learning (FL), where ground stations and satellites collaboratively train a global ML model without sharing the captured images on the satellites.

Federated Learning

Federated Learning Challenges and Opportunities: An Outlook

no code implementations 1 Feb 2022 Jie Ding, Eric Tramel, Anit Kumar Sahu, Shuang Wu, Salman Avestimehr, Tao Zhang

Federated learning (FL) has been developed as a promising framework to leverage the resources of edge devices, enhance customers' privacy, comply with regulations, and reduce development costs.

Federated Learning

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits

no code implementations 11 Jan 2022 Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr

We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.

Federated Learning
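As a toy illustration of partial model averaging, the sketch below synchronizes only one slice of the flattened parameter vector per round, cycling through the model so every parameter is eventually averaged; the paper's actual partitioning scheme and schedule are not reproduced here, and all names are illustrative.

```python
import numpy as np

def partial_average(client_models, start, size):
    """One round of partial averaging: only the slice
    [start, start + size) is averaged across clients; every other
    entry keeps its client-local value."""
    stacked = np.stack(client_models)
    sl = slice(start, start + size)
    avg = stacked[:, sl].mean(axis=0)
    out = [m.copy() for m in client_models]
    for m in out:
        m[sl] = avg
    return out

# Three clients, 8 parameters each; average a rotating chunk per round.
models = [np.full(8, float(i)) for i in range(3)]
chunk = 2
for r in range(4):
    models = partial_average(models, (r * chunk) % 8, chunk)
```

After four rounds every chunk has been averaged once, so all clients agree on the mean model while each round communicated only a quarter of the parameters.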

SPIDER: Searching Personalized Neural Architecture for Federated Learning

no code implementations 27 Dec 2021 Erum Mushtaq, Chaoyang He, Jie Ding, Salman Avestimehr

However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL.

Federated Learning Neural Architecture Search

FedCV: A Federated Learning Framework for Diverse Computer Vision Tasks

1 code implementation 22 Nov 2021 Chaoyang He, Alay Dilipbhai Shah, Zhenheng Tang, Di Fan, Adarshan Naiynar Sivashunmugam, Keerti Bhogaraju, Mita Shimpi, Li Shen, Xiaowen Chu, Mahdi Soltanolkotabi, Salman Avestimehr

To bridge the gap and facilitate the development of FL for computer vision tasks, in this work, we propose a federated learning library and benchmarking framework, named FedCV, to evaluate FL on the three most representative computer vision tasks: image classification, image segmentation, and object detection.

Benchmark Computer Vision +5

Federated Learning for Internet of Things: Applications, Challenges, and Opportunities

no code implementations 15 Nov 2021 Tuo Zhang, Lei Gao, Chaoyang He, Mi Zhang, Bhaskar Krishnamachari, Salman Avestimehr

In this paper, we will discuss the opportunities and challenges of FL in IoT platforms, as well as how it can enable diverse IoT applications.

Federated Learning

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning

no code implementations 19 Oct 2021 Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr

In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters.

Federated Learning
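A layer-wise alternative to periodic full-model averaging can be sketched by giving each named layer its own synchronization period, so slowly changing layers are communicated less often. The fixed per-layer periods below are a stand-in for the paper's adaptive criterion, which is not reproduced here.

```python
import numpy as np

def sync_layers(client_models, step, periods):
    """Average each named layer across clients only when the global
    step is a multiple of that layer's period; other layers keep
    their client-local values."""
    out = [dict(m) for m in client_models]
    for name, period in periods.items():
        if step % period == 0:
            avg = np.mean([m[name] for m in client_models], axis=0)
            for m in out:
                m[name] = avg.copy()
    return out

clients = [
    {"conv": np.array([0.0, 0.0]), "head": np.array([0.0])},
    {"conv": np.array([2.0, 2.0]), "head": np.array([4.0])},
]
# Illustrative choice: sync the early layer rarely, the head every step.
periods = {"conv": 4, "head": 1}
clients = sync_layers(clients, step=1, periods=periods)
```

At step 1 only the head is averaged; the convolutional layer stays local until a step divisible by its period, cutting aggregation traffic proportionally.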

SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision

no code implementations 6 Oct 2021 Chaoyang He, Zhengyu Yang, Erum Mushtaq, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

In this paper we propose self-supervised federated learning (SSFL), a unified self-supervised and personalized federated learning framework, and a series of algorithms under this framework which work towards addressing these challenges.

Personalized Federated Learning Self-Supervised Learning

FairFed: Enabling Group Fairness in Federated Learning

no code implementations 2 Oct 2021 Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, Salman Avestimehr

Motivated by the importance and challenges of group fairness in federated learning, in this work, we propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method, which aims to provide fair model performance across different sensitive groups (e.g., racial or gender groups) while maintaining high utility.

Decision Making Fairness +1
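One way to picture fairness-aware aggregation is to reweight clients by how far their local group-fairness gap deviates from the global gap. The function below is an illustrative stand-in, not the exact FairFed update: the exponential weighting and the `beta` parameter are assumptions made for this sketch.

```python
import numpy as np

def fairness_aware_weights(local_gaps, global_gap, beta=1.0):
    """Illustrative reweighting: clients whose local group-fairness
    gap (e.g., demographic-parity difference) deviates most from the
    global gap get exponentially smaller aggregation weights.
    `beta` controls how aggressively deviations are penalized."""
    dev = np.abs(np.asarray(local_gaps, dtype=float) - global_gap)
    w = np.exp(-beta * dev)
    return w / w.sum()  # normalized aggregation weights

# Three clients with local fairness gaps 0.10, 0.30, 0.12; global gap 0.10.
w = fairness_aware_weights([0.10, 0.30, 0.12], global_gap=0.10)
```

The server would then aggregate client updates with these weights instead of plain sample-count weights, trading a little utility for more uniform fairness across groups.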

FedNAS: Federated Deep Learning via Neural Architecture Search

no code implementations 29 Sep 2021 Chaoyang He, Erum Mushtaq, Jie Ding, Salman Avestimehr

Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment.

Federated Learning Meta-Learning +1

Fundamental Limits of Transfer Learning in Binary Classifications

no code implementations 29 Sep 2021 Mohammadreza Mousavi Kalan, Salman Avestimehr, Mahdi Soltanolkotabi

Transfer learning is gaining traction as a promising technique to alleviate this barrier by utilizing the data of a related but different \emph{source} task to compensate for the lack of data in a \emph{target} task where few labeled training data are available.

Action Recognition Image Classification +1

Achieving Small-Batch Accuracy with Large-Batch Scalability via Adaptive Learning Rate Adjustment

no code implementations 29 Sep 2021 Sunwoo Lee, Salman Avestimehr

The framework performs extra epochs using the large learning rate even after the loss has flattened.
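That idea can be caricatured with a toy schedule: keep the large base learning rate until the per-epoch loss improvement falls below a tolerance, run a few extra epochs at the large rate anyway, and only then decay. The thresholds, decay factor, and function name below are invented for illustration.

```python
def lr_for_epoch(losses, epoch, base_lr=1.0, extra=3, tol=1e-3, decay=0.1):
    """Toy schedule: find the first epoch whose loss improvement over
    the previous epoch is below `tol` (the loss has "flattened"),
    keep the large base_lr for `extra` more epochs, then decay."""
    flat_at = None
    for t in range(1, epoch + 1):
        if losses[t - 1] - losses[t] < tol:
            flat_at = t
            break
    if flat_at is None or epoch < flat_at + extra:
        return base_lr
    return base_lr * decay

# A loss curve that flattens at epoch 3 (improvements drop below tol).
losses = [1.0, 0.5, 0.3, 0.2995, 0.2994, 0.2993, 0.2992, 0.2991]
```

Under this schedule, epochs 3 through 5 still run at the large rate even though the loss is flat, and the decayed rate only kicks in at epoch 6.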

SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks

no code implementations 29 Sep 2021 Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse in every iteration (e.g., KFAC); 2) convergence instability due to stochastic training (e.g., L-BFGS).

Second-order methods

LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning

no code implementations 29 Sep 2021 Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr

We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.

Federated Learning

Adaptive Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning

no code implementations 27 Jul 2021 Tingting Tang, Ramy E. Ali, Hanieh Hashemi, Tynan Gangwani, Salman Avestimehr, Murali Annavaram

Much of the overhead in prior schemes comes from the fact that they tightly couple coding for all three problems into a single framework.

Federated Learning for Internet of Things: A Federated Learning Framework for On-device Anomaly Data Detection

1 code implementation 15 Jun 2021 Tuo Zhang, Chaoyang He, Tianhao Ma, Lei Gao, Mark Ma, Salman Avestimehr

In this paper, to further push forward this direction with a comprehensive study in both algorithm and system design, we build the FedIoT platform, which contains the FedDetect algorithm for on-device anomaly data detection and a system design for realistic evaluation of federated learning on IoT devices.

Anomaly Detection Federated Learning +1

Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

no code implementations 7 Jun 2021 Jinhyun So, Ramy E. Ali, Basak Guler, Jiantao Jiao, Salman Avestimehr

In fact, we empirically show that conventional random user selection strategies for federated learning leak users' individual models within a number of rounds that is linear in the number of users.

Fairness Federated Learning

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

1 code implementation 4 Jun 2021 Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr

This work proposes SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and absence of a central server for the first time in the literature.

Federated Learning Molecular Property Prediction +1

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation 18 Apr 2021 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Federated Learning Language Modelling +4

FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks

1 code implementation 14 Apr 2021 Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr

FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.

Benchmark Federated Learning +1

PipeTransformer: Automated Elastic Pipelining for Distributed Training of Transformers

no code implementations 5 Feb 2021 Chaoyang He, Shen Li, Mahdi Soltanolkotabi, Salman Avestimehr

PipeTransformer automatically adjusts the pipelining and data parallelism by identifying and freezing some layers during the training, and instead allocates resources for training of the remaining active layers.

Coded Computing for Low-Latency Federated Learning over Wireless Edge Networks

no code implementations 12 Nov 2020 Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Yair Yona, Shilpa Talwar, Salman Avestimehr, Nageen Himayat

For minimizing the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute as well as communication delays.

Benchmark Edge-computing +1

Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search

1 code implementation 18 Apr 2020 Chaoyang He, Murali Annavaram, Salman Avestimehr

Federated Learning (FL) has been proved to be an effective learning framework when data cannot be centralized due to privacy, communication costs, and regulatory restrictions.

Federated Learning Neural Architecture Search

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

no code implementations 22 Oct 2019 Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram

Most of the model training is performed on high performance compute nodes and the training data is stored near these nodes for faster training.

Collage Inference: Using Coded Redundancy for Low Variance Distributed Image Classification

no code implementations 27 Apr 2019 Krishna Giri Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram

Deploying the collage-cnn models in the cloud, we demonstrate that the 99th percentile tail latency of inference can be reduced by 1.2x to 2x compared to replication-based approaches while providing high accuracy.

Classification General Classification +2

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

no code implementations NeurIPS 2018 Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

Distributed training of deep nets is an important technique to address some of the present-day computing challenges like memory consumption and computational demands.

Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

no code implementations 4 Jun 2018 Qian Yu, Songze Li, Netanel Raviv, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, Salman Avestimehr

We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms.

Distributed Solution of Large-Scale Linear Systems via Accelerated Projection-Based Consensus

no code implementations 4 Aug 2017 Navid Azizan-Ruhi, Farshad Lahouti, Salman Avestimehr, Babak Hassibi

In this paper, we consider a common scenario in which a taskmaster intends to solve a large-scale system of linear equations by distributing subsets of the equations among a number of computing machines/cores.

A Sampling Theory Perspective of Graph-based Semi-supervised Learning

no code implementations 26 May 2017 Aamir Anis, Aly El Gamal, Salman Avestimehr, Antonio Ortega

In this work, we reinforce this connection by viewing the problem from a graph sampling theoretic perspective, where class indicator functions are treated as bandlimited graph signals (in the eigenvector basis of the graph Laplacian) and label prediction as a bandlimited reconstruction problem.

Graph Sampling
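The bandlimited-reconstruction view can be demonstrated on a toy graph: treat the class indicator as a signal in the span of the first few eigenvectors of the graph Laplacian, fit coefficients from the labeled nodes, and predict the remaining labels by sign. The graph, sample set, and bandwidth below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the edge (2,3):
# two communities connected by a single link.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

# Eigenvectors of L, in ascending eigenvalue order: the low-frequency
# basis in which the class indicator is (approximately) bandlimited.
eigvals, U = np.linalg.eigh(L)
k = 2                            # assumed bandwidth: 2 eigenvectors
labeled = [0, 4]                 # observe one label per community
y = np.array([1.0, -1.0])

# Bandlimited reconstruction: fit the k spectral coefficients on the
# labeled nodes, extend the signal to all nodes, predict by sign.
coef, *_ = np.linalg.lstsq(U[labeled, :k], y, rcond=None)
pred = np.sign(U[:, :k] @ coef)
```

With just two labeled nodes, the sign of the reconstructed bandlimited signal recovers both communities exactly, which is the sampling-theoretic intuition the abstract describes.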

Active Learning for Community Detection in Stochastic Block Models

no code implementations 8 May 2016 Akshay Gadde, Eyal En Gad, Salman Avestimehr, Antonio Ortega

Our main result is to show that, under certain conditions, sampling the labels of a vanishingly small fraction of nodes (a number sub-linear in $n$) is sufficient for exact community detection even when $D(a, b)<1$.

Active Learning Community Detection +1
