Search Results for author: Canh T. Dinh

Found 5 papers, 4 papers with code

On the Generalization of Wasserstein Robust Federated Learning

no code implementations · 3 Jun 2022 · Tung-Anh Nguyen, Tuan Dung Nguyen, Long Tan Le, Canh T. Dinh, Nguyen H. Tran

We show that the robustness of WAFL is more general than that of related approaches, and that its generalization bound holds for all adversarial distributions inside the Wasserstein ball (ambiguity set).

Domain Adaptation, Federated Learning
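For context, the Wasserstein ball (ambiguity set) referenced in the abstract is conventionally defined as the set of all distributions within a fixed transport distance of a nominal distribution. A sketch of the standard definition (the symbols $\rho$ and $\hat{P}$ are generic choices here, not necessarily the paper's own notation):

```latex
% Ambiguity set: every distribution Q whose Wasserstein distance
% from the nominal distribution \hat{P} is at most the radius \rho.
\mathcal{B}_\rho(\hat{P}) = \left\{ Q : W\bigl(\hat{P}, Q\bigr) \le \rho \right\}
```

The robustness claim in the abstract is then a statement that the generalization bound holds uniformly over every $Q$ in this set.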

A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization

2 code implementations · 14 Feb 2021 · Canh T. Dinh, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, Hongyu Zhang

Non-Independent and Identically Distributed (non-IID) data distribution among clients is considered the key factor that degrades the performance of federated learning (FL).

Few-Shot Learning, Multi-Task Learning +1

DONE: Distributed Approximate Newton-type Method for Federated Edge Learning

2 code implementations · 10 Dec 2020 · Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen, Wei Bao, Amir Rezaei Balef, Bing B. Zhou, Albert Y. Zomaya

In this work, we propose DONE, a distributed approximate Newton-type algorithm with fast convergence rate for communication-efficient federated edge learning.

Edge-computing
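An approximate Newton-type method of this kind computes the Newton direction iteratively instead of inverting the Hessian. A minimal sketch of that idea on a toy quadratic, using classic Richardson iteration (the function name, step size, and iteration count are illustrative choices of mine, not the paper's implementation):

```python
import numpy as np

def richardson_newton_direction(H, g, alpha=0.1, iters=100):
    """Approximate the Newton direction d solving H d = -g
    by fixed-point (Richardson) iteration, avoiding a matrix inverse."""
    d = np.zeros_like(g)
    for _ in range(iters):
        # Each step shrinks the residual H d + g toward zero
        # (converges when alpha < 2 / lambda_max(H)).
        d = d - alpha * (H @ d + g)
    return d

# Toy strongly convex problem: f(w) = 0.5 w^T H w + b^T w
H = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
w = np.zeros(2)
g = H @ w + b                          # gradient at the current iterate
d = richardson_newton_direction(H, g)  # approximate Newton direction
w_new = w + d                          # one Newton step
```

On a quadratic, a single exact Newton step reaches the minimizer, so the approximate step lands very close to it; in the federated setting each client would run such an iteration locally and the server would aggregate the resulting directions.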

Personalized Federated Learning with Moreau Envelopes

2 code implementations · NeurIPS 2020 · Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen

Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data.

Meta-Learning, Personalized Federated Learning +2
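The Moreau-envelope construction behind this personalization scheme can be summarized in two lines. This is the standard formulation from the personalized-FL literature, written in generic notation as a sketch rather than the paper's exact statement:

```latex
% Each client i keeps a personalized model \theta_i that is pulled
% toward the global model w by an l2 proximal term; \lambda > 0
% controls the strength of the coupling.
F_i(w) = \min_{\theta_i}
  \left\{ f_i(\theta_i) + \frac{\lambda}{2}\,\|\theta_i - w\|^2 \right\},
\qquad
\min_{w}\; \frac{1}{N} \sum_{i=1}^{N} F_i(w)
```

Small $\lambda$ lets each client's $\theta_i$ stay close to its purely local optimum; large $\lambda$ forces the personalized models toward the shared global model $w$.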

Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation

4 code implementations · 29 Oct 2019 · Canh T. Dinh, Nguyen H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Y. Zomaya, Vincent Gramoli

There is an increasing interest in a fast-growing machine learning technique called Federated Learning, in which the model training is distributed over mobile user equipments (UEs), exploiting UEs' local computation and training data.

Federated Learning, Privacy Preserving +1
