no code implementations • 12 Jan 2024 • Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi
Foundation models (FMs) adapt well to specific domains or tasks through fine-tuning, and federated learning (FL) offers a path to privacy-preserving fine-tuning of FMs on local, on-device data.
no code implementations • ICCV 2023 • Yae Jee Cho, Gauri Joshi, Dimitrios Dimitriadis
For both cross-device and cross-silo settings, we show that FedLabel outperforms other semi-supervised FL baselines by $8$-$24\%$, and even outperforms standard fully supervised FL baselines ($100\%$ labeled data) with only $5$-$20\%$ of labeled data.
no code implementations • 6 Feb 2023 • Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
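As a generic illustration of the FedAvg family mentioned above (not the exact algorithm of any one paper here), a minimal sketch on a toy least-squares objective might look as follows; the function names and hyperparameters are illustrative choices, not from the papers:

```python
import numpy as np

def local_sgd(w, data, lr=0.3, steps=5):
    """A few local gradient steps on one client's data (least-squares loss)."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def fedavg(clients, rounds=30):
    """FedAvg loop: broadcast global model, train locally, average by data size."""
    w = np.zeros(clients[0][0].shape[1])
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), c) for c in clients]
        # server aggregates client models weighted by local dataset size
        w = sum(len(c[1]) / total * u for c, u in zip(clients, updates))
    return w
```

With homogeneous, noiseless clients this recovers the shared minimizer; the heterogeneity that the paper studies is precisely what makes the realistic setting harder.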
no code implementations • 30 May 2022 • Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, Gauri Joshi
We provide convergence guarantees for MaxFL and show that MaxFL achieves a $22$-$40\%$ and $18$-$50\%$ test accuracy improvement for the training clients and unseen clients respectively, compared to a wide range of FL modeling approaches, including those that tackle data heterogeneity, aim to incentivize clients, and learn personalized or fair models.
no code implementations • 27 Apr 2022 • Yae Jee Cho, Andre Manoel, Gauri Joshi, Robert Sim, Dimitrios Dimitriadis
In this work, we propose a novel ensemble knowledge transfer method named Fed-ET, in which small models of differing architectures are trained on clients and then used to train a larger model at the server.
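The core idea of ensemble knowledge transfer can be sketched generically: clients train small models on private data, and the server distills their averaged predictions on a public unlabeled set into its own model. This is an assumption-laden simplification — Fed-ET's actual weighting and regularization are not reproduced, and the server model is kept the same shape as the client models for brevity, whereas Fed-ET trains a larger one:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_linear(X, y_soft, lr=0.5, steps=200):
    """Gradient descent on cross-entropy against (soft or hard) label targets."""
    W = np.zeros((X.shape[1], y_soft.shape[1]))
    for _ in range(steps):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - y_soft) / len(X)
    return W

def ensemble_transfer(client_data, X_public, n_classes=2):
    """Clients train small models privately; the server distills their
    averaged soft labels on a public transfer set into the server model."""
    client_models = [train_linear(X, np.eye(n_classes)[y]) for X, y in client_data]
    soft = np.mean([softmax(X_public @ W) for W in client_models], axis=0)
    return train_linear(X_public, soft)
```

Distilling soft labels rather than averaging weights is what lets client and server architectures differ.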
no code implementations • 16 Sep 2021 • Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi
Personalized federated learning (FL) aims to train models that perform well for individual clients whose data and system capabilities are highly heterogeneous.
no code implementations • 14 Dec 2020 • Yae Jee Cho, Samarth Gupta, Gauri Joshi, Osman Yağan
Due to communication constraints and intermittent client availability in federated learning, only a subset of clients can participate in each training round.
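One round under partial participation can be sketched as: sample a subset of the available clients, collect their local updates, and average them by dataset size. Uniform sampling is used here purely as a placeholder — the selection strategy itself is what the paper studies:

```python
import random

def run_round(global_w, clients, m, local_update, rng):
    """One FL round with partial participation: sample m of the clients,
    run local training on each, and return the size-weighted average."""
    selected = rng.sample(range(len(clients)), m)   # placeholder: uniform sampling
    sizes = [len(clients[i][1]) for i in selected]
    updates = [local_update(global_w, clients[i]) for i in selected]
    total = sum(sizes)
    return [sum(s / total * u[j] for s, u in zip(sizes, updates))
            for j in range(len(global_w))]
```

Because only the sampled subset contributes each round, a biased sampler changes which clients the aggregate drifts toward, which is why selection strategies affect both convergence speed and fairness.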
no code implementations • 3 Oct 2020 • Yae Jee Cho, Jianyu Wang, Gauri Joshi
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.