no code implementations • 25 Apr 2024 • Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury
The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots.
no code implementations • 11 Apr 2024 • Faisal Ahmed, Myungjin Lee, Suresh Subramaniam, Motoharu Matsuura, Hiroshi Hasegawa, Shih-Chun Lin
Federated Learning (FL) has garnered significant interest recently due to its potential as an effective solution for tackling many challenges in diverse application scenarios, for example, data privacy in network edge traffic classification.
no code implementations • 26 Mar 2024 • Gustav A. Baumgart, Jaemin Shin, Ali Payani, Myungjin Lee, Ramana Rao Kompella
Algorithms such as FedDyn and SCAFFOLD are, however, more prone to catastrophic failures without the support of additional techniques such as gradient clipping.
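Gradient clipping of this kind is typically applied on the client side before a local update is contributed to aggregation. A minimal sketch in PyTorch, with a hypothetical `clip_norm` threshold that is not taken from the paper:

```python
import torch

def clipped_local_step(model, loss, optimizer, clip_norm=1.0):
    """One local client step with global-norm gradient clipping.

    `clip_norm` is a hypothetical hyperparameter; the paper does not
    prescribe a specific value.
    """
    optimizer.zero_grad()
    loss.backward()
    # Rescale gradients so their global L2 norm does not exceed clip_norm,
    # limiting the divergence that can cause catastrophic failures.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_norm)
    optimizer.step()
```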
no code implementations • 17 May 2023 • Ganghua Wang, Ali Payani, Myungjin Lee, Ramana Kompella
While many mitigation strategies have been proposed for centralized learning, most of them are not directly applicable in federated learning, where data is privately stored on multiple clients.
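For context, the core constraint is that raw data never leaves a client; only model updates are exchanged and aggregated. A minimal FedAvg-style sketch of that setting (illustrative only, not the paper's mitigation method):

```python
def fedavg_round(global_weights, client_datasets, local_train):
    """One FedAvg round: each client trains locally on its private data,
    and only the resulting parameters are averaged on the server.

    `local_train(w, data)` is a placeholder for a client's local update;
    the raw `data` never leaves the client.
    """
    updates = [local_train(global_weights, data) for data in client_datasets]
    sizes = [len(data) for data in client_datasets]
    total = sum(sizes)
    # Weighted average of client parameters, proportional to dataset size.
    return sum((s / total) * u for s, u in zip(sizes, updates))
```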
1 code implementation • 9 May 2023 • Harshit Daga, Jaemin Shin, Dhruv Garg, Ada Gavrilovska, Myungjin Lee, Ramana Rao Kompella
We present Flame, a new system that provides flexible topology configuration of distributed FL applications around the specifics of a particular deployment context and is easily extensible to support new FL architectures.
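To illustrate what topology-as-configuration can look like, here is a hypothetical declarative description of roles and channels for a hierarchical FL deployment, written as a Python dict; this is an assumed sketch, not Flame's actual schema or API:

```python
# Hypothetical topology description (illustrative; not Flame's actual schema).
# Roles are FL participants; channels connect roles and define how updates flow.
topology = {
    "roles": ["trainer", "aggregator", "global-aggregator"],
    "channels": [
        # Trainers in each region report to a regional aggregator...
        {"name": "regional", "pair": ("trainer", "aggregator"), "group_by": "region"},
        # ...and regional aggregators roll up to a single global aggregator.
        {"name": "global", "pair": ("aggregator", "global-aggregator")},
    ],
}
```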
no code implementations • 26 Jan 2023 • Alind Khare, Animesh Agrawal, Myungjin Lee, Alexey Tumanov
We propose SuperFed, an architectural framework that incurs $O(1)$ cost to co-train a large family of models in a federated fashion by leveraging weight-shared learning.
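Weight sharing of this kind is commonly realized by slicing a "supernet" so that smaller family members reuse a leading portion of the largest model's parameters; a rough sketch under that assumption (not SuperFed's exact formulation):

```python
import numpy as np

def extract_subnet(super_weight, width_fraction):
    """Slice a shared weight matrix so a smaller family member reuses the
    leading rows/columns of the largest model's parameters.

    NumPy slicing returns a view of the shared array, so in-place updates
    to the sub-model also update the supernet; every family member trains
    the same underlying tensor, keeping co-training cost roughly constant
    in the size of the family.
    """
    out_dim, in_dim = super_weight.shape
    k_out = max(1, int(out_dim * width_fraction))
    k_in = max(1, int(in_dim * width_fraction))
    return super_weight[:k_out, :k_in]
```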
1 code implementation • 15 Jan 2023 • Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu
Instead of having every sample go through all DNN layers during prediction, EENet learns an early-exit scheduler that can intelligently terminate inference early for predictions in which the model already has high confidence.
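A common way to realize early exit is to attach classifiers at intermediate layers and stop once a confidence threshold is met; a minimal sketch under that assumption (the fixed thresholds and exit heads below are illustrative, whereas EENet learns the exit scheduler):

```python
import torch.nn.functional as F

def early_exit_predict(blocks, exit_heads, x, thresholds):
    """Run DNN blocks sequentially; after each block, an exit head makes a
    prediction, and inference stops once its confidence clears the threshold.

    `blocks`, `exit_heads`, and `thresholds` are illustrative placeholders;
    EENet instead learns a scheduler that decides when to exit.
    """
    h = x
    for block, head, tau in zip(blocks, exit_heads, thresholds):
        h = block(h)
        probs = F.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= tau:   # confident enough: exit early
            return pred, conf
    return pred, conf            # fall through to the final exit
```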
2 code implementations • 2 Jul 2017 • Morteza Kheirkhah, Myungjin Lee
In recent years, several multipath data transport mechanisms, such as MPTCP and XMP, have been introduced to effectively exploit the path diversity of data center networks (DCNs).
Networking and Internet Architecture
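Multipath transports like MPTCP spread one connection over several subflows, each of which is typically hashed onto a different path by ECMP switches; a simplified sketch of that path-selection idea (illustrative only, not the MPTCP or XMP implementation):

```python
import hashlib

def ecmp_path(five_tuple, num_paths):
    """Hash a subflow's 5-tuple onto one of the equal-cost paths, as ECMP
    switches commonly do; different source ports land on different paths."""
    digest = hashlib.md5("|".join(map(str, five_tuple)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# An MPTCP-style connection opens several subflows with distinct source ports,
# so its traffic spreads across the DCN's parallel paths.
subflows = [("10.0.0.1", 5000 + i, "10.0.1.1", 80, "tcp") for i in range(4)]
print([ecmp_path(sf, num_paths=8) for sf in subflows])
```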