Search Results for author: Myungjin Lee

Found 8 papers, 3 papers with code

Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services

no code implementations · 25 Apr 2024 · Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury

The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots.

FedAuxHMTL: Federated Auxiliary Hard-Parameter Sharing Multi-Task Learning for Network Edge Traffic Classification

no code implementations · 11 Apr 2024 · Faisal Ahmed, Myungjin Lee, Suresh Subramaniam, Motoharu Matsuura, Hiroshi Hasegawa, Shih-Chun Lin

Federated Learning (FL) has garnered significant interest recently due to its potential as an effective solution to challenges in diverse application scenarios, for example, data privacy in network edge traffic classification.

Federated Learning · Multi-Task Learning · +2

Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study

no code implementations · 26 Mar 2024 · Gustav A. Baumgart, Jaemin Shin, Ali Payani, Myungjin Lee, Ramana Rao Kompella

Algorithms such as FedDyn and SCAFFOLD are more prone to catastrophic failures without the support of additional techniques such as gradient clipping.

Federated Learning
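The failure mode noted above is commonly countered by bounding the norm of each client's update before aggregation. A minimal NumPy sketch of generic L2 gradient clipping (illustrative only; `clip_update` and `max_norm` are hypothetical names, not the exact procedure evaluated in the paper):

```python
import numpy as np

def clip_update(update, max_norm=1.0):
    """Rescale a client update so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    if norm > max_norm:
        update = update * (max_norm / norm)
    return update

# A divergent update (e.g., from client drift) is rescaled to the cap:
huge = np.array([30.0, 40.0])            # L2 norm = 50
clipped = clip_update(huge, max_norm=1.0)
print(np.linalg.norm(clipped))           # 1.0

# A well-behaved update passes through unchanged:
small = clip_update(np.array([0.3, 0.4]), max_norm=1.0)
```

Capping the norm prevents a single unstable client from dominating the aggregated model, which is why it helps methods prone to divergence.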

Mitigating Group Bias in Federated Learning: Beyond Local Fairness

no code implementations · 17 May 2023 · Ganghua Wang, Ali Payani, Myungjin Lee, Ramana Kompella

While many mitigation strategies have been proposed in centralized learning, many of these methods are not directly applicable in federated learning, where data is privately stored on multiple clients.

Fairness · Federated Learning

Flame: Simplifying Topology Extension in Federated Learning

1 code implementation · 9 May 2023 · Harshit Daga, Jaemin Shin, Dhruv Garg, Ada Gavrilovska, Myungjin Lee, Ramana Rao Kompella

We present Flame, a new system that provides flexibility in configuring the topology of distributed FL applications around the specifics of a particular deployment context, and is easily extensible to support new FL architectures.

Federated Learning

SuperFed: Weight Shared Federated Learning

no code implementations · 26 Jan 2023 · Alind Khare, Animesh Agrawal, Myungjin Lee, Alexey Tumanov

We propose SuperFed, an architectural framework that incurs $O(1)$ cost to co-train a large family of models in a federated fashion by leveraging weight-shared learning.

Federated Learning · Privacy Preserving
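The $O(1)$ cost comes from weight sharing: only one "supernet" parameter set is stored, and smaller family members are views into it. A minimal NumPy sketch of this idea (illustrative only; `subnet` and the leading-block slicing scheme are assumptions, not SuperFed's actual design):

```python
import numpy as np

# One supernet weight matrix is stored for the whole model family.
rng = np.random.default_rng(0)
supernet_w = rng.standard_normal((8, 8))   # largest model's weights

def subnet(width):
    """Return a sub-model's weights as a view of the supernet's
    top-left width x width block (basic slicing shares memory)."""
    return supernet_w[:width, :width]

small = subnet(2)
medium = subnet(4)

# Training the small model updates the shared supernet in place,
# so the whole family is co-trained without storing extra copies.
small += 1.0
print(np.allclose(medium[:2, :2], small))  # True: same underlying memory
```

Because every sub-model aliases the same tensor, storage and communication stay constant in the number of family members, which is the property the snippet above refers to.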

Adaptive Deep Neural Network Inference Optimization with EENet

1 code implementation · 15 Jan 2023 · Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu

Instead of having every sample pass through all DNN layers during prediction, EENet learns an early-exit scheduler that can intelligently terminate inference early for predictions in which the model has high confidence.

Inference Optimization · Scheduling · +1
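A generic early-exit loop can illustrate the mechanism: intermediate classifier heads are tried in order, and inference stops at the first head whose confidence clears a threshold. This is a hedged sketch with fixed thresholds and toy heads (`early_exit_predict`, `heads`, and the thresholds are hypothetical); EENet's learned scheduler is more sophisticated:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, exit_heads, thresholds):
    """Try exit heads in order; return (label, exit_depth) at the first
    head whose max softmax probability clears its threshold. The final
    head always exits, so every sample gets a prediction."""
    for depth, (head, tau) in enumerate(zip(exit_heads, thresholds), start=1):
        probs = softmax(head(x))
        if probs.max() >= tau or depth == len(exit_heads):
            return int(probs.argmax()), depth

# Toy heads: the deeper "layer" produces sharper logits (hypothetical).
heads = [lambda x: np.array([0.1, 0.2]),   # shallow head: low confidence
         lambda x: np.array([0.0, 4.0])]   # deep head: high confidence
label, depth = early_exit_predict(None, heads, thresholds=[0.9, 0.9])
print(label, depth)  # exits at the second head
```

Here the shallow head's confidence (about 0.52) misses the 0.9 threshold, so the sample continues to the deeper head; easy samples would exit at the first head and skip the remaining layers entirely, which is where the inference savings come from.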

AMP: A Better Multipath TCP for Data Center Networks

2 code implementations · 2 Jul 2017 · Morteza Kheirkhah, Myungjin Lee

In recent years several multipath data transport mechanisms, such as MPTCP and XMP, have been introduced to effectively exploit the path diversity of data center networks (DCNs).

Networking and Internet Architecture
