Search Results for author: Dong-Jun Han

Found 18 papers, 2 papers with code

Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis

no code implementations 9 Apr 2024 Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton

Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}(\frac{1}{\sum_{i=1}^{N} \frac{1}{t_{i}}})$, where $t_{i}$ denotes the per-iteration time consumption at agent $i$, and $t_{\max}$ is the largest among them.
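
Using only the notation from this abstract, a one-line bound shows why the asynchronous complexity is never worse than the synchronous one:

```latex
% Each agent satisfies t_i <= t_max, hence 1/t_i >= 1/t_max.
\[
\sum_{i=1}^{N} \frac{1}{t_i} \;\ge\; \frac{N}{t_{\max}}
\quad\Longrightarrow\quad
\frac{1}{\sum_{i=1}^{N} \frac{1}{t_i}} \;\le\; \frac{t_{\max}}{N},
\]
% with equality only when all agents are equally fast (t_i = t_max for every i).
```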

Consistency-Guided Temperature Scaling Using Style and Content Information for Out-of-Domain Calibration

1 code implementation 22 Feb 2024 Wonjeong Choi, Jungwuk Park, Dong-Jun Han, YoungHyun Park, Jaekyun Moon

In this paper, we propose consistency-guided temperature scaling (CTS), a new temperature scaling strategy that can significantly enhance the OOD calibration performance by providing mutual supervision among data samples in the source domains.
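
The abstract does not spell out the CTS objective; as background only, plain temperature scaling fits a single scalar $T$ on held-out logits by minimizing the negative log-likelihood. A minimal sketch follows, with hypothetical tensor names (`val_logits`, `val_labels`) and not the authors' code:

```python
# Minimal sketch of standard temperature scaling (background for CTS, not the CTS method itself).
import torch

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)                      # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()                                       # calibrated temperature T

# At test time, predictive probabilities become softmax(logits / T).
```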

Decentralized Sporadic Federated Learning: A Unified Methodology with Generalized Convergence Guarantees

1 code implementation 5 Feb 2024 Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton

Decentralized Federated Learning (DFL) has received significant recent research attention, capturing settings where both model updates and model aggregations -- the two key FL processes -- are conducted by the clients.

Federated Learning
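
As a toy illustration of the two client-side processes the abstract mentions, the sketch below has each client sporadically perform a local update and/or a neighbor-averaging aggregation in one decentralized round. The mixing rule, participation probabilities, and function names are assumptions for illustration, not the paper's methodology:

```python
# Toy sketch of one decentralized round where clients both update and aggregate.
import random
import numpy as np

def decentralized_round(models, neighbors, grads, p_update=0.5, p_aggregate=0.5, lr=0.1):
    new_models = {}
    for i, w in models.items():
        w = w.copy()
        if random.random() < p_update:                      # sporadic local SGD step
            w -= lr * grads[i](w)
        if random.random() < p_aggregate:                   # sporadic aggregation with neighbors
            peers = [models[j] for j in neighbors[i]] + [w]
            w = np.mean(peers, axis=0)
        new_models[i] = w
    return new_models
```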

Communication-Efficient Multimodal Federated Learning: Joint Modality and Client Selection

no code implementations 30 Jan 2024 Liangqi Yuan, Dong-Jun Han, Su Wang, Devesh Upadhyay, Christopher G. Brinton

Multimodal federated learning (FL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities.

Federated Learning

Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation

no code implementations 15 Jan 2024 Yun-Wei Chu, Dong-Jun Han, Christopher G. Brinton

Federated learning (FL) is a promising approach for solving multilingual tasks, potentially enabling clients with their own language-specific data to collaboratively construct a high-quality neural machine translation (NMT) model.

Federated Learning · Machine Translation +3

Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading

no code implementations 23 Dec 2023 Dong-Jun Han, Seyyedali Hosseinalipour, David J. Love, Mung Chiang, Christopher G. Brinton

While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructures, preventing them from accessing the associated data-driven services.

Federated Learning · Management

Submodel Partitioning in Hierarchical Federated Learning: Algorithm Design and Convergence Analysis

no code implementations 27 Oct 2023 Wenzhi Fang, Dong-Jun Han, Christopher G. Brinton

Hierarchical federated learning (HFL) has demonstrated promising scalability advantages over the traditional "star-topology" architecture-based federated learning (FL).

Federated Learning
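
For context on the architecture this abstract contrasts with the star topology, the sketch below shows one two-tier aggregation round (clients averaged per edge server, edge models averaged at the cloud). The paper's submodel-partitioning scheme is not reproduced here; the grouping and names are assumptions:

```python
# Minimal sketch of one hierarchical FL aggregation round (client -> edge server -> cloud).
import numpy as np

def hierarchical_round(client_models, edge_groups):
    # client_models: dict client_id -> weight vector; edge_groups: dict edge_id -> [client_ids]
    edge_models = {
        e: np.mean([client_models[c] for c in clients], axis=0)   # edge-level average
        for e, clients in edge_groups.items()
    }
    global_model = np.mean(list(edge_models.values()), axis=0)    # cloud-level average
    return edge_models, global_model
```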

FedMFS: Federated Multimodal Fusion Learning with Selective Modality Communication

no code implementations 10 Oct 2023 Liangqi Yuan, Dong-Jun Han, Vishnu Pandi Chellapandi, Stanislaw H. Żak, Christopher G. Brinton

Multimodal federated learning (FL) aims to enrich model training in FL settings where devices are collecting measurements across multiple modalities (e.g., sensors measuring pressure, motion, and other types of data).

Federated Learning

Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization

no code implementations 8 Jun 2023 Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon

In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference.

Domain Generalization

SplitGP: Achieving Both Generalization and Personalization in Federated Learning

no code implementations 16 Dec 2022 Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon

A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently.

Federated Learning

Sageflow: Robust Federated Learning against Both Stragglers and Adversaries

no code implementations NeurIPS 2021 Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon

While federated learning (FL) allows efficient model training with local data at edge devices, among the major issues still to be resolved are slow devices, known as stragglers, and malicious attacks launched by adversaries.

Federated Learning

Accelerating Federated Split Learning via Local-Loss-Based Training

no code implementations 29 Sep 2021 Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee, Jaekyun Moon

Federated learning (FL) operates based on model exchanges between the server and the clients, and suffers from significant communication and client-side computation burdens.

Federated Learning

Few-Round Learning for Federated Learning

no code implementations NeurIPS 2021 YoungHyun Park, Dong-Jun Han, Do-Yeon Kim, Jun Seo, Jaekyun Moon

A central issue that may limit widespread adoption of FL is the significant communication resources required to exchange updated model parameters between the server and individual clients over many communication rounds.

Federated Learning · Few-Shot Learning

FedMes: Speeding Up Federated Learning with Multiple Edge Servers

no code implementations 1 Jan 2021 Dong-Jun Han, Minseok Choi, Jungwuk Park, Jaekyun Moon

Our key idea is to utilize the devices located in the overlapping coverage areas of multiple edge servers; in the model-downloading stage, these devices receive models from several edge servers, average the received models, and then update the averaged model with their local data.

Federated Learning
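
The per-device step described in this abstract admits a very small sketch: average the models downloaded from all covering edge servers, then train locally. Weighting and the local-update routine (`local_update`) are hypothetical placeholders, not details from the paper:

```python
# Sketch of the FedMes key idea for a device in an overlapping coverage area.
import numpy as np

def overlap_device_step(downloaded_models, local_update, num_local_steps=1):
    w = np.mean(downloaded_models, axis=0)       # average the models from all covering edge servers
    for _ in range(num_local_steps):
        w = local_update(w)                      # e.g., one epoch of SGD on the device's local data
    return w                                     # uploaded back to the covering edge servers
```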

Sself: Robust Federated Learning against Stragglers and Adversaries

no code implementations 1 Jan 2021 Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon

While federated learning allows efficient model training with local data at edge devices, two major issues that need to be resolved are slow devices, known as stragglers, and malicious attacks launched by adversaries.

Data Poisoning · Federated Learning

Communication-Computation Efficient Secure Aggregation for Federated Learning

no code implementations 10 Dec 2020 Beongjun Choi, Jy-yong Sohn, Dong-Jun Han, Jaekyun Moon

Through extensive real-world experiments, we demonstrate that our scheme, using only $20 \sim 30\%$ of the resources required in the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.

Federated Learning · Privacy Preserving

Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks

no code implementations NeurIPS 2020 Jy-yong Sohn, Dong-Jun Han, Beongjun Choi, Jaekyun Moon

Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD.
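
As background on the communication-efficient scheme this abstract refers to, the sketch below shows SignSGD with majority vote; the paper's Election Coding defense itself is not implemented here, and the names are illustrative:

```python
# Background sketch of SignSGD with coordinate-wise majority vote.
import numpy as np

def signsgd_majority_step(w, worker_grads, lr=0.01):
    signs = [np.sign(g) for g in worker_grads]    # each worker transmits only gradient signs
    vote = np.sign(np.sum(signs, axis=0))         # server takes the coordinate-wise majority vote
    return w - lr * vote                          # update the model along the voted sign direction
```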
