Search Results for author: Minchan Jeong

Found 7 papers, 5 with code

Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning

1 code implementation · 13 Feb 2024 · Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim

We argue that when we extract knowledge from source tasks by training source prompts, we need to account for the correlation among source tasks for better transfer to target tasks.

Transfer Learning
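The snippet above mentions exploiting correlation among source tasks when transferring prompts. As a hedged toy sketch (not the paper's actual method; all names and shapes here are illustrative), one simple way to capture that correlation is to fit a Gaussian over the trained source-prompt vectors and initialize the target prompt from it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each of 5 source tasks yields a trained dim-8 prompt vector.
source_prompts = rng.normal(size=(5, 8))

# Fit a Gaussian over the source prompts so the prior captures the
# correlation structure across tasks, then draw the target prompt's
# initialization from that prior instead of copying a single source.
mu = source_prompts.mean(axis=0)
cov = np.cov(source_prompts, rowvar=False) + 1e-6 * np.eye(8)  # jitter for PSD
target_init = rng.multivariate_normal(mu, cov)
print(target_init.shape)  # (8,)
```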

FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning

no code implementations · 24 Aug 2023 · Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun

FedSOL is designed to identify gradients of local objectives that are inherently orthogonal to directions affecting the proximal objective.

Federated Learning
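The FedSOL snippet describes finding local-objective gradients orthogonal to the proximal objective's direction. A minimal NumPy sketch of that idea (an illustrative orthogonal projection, not the paper's implementation):

```python
import numpy as np

def orthogonalize(local_grad, proximal_grad, eps=1e-12):
    """Remove from the local-objective gradient its component along the
    proximal-objective gradient, so the local update does not move the
    model in a direction that affects the proximal objective."""
    p = proximal_grad / (np.linalg.norm(proximal_grad) + eps)
    return local_grad - np.dot(local_grad, p) * p

g_local = np.array([1.0, 1.0])
g_prox = np.array([1.0, 0.0])
g = orthogonalize(g_local, g_prox)
print(g)  # component along g_prox removed -> [0., 1.]
```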

Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning

no code implementations · 3 Mar 2023 · Jihwan Oh, Joonkee Kim, Minchan Jeong, Se-Young Yun

In this paper, we present a risk-based exploration that leads to collaboratively optimistic behavior by shifting the sampling region of the estimated return distribution.

Distributional Reinforcement Learning · Multi-agent Reinforcement Learning +2
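In a distributional-RL setting, "shifting the sampling region" can be pictured as averaging only an upper portion of a quantile critic's output to get risk-seeking, optimistic value estimates. A hypothetical sketch (function name and `shift` parameter are my own, not the paper's API):

```python
import numpy as np

rng = np.random.default_rng(1)

def optimistic_value(quantiles, shift=0.25):
    """Estimate an action value from a set of return quantiles, but
    average only the upper (1 - shift) fraction, shifting the sampling
    region toward optimistic outcomes to encourage exploration."""
    q = np.sort(quantiles)
    k = int(len(q) * shift)
    return q[k:].mean()

quantiles = rng.normal(loc=1.0, scale=0.5, size=32)
print(optimistic_value(quantiles, shift=0.0))   # plain mean over all quantiles
print(optimistic_value(quantiles, shift=0.25))  # upper-tail mean, more optimistic
```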

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective

1 code implementation · 3 Feb 2023 · Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun

Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs).

Knowledge Distillation

Preservation of the Global Knowledge by Not-True Distillation in Federated Learning

2 code implementations · 6 Jun 2021 · Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun

In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models.

Continual Learning · Federated Learning +1
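One plausible reading of "not-true distillation" is distilling the aggregated global model's predictions into the local model over the not-true classes only, i.e. masking the ground-truth class before the softmax. A hedged NumPy sketch under that assumption (names and masking details are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def not_true_distillation_loss(student_logits, teacher_logits, labels, tau=1.0):
    """KL(teacher || student) over the not-true classes only: the
    ground-truth class logit is masked out before the softmax, so the
    local model is pulled toward the global model's beliefs about the
    classes it is not currently fitting, preserving global knowledge."""
    n, c = student_logits.shape
    mask = np.zeros((n, c), dtype=bool)
    mask[np.arange(n), labels] = True
    s = softmax(np.where(mask, -1e9, student_logits) / tau)
    t = softmax(np.where(mask, -1e9, teacher_logits) / tau)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=-1)
    return float(kl.mean()) * tau ** 2

rng = np.random.default_rng(2)
logits = rng.normal(size=(4, 6))
labels = np.array([0, 1, 2, 3])
print(not_true_distillation_loss(logits, logits, labels))  # identical models -> ~0
```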

Robust Streaming PCA

1 code implementation · 8 Feb 2019 · Daniel Bienstock, Minchan Jeong, Apurv Shukla, Se-Young Yun

We consider streaming principal component analysis when the stochastic data-generating model is subject to perturbations.
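For context, a classic baseline for streaming PCA is Oja's rule, which tracks the leading eigendirection from one sample at a time. A minimal sketch with small perturbation noise added to each sample (this is the standard Oja update, not the paper's robust variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_step(w, x, lr):
    """One Oja's-rule update: move w toward the leading eigendirection
    of E[x x^T], then renormalize to unit length."""
    w = w + lr * x * (x @ w)
    return w / np.linalg.norm(w)

# Synthetic stream whose leading principal direction is e1, with a
# small perturbation added to every sample.
d = 5
true_dir = np.eye(d)[0]
w = rng.normal(size=d)
w /= np.linalg.norm(w)
for t in range(1, 5001):
    x = 3.0 * rng.normal() * true_dir + 0.1 * rng.normal(size=d)
    w = oja_step(w, x, lr=1.0 / t)
print(abs(w @ true_dir))  # alignment with the true direction, close to 1
```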
