Search Results for author: Jaehoon Oh

Found 17 papers, 7 papers with code

FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning

no code implementations 22 Nov 2023 Seongyoon Kim, Gihun Lee, Jaehoon Oh, Se-Young Yun

Additionally, we observe that, as data heterogeneity increases, the gap between the (higher) feature norms of observed classes obtained from local models and the feature norms of unobserved classes widens, in contrast to the behavior of classifier weight norms.

Federated Learning
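
A minimal sketch of the feature-normalization idea suggested by the FedFN snippet above: L2-normalize the penultimate features before a linear classifier so that feature norms cannot diverge between observed and unobserved classes. The backbone, scale factor, and layer placement here are assumptions for illustration, not details taken from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class NormalizedHeadModel(nn.Module):
    """Backbone followed by feature normalization and a linear classifier.

    Hypothetical sketch: the backbone and the scale factor are placeholders,
    not the architecture used in FedFN."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, scale: float = 10.0):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, num_classes, bias=False)
        self.scale = scale

    def forward(self, x):
        feat = self.backbone(x)              # (batch, feat_dim)
        feat = F.normalize(feat, dim=1)      # unit-norm features for every class
        return self.scale * self.head(feat)  # logits computed from normalized features
```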

Cross-Modal Retrieval Meets Inference: Improving Zero-Shot Classification with Cross-Modal Retrieval

no code implementations 29 Aug 2023 Seongha Eom, Namgyu Ho, Jaehoon Oh, Se-Young Yun

Given a query image, we harness the power of CLIP's cross-modal representations to retrieve relevant textual information from an external image-text pair dataset.

Cross-Modal Retrieval Image Classification +3
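
The retrieval step described above can be illustrated with a small sketch: embed the query image with CLIP's image encoder, then rank captions from an external image-text dataset by cosine similarity in the shared embedding space. The function below assumes the embeddings are precomputed; it is not code from the paper.

```python
import numpy as np

def retrieve_captions(query_image_embedding: np.ndarray,
                      caption_embeddings: np.ndarray,
                      captions: list[str],
                      k: int = 5) -> list[str]:
    """Return the k captions whose CLIP text embeddings are closest (cosine
    similarity) to the CLIP embedding of the query image. Illustrative only."""
    q = query_image_embedding / np.linalg.norm(query_image_embedding)
    c = caption_embeddings / np.linalg.norm(caption_embeddings, axis=1, keepdims=True)
    scores = c @ q                     # cosine similarity against every caption
    top = np.argsort(-scores)[:k]      # indices of the k most similar captions
    return [captions[i] for i in top]
```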

FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning

no code implementations 24 Aug 2023 Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun

FedSOL is designed to identify gradients of local objectives that are inherently orthogonal to directions affecting the proximal objective.

Federated Learning
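
One plausible reading of the sentence above (a sketch only, not FedSOL's actual update rule) is a gradient projection: remove from the local-objective gradient its component along the proximal-objective gradient, so the local step stays orthogonal to directions that change the proximal term.

```python
import torch

def orthogonalize_local_gradient(g_local: torch.Tensor, g_prox: torch.Tensor) -> torch.Tensor:
    """Project the local-objective gradient onto the subspace orthogonal to the
    proximal-objective gradient. Both arguments are flattened 1-D gradient
    vectors. Hypothetical illustration, not the paper's exact procedure."""
    denom = torch.dot(g_prox, g_prox).clamp_min(1e-12)  # avoid division by zero
    return g_local - (torch.dot(g_local, g_prox) / denom) * g_prox
```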

Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks

1 code implementation 18 Oct 2022 Jaehoon Oh, Jongwoo Ko, Se-Young Yun

Translation has played a crucial role in improving the performance on multilingual tasks: (1) to generate the target language data from the source language data for training and (2) to generate the source language data from the target language data for inference.

Sentence Sentence Classification +1
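
The two roles of translation listed in the abstract, (1) translate-train and (2) translate-test, can be sketched as simple data-preparation helpers. `translate` is a hypothetical machine-translation function, not an API from the paper.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Hypothetical MT helper; replace with any translation system."""
    raise NotImplementedError

def translate_train(source_examples, source_lang: str, target_lang: str):
    # (1) Training: create target-language training data from source-language data.
    return [(translate(text, source_lang, target_lang), label)
            for text, label in source_examples]

def translate_test(target_texts, target_lang: str, source_lang: str):
    # (2) Inference: map target-language inputs back into the source language
    #     so a source-language model can classify them.
    return [translate(text, target_lang, source_lang) for text in target_texts]
```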

Demystifying the Base and Novel Performances for Few-shot Class-incremental Learning

no code implementations 18 Jun 2022 Jaehoon Oh, Se-Young Yun

Few-shot class-incremental learning (FSCIL) has addressed challenging real-world scenarios where unseen novel classes continually arrive with few samples.

Few-Shot Class-Incremental Learning Incremental Learning

How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation

no code implementations 13 May 2022 Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun

Next, we show that data augmentation cannot guarantee few-shot performance improvement and investigate the effectiveness of data augmentation based on the intensity of augmentation.

Data Augmentation Few-Shot Learning +1

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning

no code implementations 11 May 2022 Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

Cross-domain few-shot learning (CD-FSL), where there are few target samples under extreme differences between source and target domains, has recently attracted huge attention.

cross-domain few-shot learning Transfer Learning
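
A rough sketch of the "re-randomization before fine-tuning" idea named in the title: reset the parameters of selected upper layers of a source-pretrained backbone before fine-tuning on the few target-domain samples. Which layers to reset is an assumption here, not a detail from the paper.

```python
import torch.nn as nn

def re_randomize(module: nn.Module) -> None:
    """Re-initialize every layer inside `module` before target-domain fine-tuning.

    Sketch of the re-randomization step suggested by the title; the caller
    decides which part of the backbone to pass in."""
    for m in module.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
```

For example, one might call `re_randomize(backbone.layer4)` on a ResNet-style backbone before fitting the target support set; that particular choice of block is hypothetical.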

Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty

2 code implementations 1 Feb 2022 Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.

cross-domain few-shot learning

FedBABU: Toward Enhanced Representation for Federated Image Classification

1 code implementation ICLR 2022 Jaehoon Oh, Sangmook Kim, Se-Young Yun

Based on this observation, we propose a novel federated learning algorithm, coined FedBABU, which only updates the body of the model during federated training (i.e., the head is randomly initialized and never updated), and the head is fine-tuned for personalization during the evaluation process.

Classification Federated Learning +1
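
The training recipe described above can be sketched as two phases: during federated rounds only body parameters receive gradients (the head stays at its random initialization), and at evaluation the head alone is fine-tuned on local data. The attribute name `head`, the optimizer, and the learning rates below are assumptions for illustration.

```python
import torch
import torch.nn as nn

def freeze_head_for_federated_training(model: nn.Module, head_name: str = "head"):
    """During federated rounds, train only the body: the head keeps its random
    initialization and receives no gradient updates."""
    for p in getattr(model, head_name).parameters():
        p.requires_grad = False
    body_params = [p for n, p in model.named_parameters() if not n.startswith(head_name)]
    return torch.optim.SGD(body_params, lr=0.1)

def personalize_head(model: nn.Module, head_name: str = "head"):
    """At evaluation time, unfreeze and fine-tune only the head on local data."""
    head = getattr(model, head_name)
    for p in head.parameters():
        p.requires_grad = True
    return torch.optim.SGD(head.parameters(), lr=0.01)
```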

FedBABU: Towards Enhanced Representation for Federated Image Classification

3 code implementations 4 Jun 2021 Jaehoon Oh, Sangmook Kim, Se-Young Yun

Based on this observation, we propose a novel federated learning algorithm, coined FedBABU, which only updates the body of the model during federated training (i.e., the head is randomly initialized and never updated), and the head is fine-tuned for personalization during the evaluation process.

Classification Federated Learning +1

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation

1 code implementation 19 May 2021 Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logit of the teacher model.

Knowledge Distillation Learning with noisy labels
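
For reference, a minimal side-by-side of the two losses discussed above: the standard temperature-scaled KL distillation loss and the direct MSE between student and teacher logit vectors. The temperature value is an arbitrary placeholder.

```python
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T: float = 4.0):
    """Standard softened-softmax KD loss: KL divergence between the
    temperature-scaled class distributions, rescaled by T^2."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)

def kd_mse_loss(student_logits, teacher_logits):
    """Direct logit matching: mean squared error between the logit vectors."""
    return F.mse_loss(student_logits, teacher_logits)
```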

Understanding Knowledge Distillation

no code implementations 1 Jan 2021 Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

To verify this conjecture, we test an extreme logit learning model, where the KD is implemented with Mean Squared Error (MSE) between the student's logit and the teacher's logit.

Knowledge Distillation

Accurate and Fast Federated Learning via IID and Communication-Aware Grouping

no code implementations 9 Dec 2020 Jin-woo Lee, Jaehoon Oh, Yooju Shin, Jae-Gil Lee, Se-Young Yun

Federated learning has emerged as a new paradigm of collaborative machine learning; however, it has also faced several challenges such as non-independent and identically distributed (non-IID) data and high communication cost.

Federated Learning

TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture

no code implementations 6 Dec 2020 Jin-woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun, Jae-Gil Lee

Federated learning has emerged as a new paradigm of collaborative machine learning; however, many prior studies have used global aggregation along a star topology without much consideration of communication scalability or of the diurnal property that arises from clients' varying local times.

Federated Learning

BOIL: Towards Representation Change for Few-shot Learning

1 code implementation ICLR 2021 Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun

It has recently been hypothesized that representation reuse, which changes efficient representations only slightly, is the dominant factor in the performance of the meta-initialized model through MAML, in contrast to representation change, which causes a significant change in representations.

Few-Shot Learning
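
A hedged sketch of the body-only inner-loop update that BOIL's title suggests (encouraging representation change): adapt every parameter except the classifier head during the inner loop, keeping the head at its meta-initialized values. The parameter-naming convention and inner learning rate below are assumptions, not the paper's exact implementation.

```python
import torch

def boil_inner_update(model, loss, inner_lr: float = 0.5, head_prefix: str = "head"):
    """One inner-loop step that adapts only the body (feature extractor) and
    leaves the classifier head at its meta-initialized values."""
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    adapted = {}
    for name, p, g in zip(names, params, grads):
        if name.startswith(head_prefix):
            adapted[name] = p                    # head: no inner-loop update
        else:
            adapted[name] = p - inner_lr * g     # body: representation change
    return adapted
```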

SIPA: A Simple Framework for Efficient Networks

1 code implementation 24 Apr 2020 Gihun Lee, Sangmin Bae, Jaehoon Oh, Se-Young Yun

With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to make models lightweight enough to run on low-power devices.

Math

Spectrogram-channels u-net: a source separation model viewing each channel as the spectrogram of each source

no code implementations 26 Oct 2018 Jaehoon Oh, Duyeon Kim, Se-Young Yun

The proposed model can be used for not only singing voice separation but also multi-instrument separation by changing only the number of output channels.

Information Retrieval Music Information Retrieval +3
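
The "one output channel per source" design mentioned above can be sketched with just the final layer: the network emits `num_sources` spectrogram channels, so switching from singing-voice separation to multi-instrument separation only changes that count. The U-Net encoder/decoder is omitted here, and this layer is an illustrative placeholder rather than the paper's exact architecture.

```python
import torch.nn as nn

class SpectrogramChannelsHead(nn.Module):
    """Final layer sketch: map decoder features to one output channel per source,
    each interpreted as the spectrogram estimate of that source."""
    def __init__(self, in_channels: int, num_sources: int):
        super().__init__()
        self.out = nn.Conv2d(in_channels, num_sources, kernel_size=1)

    def forward(self, features):
        # features: (batch, in_channels, freq, time) -> (batch, num_sources, freq, time)
        return self.out(features)
```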
