Search Results for author: Namgyu Ho

Found 7 papers, 4 papers with code

Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models

no code implementations • 14 Nov 2023 • Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sangmin Bae, Namgyu Ho, Sung Ju Hwang, Se-Young Yun

The dynamic nature of knowledge in an ever-changing world presents challenges for language models trained on static data; models deployed in the real world often need not only to acquire new knowledge but also to overwrite outdated information with updated facts.

Continual Learning Question Answering +1

HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning

1 code implementation • 1 Nov 2023 • Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, Se-Young Yun

With the proliferation of social media, accurate detection of hate speech has become critical to ensure safety online.

Hate Speech Detection

Cross-Modal Retrieval Meets Inference: Improving Zero-Shot Classification with Cross-Modal Retrieval

no code implementations • 29 Aug 2023 • Seongha Eom, Namgyu Ho, Jaehoon Oh, Se-Young Yun

Given a query image, we harness the power of CLIP's cross-modal representations to retrieve relevant textual information from an external image-text pair dataset.

Cross-Modal Retrieval Image Classification +3
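
To make the retrieval step described in the snippet above concrete, here is a minimal sketch (not the paper's released code) of CLIP-based cross-modal retrieval: a query image is embedded with CLIP and candidate captions are ranked by cosine similarity. The Hugging Face checkpoint name, the toy caption list standing in for an external image-text dataset, and the query image path are illustrative assumptions.

```python
# Minimal sketch of cross-modal retrieval with CLIP (illustrative, not the paper's code).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Toy stand-in for an external image-text pair dataset's captions (assumption).
captions = [
    "a photo of a dog",
    "a photo of a cat",
    "a diagram of a neural network",
]
image = Image.open("query.jpg")  # hypothetical query image path

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# L2-normalize embeddings and rank captions by cosine similarity to the query image.
img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
scores = (img_emb @ txt_emb.T).squeeze(0)

topk = scores.topk(k=2)
for score, idx in zip(topk.values, topk.indices):
    print(f"{captions[idx.item()]}: {score.item():.3f}")
```

In practice the caption pool would come from a large external image-text corpus and the retrieved text would then be used to refine the zero-shot prediction; this sketch only covers the similarity-based retrieval itself.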

Large Language Models Are Reasoning Teachers

1 code implementation • 20 Dec 2022 • Namgyu Ho, Laura Schmid, Se-Young Yun

We evaluate our method on a wide range of public models and complex tasks.

Benchmark Dataset for Precipitation Forecasting by Post-Processing the Numerical Weather Prediction

1 code implementation • 30 Jun 2022 • Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun

Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded on physics-based simulations.

Computational Efficiency Precipitation Forecasting

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning

no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention.

cross-domain few-shot learning Transfer Learning

Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty

2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.

cross-domain few-shot learning
