no code implementations • 14 Nov 2023 • Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sangmin Bae, Namgyu Ho, Sung Ju Hwang, Se-Young Yun
The dynamic nature of knowledge in an ever-changing world poses challenges for language models trained on static data; a deployed model must not only acquire new knowledge but also replace outdated information with its updated counterparts.
1 code implementation • 1 Nov 2023 • Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, Se-Young Yun
With the proliferation of social media, accurate detection of hate speech has become critical to ensure safety online.
no code implementations • 29 Aug 2023 • Seongha Eom, Namgyu Ho, Jaehoon Oh, Se-Young Yun
Given a query image, we leverage CLIP's cross-modal representations to retrieve relevant textual information from an external image-text pair dataset.
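As a rough illustration of this kind of CLIP-based cross-modal retrieval (a minimal sketch, not the paper's code), the snippet below embeds a query image and ranks candidate captions by similarity in CLIP's joint embedding space. The checkpoint name, query image path, and caption list are assumptions standing in for the external image-text pair dataset.

```python
# Sketch: retrieve the most relevant caption for a query image via CLIP.
# The caption list stands in for texts from an external image-text dataset.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [  # hypothetical candidate texts
    "a photo of a dog on the beach",
    "a diagram of a neural network",
    "a city skyline at night",
]
query = Image.open("query.jpg")  # hypothetical query image

inputs = processor(text=captions, images=query, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the query image's similarity to every caption.
best = outputs.logits_per_image.argmax(dim=-1).item()
print("retrieved text:", captions[best])
```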
1 code implementation • 20 Dec 2022 • Namgyu Ho, Laura Schmid, Se-Young Yun
We evaluate our method on a wide range of public models and complex tasks.
1 code implementation • 30 Jun 2022 • Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun
Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded in physics-based simulations.
no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention.
2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
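To make the self-supervised side of this two-stage pre-training concrete, here is a minimal sketch of one common objective, a SimCLR-style contrastive loss over two augmented views of unlabeled target images. It is an assumed instantiation rather than the papers' exact recipe, and `encoder`, `augment`, and `target_loader` are hypothetical names.

```python
# Sketch: SimCLR-style contrastive pre-training on unlabeled target images.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent loss: each view's positive is the other view of the same image."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, d) unit embeddings
    sim = z @ z.t() / tau                          # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    B = z1.size(0)
    # For row i < B the positive sits at i + B, and vice versa.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)

# Hypothetical training loop: `encoder` maps images to embeddings, `augment`
# draws a random view, `target_loader` yields unlabeled target-domain batches.
# for x in target_loader:
#     loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```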