1 code implementation • NLP4ConvAI Workshop, ACL 2024 • Janghoon Han, Dongkyu Lee, Joongbo Shin, Hyunkyung Bae, Jeesoo Bang, SeongHwan Kim, Stanley Jungkyu Choi, Honglak Lee
Recent studies have demonstrated significant improvements in selection tasks, and a considerable portion of this success is attributed to incorporating informative negative samples during training (see the sketch after this entry).
Ranked #1 on Conversational Response Selection on E-commerce
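A rough sketch of the hard-negative mining idea mentioned above: rank the non-gold candidates by similarity to the dialogue context and keep the closest ones as training negatives. The encoder checkpoint, the `mine_hard_negatives` helper, and the `top_k` value are illustrative assumptions, not the paper's released pipeline.

```python
# Hypothetical sketch: mining informative (hard) negatives for response selection.
# Assumptions: a generic sentence encoder and a small candidate pool.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

def mine_hard_negatives(context: str, gold: str, candidates: list[str], top_k: int = 4):
    """Keep the top_k non-gold candidates most similar to the context;
    such near-misses are more informative negatives than random samples."""
    pool = [c for c in candidates if c != gold]
    ctx = encoder.encode([context], convert_to_numpy=True)[0]
    cands = encoder.encode(pool, convert_to_numpy=True)
    sims = cands @ ctx / (np.linalg.norm(cands, axis=1) * np.linalg.norm(ctx) + 1e-9)
    return [pool[i] for i in np.argsort(-sims)[:top_k]]
```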
1 code implementation • 13 Jun 2024 • Janghoon Han, Changho Lee, Joongbo Shin, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
Subsequently, we assess the performance on unseen tasks in a language different from the one used for training.
1 code implementation • 25 Apr 2024 • Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
In this light, we introduce a simple yet effective task selection method that leverages instruction information alone to identify relevant tasks, optimizing instruction tuning for specific tasks.
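To make the idea concrete, here is a minimal sketch of selecting training tasks by instruction similarity alone; the embedding model, the `select_tasks` helper, and the `top_k` cutoff are assumptions for illustration, not the paper's actual method or code.

```python
# Illustrative sketch: choose training tasks whose instructions are most similar
# to the target task's instruction. Encoder and helper names are hypothetical.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def select_tasks(target_instruction: str, task_instructions: dict[str, str], top_k: int = 5):
    """Rank candidate tasks by cosine similarity between instruction embeddings;
    no task examples or labels are needed, only the instructions themselves."""
    names = list(task_instructions)
    embs = encoder.encode([task_instructions[n] for n in names], convert_to_tensor=True)
    target = encoder.encode(target_instruction, convert_to_tensor=True)
    scores = util.cos_sim(target, embs)[0]  # shape: (num_tasks,)
    return [names[int(i)] for i in scores.argsort(descending=True)[:top_k]]
```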
1 code implementation • 6 Sep 2022 • Janghoon Han, Joongbo Shin, Hosung Song, Hyunjik Jo, Gyeonghun Kim, Yireun Kim, Stanley Jungkyu Choi
In our experiments, we investigate the effects of weighted negative sampling, post-training, and style transfer.
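One plausible reading of "weighted negative sampling" is sketched below in PyTorch: each negative's contribution to the matching loss is scaled by a per-sample weight (for example, a retrieval similarity score). The weighting scheme and function name are assumptions for illustration, not the paper's definition.

```python
# Hypothetical PyTorch sketch of weighted negative sampling: each negative's
# binary cross-entropy term is scaled by a weight (e.g. retrieval similarity).
import torch
import torch.nn.functional as F

def weighted_negative_loss(pos_logit, neg_logits, neg_weights):
    """One positive vs. several weighted negatives for a matching model."""
    pos = F.binary_cross_entropy_with_logits(pos_logit, torch.ones_like(pos_logit))
    neg = F.binary_cross_entropy_with_logits(
        neg_logits, torch.zeros_like(neg_logits), reduction="none"
    )
    return pos + (neg_weights * neg).sum() / neg_weights.sum()

# Toy usage: harder negatives (higher weight) dominate the negative term.
loss = weighted_negative_loss(
    torch.tensor(2.3),
    torch.tensor([1.5, 0.2, -0.7]),
    torch.tensor([0.9, 0.5, 0.1]),
)
```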
1 code implementation • 29 Apr 2022 • Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.
2 code implementations • ICLR 2022 • Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.
1 code implementation • NAACL 2021 • Janghoon Han, Taesuk Hong, Byoungjae Kim, Youngjoong Ko, Jungyun Seo
During multi-turn response selection, BERT is trained to model the relationship between the multi-utterance context and the candidate response (a minimal sketch follows this entry).
Ranked #1 on Conversational Response Selection on RRS
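A minimal sketch of the standard BERT cross-encoder formulation described above: the turns are flattened into a single context string, paired with a candidate response, and scored by a binary classifier. The checkpoint and separator choice are assumptions, and the classification head below is untrained, so the scores are only meaningful after fine-tuning.

```python
# Minimal sketch of a BERT cross-encoder for multi-turn response selection.
# Checkpoint and separator are assumptions; fine-tune before trusting scores.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def score_response(utterances: list[str], response: str) -> float:
    """Return the probability that `response` follows the multi-turn context."""
    context = " [SEP] ".join(utterances)  # flatten turns into one sequence
    inputs = tokenizer(context, response, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```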