1 code implementation • 18 Feb 2025 • Sangkyu Lee, Janghoon Han, Hosung Song, Stanley Jungkyu Choi, Honglak Lee, Youngjae Yu
Direct Preference Optimization (DPO) demonstrates the advantage of aligning a large language model with human preferences using only an offline dataset.
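For context, a minimal sketch of the standard DPO objective that such offline alignment builds on; the tensor names and the beta value are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over summed log-probs of chosen/rejected responses
    under the policy and a frozen reference model."""
    # Implicit reward of each response: how much more likely the policy
    # makes it relative to the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-probability that the chosen response out-ranks
    # the rejected one under a Bradley-Terry preference model.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```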
no code implementations • 30 Dec 2024 • Sungik Choi, Sungwoo Park, Jaehoon Lee, SeungHyun Kim, Stanley Jungkyu Choi, Moontae Lee
Specifically, by viewing the autoencoder of LDM as a downsampling-upsampling kernel, HFI measures the extent of aliasing, a distortion of high-frequency information that appears in the reconstructed image.
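As a rough illustration of the idea (not the paper's exact formulation), one can compare the high-frequency spectra of an image and its autoencoder reconstruction; a large discrepancy in the high-frequency band signals aliasing:

```python
import torch

def high_frequency_discrepancy(x, x_rec, cutoff=0.5):
    """Illustrative high-frequency discrepancy between an image x and its
    autoencoder reconstruction x_rec, both (C, H, W) tensors in [0, 1].
    A stand-in for the HFI idea; the paper's exact metric may differ."""
    # 2-D FFT, shifted so low frequencies sit at the center of the spectrum.
    fx = torch.fft.fftshift(torch.fft.fft2(x))
    fr = torch.fft.fftshift(torch.fft.fft2(x_rec))
    C, H, W = x.shape
    # Radial high-pass mask: keep frequencies beyond `cutoff` of Nyquist.
    ys = torch.linspace(-1, 1, H).view(-1, 1).expand(H, W)
    xs = torch.linspace(-1, 1, W).view(1, -1).expand(H, W)
    mask = (ys**2 + xs**2).sqrt() > cutoff
    # Mean magnitude difference in the high-frequency band.
    return (fx[..., mask].abs() - fr[..., mask].abs()).abs().mean()
```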
no code implementations • 6 Dec 2024 • LG AI Research, Soyoung An, Kyunghoon Bae, Eunbi Choi, Kibong Choi, Stanley Jungkyu Choi, Seokhee Hong, Junwon Hwang, Hyojin Jeon, Gerrard Jeongwon Jo, Hyunjik Jo, Jiyeon Jung, Yountae Jung, Hyosang Kim, Joonkee Kim, SeongHwan Kim, Soyeon Kim, Sunkyoung Kim, Yireun Kim, Yongil Kim, Youchul Kim, Edward Hwayoung Lee, Haeju Lee, Honglak Lee, Jinsik Lee, Kyungmin Lee, Woohyung Lim, Sangha Park, Sooyoun Park, Yongmin Park, Sihoon Yang, Heuiyeen Yeen, Hyeongu Yun
This technical report introduces the EXAONE 3.5 instruction-tuned language models, developed and released by LG AI Research.
1 code implementation • NLP4ConvAI Workshop, ACL 2024 • Janghoon Han, Dongkyu Lee, Joongbo Shin, Hyunkyung Bae, Jeesoo Bang, SeongHwan Kim, Stanley Jungkyu Choi, Honglak Lee
Recent studies have demonstrated significant improvements in selection tasks, and a considerable portion of this success is attributed to incorporating informative negative samples during training.
Ranked #1 on Conversational Response Selection on E-commerce
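A minimal sketch of the underlying idea, training a selection model to rank the gold response above informative negatives; the embedding inputs and loss form are assumptions, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def selection_loss(query_emb, pos_emb, neg_embs):
    """Rank the gold response above k negatives.
    query_emb: (d,) context embedding; pos_emb: (d,) gold response;
    neg_embs: (k, d) negatives. Informative (hard) negatives, e.g.
    retrieved near-misses rather than random responses, give a
    stronger training signal than random sampling."""
    candidates = torch.cat([pos_emb.unsqueeze(0), neg_embs], dim=0)  # (k+1, d)
    logits = candidates @ query_emb                                  # (k+1,)
    # The gold response sits at index 0 of the candidate list.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))
```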
no code implementations • 7 Aug 2024 • LG AI Research, Soyoung An, Kyunghoon Bae, Eunbi Choi, Stanley Jungkyu Choi, Yemuk Choi, Seokhee Hong, Yeonjung Hong, Junwon Hwang, Hyojin Jeon, Gerrard Jeongwon Jo, Hyunjik Jo, Jiyeon Jung, Yountae Jung, Euisoon Kim, Hyosang Kim, Joonkee Kim, SeongHwan Kim, Soyeon Kim, Sunkyoung Kim, Yireun Kim, Youchul Kim, Edward Hwayoung Lee, Haeju Lee, Honglak Lee, Jinsik Lee, Kyungmin Lee, Moontae Lee, Seungjun Lee, Woohyung Lim, Sangha Park, Sooyoun Park, Yongmin Park, Boseong Seo, Sihoon Yang, Heuiyeen Yeen, Kyungjae Yoo, Hyeongu Yun
We introduce the EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research.
1 code implementation • 13 Jun 2024 • Janghoon Han, Changho Lee, Joongbo Shin, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
Subsequently, we assess the performance on unseen tasks in a language different from the one used for training.
1 code implementation • 25 Apr 2024 • Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
In this light, we introduce a simple yet effective task selection method that leverages instruction information alone to identify relevant tasks, optimizing instruction tuning for specific tasks.
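One plausible reading of such instruction-only task selection, sketched below; the embedding-and-cosine-similarity formulation is an assumption, not necessarily the paper's method:

```python
import numpy as np

def select_relevant_tasks(target_instruction_emb, task_instruction_embs, k=5):
    """Pick the k training tasks whose instruction embeddings are most
    similar to the target task's instruction, using no task examples.
    target_instruction_emb: (d,) embedding of the target instruction.
    task_instruction_embs: (n, d) embeddings of candidate instructions.
    Returns indices of the k most similar tasks by cosine similarity."""
    a = target_instruction_emb / np.linalg.norm(target_instruction_emb)
    b = task_instruction_embs / np.linalg.norm(
        task_instruction_embs, axis=1, keepdims=True)
    sims = b @ a
    return np.argsort(-sims)[:k]
```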
no code implementations • 14 Mar 2023 • Hyungjun Lim, Younggwan Kim, Kiho Yeom, Eunjoo Seo, Hoodong Lee, Stanley Jungkyu Choi, Honglak Lee
Self-supervised learning methods that provide generalized speech representations have recently received increasing attention.
1 code implementation • 6 Sep 2022 • Janghoon Han, Joongbo Shin, Hosung Song, Hyunjik Jo, Gyeonghun Kim, Yireun Kim, Stanley Jungkyu Choi
In the experiment, we investigate the effect of weighted negative sampling, post-training, and style transfer.
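As an illustration of weighted negative sampling in this setting (an assumed formulation, not the paper's exact procedure), negatives can be drawn with probability proportional to their similarity to the query, so harder negatives appear more often:

```python
import numpy as np

def sample_weighted_negatives(sim_to_query, gold_idx, k=4, temperature=1.0):
    """Draw k negatives with probability increasing in their similarity
    to the query, excluding the gold response. Names are illustrative."""
    sims = np.asarray(sim_to_query, dtype=float).copy()
    sims[gold_idx] = -np.inf            # never sample the gold response
    probs = np.exp(sims / temperature)  # softmax weighting over candidates
    probs /= probs.sum()
    return np.random.choice(len(sims), size=k, replace=False, p=probs)
```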
2 code implementations • ICLR 2022 • Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.
2 code implementations • WS 2018 • Hwiyeol Jo, Stanley Jungkyu Choi
The method consists of three steps: (i) expanding all the word vectors by one or more dimensions, filling each with its representative value.
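A minimal sketch of step (i), assuming the per-vector mean as the representative value:

```python
import numpy as np

def expand_word_vectors(vectors, extra_dims=1):
    """Append extra dimension(s) to every word vector, filled with a
    representative value; the per-vector mean is one plausible choice.
    vectors: (vocab, d) embedding matrix.
    Returns a (vocab, d + extra_dims) matrix."""
    rep = vectors.mean(axis=1, keepdims=True)   # (vocab, 1) representative value
    extra = np.repeat(rep, extra_dims, axis=1)  # (vocab, extra_dims)
    return np.concatenate([vectors, extra], axis=1)
```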