no code implementations • ACL (WAT) 2021 • Heesoo Park, Dongjun Lee
We participated in all tasks on JPC2 and IT domain tasks on NICT-SAP.
no code implementations • WMT (EMNLP) 2020 • Dongjun Lee
Finally, to address the over-correction problem, we select the final output from among the PE outputs and the original MT sentence based on sentence-level quality estimation.
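A minimal sketch of that selection step, assuming a hypothetical `qe_score` function (the QE model itself is not shown in this listing):

```python
def select_final_output(mt_sentence: str, pe_sentence: str, qe_score) -> str:
    """Keep the post-edited sentence only when sentence-level QE scores
    it above the original MT output, guarding against over-correction."""
    if qe_score(pe_sentence) > qe_score(mt_sentence):
        return pe_sentence
    return mt_sentence
```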
no code implementations • WMT (EMNLP) 2020 • Dongjun Lee
In this paper, we describe the Bering Lab’s submission to the WMT 2020 Shared Task on Quality Estimation (QE).
no code implementations • 18 Apr 2025 • Soyoung Kim, Dongjun Lee, Jaekwang Kim
Group recommendation aims to provide personalized item suggestions to a group of users by reflecting their collective preferences.
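As a toy illustration of reflecting collective preferences, a mean-aggregation baseline (illustrative only, not the paper's method):

```python
import numpy as np

def group_item_scores(user_embs: np.ndarray, item_embs: np.ndarray) -> np.ndarray:
    """Average member embeddings into one group profile, then score
    every item against that profile (higher = better fit)."""
    group_profile = user_embs.mean(axis=0)   # (d,) collective preference
    return item_embs @ group_profile         # (n_items,) one score per item
```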
no code implementations • 12 Mar 2025 • Dongjun Lee, Juyong Lee, KyuYoung Kim, Jihoon Tack, Jinwoo Shin, Yee Whye Teh, Kimin Lee
In this work, we introduce LCoW, a framework for Learning language models to Contextualize complex Web pages into a more comprehensible form, thereby enhancing decision making by LLM agents.
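A loose sketch of the two-stage loop the abstract suggests, where `contextualizer` and `agent` are hypothetical LLM callables:

```python
def act_on_web_page(contextualizer, agent, raw_html: str, goal: str) -> str:
    """First rewrite the raw page into a concise, task-relevant form,
    then let the decision-making agent pick an action from it."""
    context = contextualizer(
        f"Rewrite this page so it is easy to act on for the task '{goal}':\n{raw_html}"
    )
    return agent(f"Goal: {goal}\nPage context: {context}\nNext action:")
```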
no code implementations • 25 Nov 2024 • Donggeun Ko, Dongjun Lee, Namjun Park, Wonkyeong Shim, Jaekwang Kim
Neural networks struggle with image classification when learned biases create misleading correlations, hurting their generalization and performance.
1 code implementation • 28 Oct 2024 • Bong Gyun Kang, Dongjun Lee, HyunGi Kim, DoHyun Chung, Sungroh Yoon
To overcome these limitations, we introduce a fast and effective Spectral Attention mechanism, which preserves temporal correlations among samples and facilitates the handling of long-range information while maintaining the base model structure.
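One plausible reading of the mechanism is attention over frequency components, where long-range periodic structure becomes local; a sketch under that assumption, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def spectral_attention(x: torch.Tensor, proj: torch.nn.Linear) -> torch.Tensor:
    """Self-attention over the spectrum of a (batch, time, dim) series;
    `proj` maps real-valued frequency features to attention features."""
    spec = torch.fft.rfft(x, dim=1)               # (B, F, D) complex spectrum
    feats = torch.view_as_real(spec).flatten(-2)  # (B, F, 2D) real features
    q = k = v = proj(feats)                       # shared projection for brevity
    scores = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v          # (B, F, d) spectral features
```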
no code implementations • 24 Oct 2024 • Jongseon Kim, Hyungjoon Kim, HyunGi Kim, Dongjun Lee, Sungroh Yoon
By comparing and re-examining various deep learning models, we uncover new perspectives and present the latest trends in time series forecasting, including the emergence of hybrid models, diffusion models, Mamba models, and foundation models.
no code implementations • 10 Jun 2024 • Donggeun Ko, Sangwoo Jo, Dongjun Lee, Namjun Park, Jaekwang Kim
Dataset bias is a significant challenge in machine learning, where specific attributes, such as the texture or color of images, are unintentionally learned, resulting in degraded performance.
no code implementations • 13 May 2024 • Dongjun Lee, Choongwon Park, Jaehyuk Kim, Heesoo Park
Thereafter, we generate various candidate SQL queries based on the refined schema and diverse prompts.
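An illustrative sketch of that candidate-generation step, with `llm` and the prompt templates as assumptions:

```python
def generate_sql_candidates(llm, schema: str, question: str,
                            prompt_templates: list[str]) -> list[str]:
    """Prompt the model once per template over the refined schema and
    keep the distinct SQL strings as candidates for later selection."""
    candidates: list[str] = []
    for template in prompt_templates:
        sql = llm(template.format(schema=schema, question=question)).strip()
        if sql not in candidates:
            candidates.append(sql)
    return candidates
```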
no code implementations • 14 Feb 2024 • Juhyeon Shin, Jonghyun Lee, Saehyung Lee, MinJun Park, Dongjun Lee, Uiwon Hwang, Sungroh Yoon
In the context of Test-Time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance that the entropy minimization loss receives from misclassified pseudo labels.
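A loose sketch of the idea as described in the abstract (the `backbone`/`classifier` split and the prototype pseudo-labeling are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def gap_style_loss(model, x, prototypes):
    """Penalize misalignment between the entropy-minimization gradient
    and the gradient of a prototype-feature loss on the classifier."""
    feats = model.backbone(x)                      # (B, D) features
    logits = model.classifier(feats)               # (B, C) predictions
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()

    # Pseudo labels from the nearest class prototype in feature space.
    sims = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    proto_loss = F.cross_entropy(logits, sims.argmax(1))

    w = model.classifier.weight
    g_ent = torch.autograd.grad(entropy, w, create_graph=True)[0]
    g_pro = torch.autograd.grad(proto_loss, w, create_graph=True)[0]
    alignment = F.cosine_similarity(g_ent.flatten(), g_pro.flatten(), dim=0)
    return entropy + (1.0 - alignment)             # reward aligned gradients
```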
1 code implementation • ICCV 2023 • Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyung Choi, Sanghyeok Lee, Hyunwoo J. Kim
RPO leverages masked attention to prevent the internal representation shift in the pre-trained model.
Ranked #9 on Prompt Engineering on Caltech-101
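A sketch of how such a read-only constraint can be expressed as an attention mask (my reading of the abstract; the mask layout is an assumption):

```python
import torch

def read_only_attention_mask(n_tokens: int, n_prompts: int) -> torch.Tensor:
    """Boolean mask (True = blocked) for a sequence of original tokens
    followed by appended prompt tokens: prompts may read everything,
    but original tokens never attend to prompts, so the pre-trained
    representations of the original tokens stay unshifted."""
    total = n_tokens + n_prompts
    blocked = torch.zeros(total, total, dtype=torch.bool)
    blocked[:n_tokens, n_tokens:] = True   # original tokens can't read prompts
    return blocked
```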
no code implementations • 7 Aug 2023 • Dongjun Lee, Donggeun Ko, Jaekwang Kim
Thus, existing approaches can still be extended to boost the effects of augmentation by combining multiple augmentation methods within more advanced structures.
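For illustration only, combining multiple augmentation methods can be as simple as chaining them stochastically (the paper's actual structure is not described in this snippet):

```python
import random

def compose_augmentations(augs):
    """Build one transform from (augmentation_fn, probability) pairs,
    applying each in sequence with its own probability."""
    def apply(image):
        for aug, p in augs:
            if random.random() < p:
                image = aug(image)
        return image
    return apply
```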
no code implementations • 24 Jul 2022 • Taeho Shin, Dongjun Lee, Dongwhee Kim, Gaeryun Sung, Wookjin Shin, Yunseong Jo, Hyungjoo Park, Jaeduk Han
The layout generation framework is applied to various design examples and generates DRC/LVS clean layouts automatically in multiple CMOS technologies.
1 code implementation • 10 Jun 2022 • Wonseok Hwang, Dongjun Lee, Kyoungyeon Cho, Hanuhl Lee, Minjoon Seo
Here we present the first large-scale benchmark of Korean legal AI datasets, LBOX OPEN, that consists of one legal corpus, two classification tasks, two legal judgement prediction (LJP) tasks, and one summarization task.
1 code implementation • 9 Dec 2021 • Yunho Kim, Bukun Son, Dongjun Lee
There is growing interest in learning a velocity command tracking controller for quadruped robots using reinforcement learning, due to its robustness and scalability.
Hierarchical Reinforcement Learning • reinforcement-learning • +2
no code implementations • 15 Oct 2021 • Hiun Kim, Jisu Jeong, Kyung-Min Kim, Dongjun Lee, Hyun Dong Lee, Dongpil Seo, Jeeseung Han, Dong Wook Park, Ji Ae Heo, Rak Yeong Kim
In this paper, we use a pretrained language model (PLM) that leverages textual attributes of web-scale products to make intent-based product collections.
no code implementations • ACL 2021 • Dongjun Lee, Junhyeong Ahn, Heesoo Park, Jaemin Jo
We present IntelliCAT, an interactive translation interface with neural models that streamline the post-editing process on machine translation output.
4 code implementations • 20 May 2021 • Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, JunSeong Kim, Yongsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, InKwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho
We introduce Korean Language Understanding Evaluation (KLUE) benchmark.
no code implementations • 26 Apr 2019 • Dongjun Lee, Jaesik Yoon, Jongyun Song, Sang-gil Lee, Sungroh Yoon
We show that our model outperforms state-of-the-art approaches for various text-to-SQL datasets in two aspects: 1) the SQL generation accuracy for the trained templates, and 2) the adaptability to the unseen SQL templates based on a single example without any additional training.
no code implementations • IJCNLP 2019 • Dongjun Lee
We focus on the Spider dataset, a complex and cross-domain text-to-SQL task, which includes complex queries over multiple tables.
no code implementations • 8 Oct 2018 • Jinwoong Kim, Minkyu Kim, Heungseok Park, Ernar Kusdavletov, Dongjun Lee, Adrian Kim, Ji-Hoon Kim, Jung-Woo Ha, Nako Sung
Many hyperparameter optimization (HyperOpt) methods assume restricted computing resources and mainly focus on enhancing performance.