Search Results for author: Dongjun Lee

Found 22 papers, 5 papers with code

Cross-Lingual Transformers for Neural Automatic Post-Editing

no code implementations WMT (EMNLP) 2020 Dongjun Lee

Finally, to address the over-correction problem, we select the final output from among the PE outputs and the original MT sentence based on sentence-level quality estimation.

Automatic Post-Editing · Language Modeling +3
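The over-correction fix mentioned above boils down to scoring each candidate with a sentence-level QE model and keeping the best one. A minimal sketch of that selection step, assuming a hypothetical `qe_score(source, hypothesis)` function that returns a higher-is-better quality estimate (the name and interface are not from the paper):

```python
def select_final_output(source, mt_sentence, pe_candidates, qe_score):
    """Pick the best output among the original MT sentence and the
    post-edited (PE) candidates using a sentence-level QE score.

    qe_score(source, hypothesis) is assumed to return a higher-is-better
    quality estimate; its exact form is not specified here.
    """
    candidates = [mt_sentence] + list(pe_candidates)
    # Keep the original MT output unless a PE candidate scores higher,
    # which guards against over-correction by the APE model.
    return max(candidates, key=lambda hyp: qe_score(source, hyp))
```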

Consensus-aware Contrastive Learning for Group Recommendation

no code implementations18 Apr 2025 Soyoung Kim, Dongjun Lee, Jaekwang Kim

Group recommendation aims to provide personalized item suggestions to a group of users by reflecting their collective preferences.

Contrastive Learning

Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents

no code implementations12 Mar 2025 Dongjun Lee, Juyong Lee, KyuYoung Kim, Jihoon Tack, Jinwoo Shin, Yee Whye Teh, Kimin Lee

In this work, we introduce LCoW, a framework for Learning language models to Contextualize complex Web pages into a more comprehensible form, thereby enhancing decision making by LLM agents.

Decision Making

Debiasing Classifiers by Amplifying Bias with Latent Diffusion and Large Language Models

no code implementations25 Nov 2024 Donggeun Ko, Dongjun Lee, Namjun Park, Wonkyeong Shim, Jaekwang Kim

Neural networks struggle with image classification when biases are learned and induce misleading correlations, affecting their generalization and performance.

Attribute · Computational Efficiency +3

Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting

1 code implementation28 Oct 2024 Bong Gyun Kang, Dongjun Lee, HyunGi Kim, DoHyun Chung, Sungroh Yoon

To overcome these limitations, we introduce a fast and effective Spectral Attention mechanism, which preserves temporal correlations among samples and facilitates the handling of long-range information while maintaining the base model structure.

Time Series · Time Series Forecasting
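As a rough illustration of frequency-domain mixing for long-range dependencies (this is not the paper's Spectral Attention; the module and parameter names below are invented), a sequence can be transformed with an FFT along the time axis, reweighted per frequency bin with learned gates, and transformed back:

```python
import torch
import torch.nn as nn

class FrequencyGate(nn.Module):
    """Toy frequency-domain mixing layer: FFT along time, learned
    per-frequency gates, inverse FFT. Illustrative only; not the
    Spectral Attention mechanism from the paper."""

    def __init__(self, seq_len: int, dim: int):
        super().__init__()
        n_freq = seq_len // 2 + 1  # number of rfft frequency bins
        self.gate = nn.Parameter(torch.ones(n_freq, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        spec = torch.fft.rfft(x, dim=1)            # complex spectrum over time
        spec = spec * self.gate                    # scale each frequency bin
        return torch.fft.irfft(spec, n=x.size(1), dim=1)

x = torch.randn(8, 96, 32)
y = FrequencyGate(seq_len=96, dim=32)(x)
print(y.shape)  # torch.Size([8, 96, 32])
```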

A Comprehensive Survey of Time Series Forecasting: Architectural Diversity and Open Challenges

no code implementations24 Oct 2024 Jongseon Kim, Hyungjoon Kim, HyunGi Kim, Dongjun Lee, Sungroh Yoon

By comparing and re-examining various deep learning models, we uncover new perspectives and present the latest trends in time series forecasting, including the emergence of hybrid models, diffusion models, Mamba models, and foundation models.

Diversity · Mamba +2

DiffInject: Revisiting Debias via Synthetic Data Generation using Diffusion-based Style Injection

no code implementations10 Jun 2024 Donggeun Ko, Sangwoo Jo, Dongjun Lee, Namjun Park, Jaekwang Kim

Dataset bias is a significant challenge in machine learning, where specific attributes, such as the texture or color of images, are unintentionally learned, resulting in degraded performance.

Synthetic Data Generation

Gradient Alignment with Prototype Feature for Fully Test-time Adaptation

no code implementations14 Feb 2024 Juhyeon Shin, Jonghyun Lee, Saehyung Lee, MinJun Park, Dongjun Lee, Uiwon Hwang, Sungroh Yoon

In the context of Test-time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance of the entropy minimization loss caused by misclassified pseudo-labels.

Pseudo Label · Test-time Adaptation
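For intuition only, the toy objective below combines entropy minimization with a prototype-alignment term during test-time adaptation; it is not the paper's GAP regularizer, and the class prototypes are simply assumed to be given:

```python
import torch
import torch.nn.functional as F

def tta_loss(features, logits, prototypes, reg_weight=0.1):
    """Toy test-time adaptation objective: entropy minimization plus a
    prototype-alignment regularizer. Illustrative only; not the GAP
    regularizer from the paper.

    features:   (batch, dim)      feature vectors from the adapted model
    logits:     (batch, classes)  classifier outputs
    prototypes: (classes, dim)    per-class prototype features (assumed given)
    """
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

    # Encourage each feature to stay close to the prototype of its
    # pseudo-label, complementing the entropy minimization term.
    pseudo = probs.argmax(dim=-1)
    sim = F.cosine_similarity(features, prototypes[pseudo], dim=-1)
    alignment = (1.0 - sim).mean()

    return entropy + reg_weight * alignment
```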

Hierarchical Contrastive Learning with Multiple Augmentation for Sequential Recommendation

no code implementations7 Aug 2023 Dongjun Lee, Donggeun Ko, Jaekwang Kim

Thus, existing approaches can still be extended to boost the effects of augmentation by using more advanced structures that combine multiple augmentation methods.

Contrastive Learning · Sequential Recommendation
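The contrastive-learning building block behind such approaches can be sketched as an InfoNCE loss over two augmented views of each user sequence; the paper's hierarchical, multi-augmentation scheme goes beyond this minimal form:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """Generic InfoNCE loss over two augmented views of user sequences.
    Illustrates only the basic contrastive building block, not the
    paper's hierarchical multi-augmentation scheme.

    z1, z2: (batch, dim) sequence representations from two augmentations.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```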

A Custom IC Layout Generation Engine Based on Dynamic Templates and Grids

no code implementations24 Jul 2022 Taeho Shin, Dongjun Lee, Dongwhee Kim, Gaeryun Sung, Wookjin Shin, Yunseong Jo, Hyungjoo Park, Jaeduk Han

The layout generation framework is applied to various design examples and generates DRC/LVS clean layouts automatically in multiple CMOS technologies.

Layout Generation

A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction

1 code implementation10 Jun 2022 Wonseok Hwang, Dongjun Lee, Kyoungyeon Cho, Hanuhl Lee, Minjoon Seo

Here we present the first large-scale benchmark of Korean legal AI datasets, LBOX OPEN, that consists of one legal corpus, two classification tasks, two legal judgement prediction (LJP) tasks, and one summarization task.

Language Modelling

Learning multiple gaits of quadruped robot using hierarchical reinforcement learning

1 code implementation9 Dec 2021 Yunho Kim, Bukun Son, Dongjun Lee

There is growing interest in learning a velocity-command tracking controller for quadruped robots using reinforcement learning due to its robustness and scalability.

Hierarchical Reinforcement Learning · reinforcement-learning +2

Intent-based Product Collections for E-commerce using Pretrained Language Models

no code implementations15 Oct 2021 Hiun Kim, Jisu Jeong, Kyung-Min Kim, Dongjun Lee, Hyun Dong Lee, Dongpil Seo, Jeeseung Han, Dong Wook Park, Ji Ae Heo, Rak Yeong Kim

In this paper, we use a pretrained language model (PLM) that leverages textual attributes of web-scale products to make intent-based product collections.

Language Modelling · Sentence +1

IntelliCAT: Intelligent Machine Translation Post-Editing with Quality Estimation and Translation Suggestion

no code implementations ACL 2021 Dongjun Lee, Junhyeong Ahn, Heesoo Park, Jaemin Jo

We present IntelliCAT, an interactive translation interface with neural models that streamline the post-editing process on machine translation output.

Machine Translation · Sentence +1

One-Shot Learning for Text-to-SQL Generation

no code implementations26 Apr 2019 Dongjun Lee, Jaesik Yoon, Jongyun Song, Sang-gil Lee, Sungroh Yoon

We show that our model outperforms state-of-the-art approaches for various text-to-SQL datasets in two aspects: 1) the SQL generation accuracy for the trained templates, and 2) the adaptability to the unseen SQL templates based on a single example without any additional training.

One-Shot Learning · Text-To-SQL
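Template-based text-to-SQL of this kind can be caricatured as retrieving the closest known template for a question and filling its slots; the toy string-matching retrieval below is purely illustrative and not the paper's learned model:

```python
from difflib import SequenceMatcher

# Toy template store: natural-language pattern -> SQL template.
# The templates and retrieval heuristic are made up for illustration;
# the paper's model learns this mapping rather than using string matching.
TEMPLATES = {
    "how many {entity} are there": "SELECT COUNT(*) FROM {entity}",
    "list all {entity} ordered by {column}":
        "SELECT * FROM {entity} ORDER BY {column}",
}

def generate_sql(question: str, slots: dict) -> str:
    """Pick the most similar template for the question and fill its slots."""
    pattern = max(
        TEMPLATES,
        key=lambda p: SequenceMatcher(None, p, question.lower()).ratio(),
    )
    return TEMPLATES[pattern].format(**slots)

print(generate_sql("How many employees are there?", {"entity": "employees"}))
# SELECT COUNT(*) FROM employees
```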

Clause-Wise and Recursive Decoding for Complex and Cross-Domain Text-to-SQL Generation

no code implementations IJCNLP 2019 Dongjun Lee

We focus on the Spider dataset, a complex and cross-domain text-to-SQL task, which includes complex queries over multiple tables.

Text-To-SQL
