no code implementations • EMNLP (sustainlp) 2020 • Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang
This paper studies label augmentation for training dialogue response selection.
no code implementations • EMNLP 2020 • Seungtaek Choi, Haeju Park, Jinyoung Yeo, Seung-won Hwang
We aim to leverage human and machine intelligence together for attention supervision.
no code implementations • 8 Nov 2024 • Sangam Lee, Ryang Heo, SeongKu Kang, Susik Yoon, Jinyoung Yeo, Dongha Lee
HyPE leverages hierarchical category paths as explanation, progressing from broad to specific semantic categories.
no code implementations • 24 Oct 2024 • Seoyeon Kim, Huiseo Kim, Chanjun Park, Jinyoung Yeo, Dongha Lee
Code-switching (CS), a phenomenon where multilingual speakers alternate between languages in a discourse, can convey subtle cultural and linguistic nuances that can be otherwise lost in translation.
no code implementations • 17 Oct 2024 • Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim, Sunghwan Kim, Dongha Lee, Jinyoung Yeo
Large language models (LLMs) have recently gained much attention in building autonomous agents.
no code implementations • 2 Oct 2024 • Sunghwan Kim, Dongjin Kang, Taeyoon Kwon, Hyungjoo Chae, Jungsoo Won, Dongha Lee, Jinyoung Yeo
In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.
no code implementations • 29 Sep 2024 • Hyungjoo Chae, Taeyoon Kwon, Seungjun Moon, Yongho Song, Dongjin Kang, Kai Tzu-iunn Ong, Beong-woo Kwak, SeongHyeon Bae, Seung-won Hwang, Jinyoung Yeo
This paper presents Coffee-Gym, a comprehensive RL environment for training models that provide feedback on code editing.
no code implementations • 31 Aug 2024 • Dongil Yang, Suyeon Lee, Minjin Kim, Jungsoo Won, Namyoung Kim, Dongha Lee, Jinyoung Yeo
Engagement between instructors and students plays a crucial role in enhancing students' academic performance.
1 code implementation • 24 Aug 2024 • Heejae Chon, Seonghyeon Lee, Jinyoung Yeo, Dongha Lee
Language models (LMs) have exhibited impressive abilities in generating code from natural language requirements.
no code implementations • 22 Aug 2024 • Kai Tzu-iunn Ong, Taeyoon Kwon, Jinyoung Yeo
Guiding large language models with a selected set of human-authored demonstrations is a common practice for improving LLM applications.
no code implementations • 16 Aug 2024 • Tongyoung Kim, Soojin Yoon, SeongKu Kang, Jinyoung Yeo, Dongha Lee
Our in-depth analysis finds a significant difference between the knowledge the model captures from heterogeneous item indices and from diverse input prompts, which suggests high potential for complementarity.
no code implementations • 12 Aug 2024 • Jieyong Kim, Hyunseo Kim, Hyunjin Cho, SeongKu Kang, Buru Chang, Jinyoung Yeo, Dongha Lee
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems.
1 code implementation • 24 Jul 2024 • Yeongbin Seo, Dongha Lee, Jinyoung Yeo
Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting.
1 code implementation • 22 Jul 2024 • Soojin Yoon, Sungho Ko, Tongyoung Kim, SeongKu Kang, Jinyoung Yeo, Dongha Lee
In this paper, we propose ERAlign, an unsupervised and robust cross-lingual EA pipeline that jointly performs Entity-level and Relation-level Alignment via a neighbor triple matching strategy using semantic textual features of relations and entities.
1 code implementation • 3 Jul 2024 • Suyeon Lee, Sunghwan Kim, Minju Kim, Dongjin Kang, Dongil Yang, Harim Kim, Minseok Kang, Dayi Jung, Min Hee Kim, Seungbeen Lee, Kyoung-Mee Chung, Youngjae Yu, Dongha Lee, Jinyoung Yeo
To address this, we introduce Cactus, a multi-turn dialogue dataset that emulates real-life interactions using the goal-oriented and structured approach of Cognitive Behavioral Therapy (CBT).
no code implementations • 20 Jun 2024 • Seungbeen Lee, Seungwon Lim, Seungju Han, Giyeong Oh, Hyungjoo Chae, Jiwan Chung, Minju Kim, Beong-woo Kwak, Yeonsoo Lee, Dongha Lee, Jinyoung Yeo, Youngjae Yu
Recent advancements in Large Language Models (LLMs) have led to their adaptation in various domains as conversational agents.
1 code implementation • 18 Jun 2024 • Kwangwook Seo, Jinyoung Yeo, Dongha Lee
Our work focuses on building a plug-and-play table reasoner that can pose insightful questions and answer them by faithfully pinpointing evidence in the table, providing explainable guidance for the summarizer.
no code implementations • 16 Jun 2024 • Kai Tzu-iunn Ong, Namyoung Kim, Minju Gwak, Hyungjoo Chae, Taeyoon Kwon, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo
We present Theanine, a framework for LLM-based lifelong dialogue agents.
no code implementations • 3 Apr 2024 • Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, SeongHwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo
Also, we show that compared to natural language, pseudocode can better guide the reasoning of LMs, even though they are trained to follow natural language instructions.
1 code implementation • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee
Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
1 code implementation • 5 Mar 2024 • Sungho Ko, Hyunjin Cho, Hyungjoo Chae, Jinyoung Yeo, Dongha Lee
Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance Question Answering (QA) performance of Large Language Models (LLMs), yet structured KG verbalization remains challenging.
no code implementations • 3 Mar 2024 • Seo Hyun Kim, Keummin Ka, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo
To effectively construct memory, it is crucial to seamlessly connect past and present information, while also possessing the ability to forget obstructive information.
1 code implementation • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee
In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.
1 code implementation • 28 Feb 2024 • Seoyeon Kim, Kwangwook Seo, Hyungjoo Chae, Jinyoung Yeo, Dongha Lee
The results suggest that VerifiNER can successfully verify errors from existing models as a model-agnostic approach.
no code implementations • 27 Feb 2024 • Suyeon Lee, Jieun Kang, Harim Kim, Kyoung-Mee Chung, Dongha Lee, Jinyoung Yeo
The demand for conversational agents that provide mental health care is consistently increasing.
no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo
Motivated by these, we explore the impact of the inherent preference in LLMs on providing emotional support, and consequently, we observe that exhibiting a high preference for specific strategies hinders effective emotional support, harming robustness in predicting the appropriate strategy.
no code implementations • 25 Jan 2024 • Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo
As the pioneer of persona expansion in multi-session settings, our framework facilitates better response generation via human-like persona refinement.
1 code implementation • 12 Dec 2023 • Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo
Specifically, we address the clinical reasoning for disease diagnosis, where the LLM generates diagnostic rationales providing its insight on presented patient data and the reasoning path towards the diagnosis, namely Clinical Chain-of-Thought (Clinical CoT).
no code implementations • 13 Nov 2023 • Seungjun Moon, Hyungjoo Chae, Yongho Song, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo
Hence, the focus of our work is to leverage open-source code LLMs to generate helpful feedback with correct guidance for code editing.
2 code implementations • 21 Oct 2023 • Seonglae Cho, Yonggi Cho, HoonJae Lee, Myungha Jang, Jinyoung Yeo, Dongha Lee
In this paper, we present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization.
1 code implementation • 13 Oct 2023 • Hyungjoo Chae, Yongho Song, Kai Tzu-iunn Ong, Taeyoon Kwon, Minjin Kim, Youngjae Yu, Dongha Lee, Dongyeop Kang, Jinyoung Yeo
Hence, our focus is to facilitate such multi-hop reasoning over a dialogue context, namely dialogue chain-of-thought (CoT) reasoning.
1 code implementation • 7 Mar 2023 • Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo
To improve the correctness of the explanations, fine-tuning language models with explanation data is needed.
no code implementations • 2 Mar 2023 • Kai Tzu-iunn Ong, Hana Kim, Minjin Kim, Jinseong Jang, Beomseok Sohn, Yoon Seong Choi, Dosik Hwang, Seong Jae Hwang, Jinyoung Yeo
To address this, we present evidence-empowered transfer learning for AD diagnosis.
no code implementations • 24 Feb 2023 • Hyungjoo Chae, Minjin Kim, Chaehyeong Kim, Wonseok Jeong, Hyejoong Kim, Junmyung Lee, Jinyoung Yeo
In this paper, we propose Tutoring bot, a generative chatbot trained on a large scale of tutor-student conversations for English-language learning.
1 code implementation • 23 Oct 2022 • Minju Kim, Chaehyeong Kim, Yongho Song, Seung-won Hwang, Jinyoung Yeo
To build open-domain chatbots that are able to use diverse communicative skills, we propose a novel framework BotsTalk, where multiple agents grounded to the specific target skills participate in a conversation to automatically annotate multi-skill dialogues.
1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo
In this paper, we propose to leverage the unique characteristics of dialogues sharing commonsense knowledge across participants, to resolve the difficulties in summarizing them.
Ranked #2 on Text Summarization on DialogSum
no code implementations • NAACL 2022 • Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo
Towards this goal, we propose to mitigate the loss of knowledge from the interference among the different knowledge sources, by developing a modular variant of the knowledge aggregation as a new zero-shot commonsense reasoning framework.
no code implementations • 11 Feb 2022 • Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo
This paper introduces a simple yet effective data-centric approach for the task of improving persona-conditioned dialogue agents.
no code implementations • 26 Jan 2022 • Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo
A traditional view of data acquisition is that, through iterations, knowledge from human labels and models is implicitly distilled to monotonically increase the accuracy and label consistency.
no code implementations • 18 Oct 2020 • Shin-woo Park, Byung Jun Bae, Jinyoung Yeo, Seung-won Hwang
Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved superior performance in tasks such as node classification.
no code implementations • IJCNLP 2019 • Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee
This paper studies the problem of supporting question answering in a new language with limited training resources.
no code implementations • ACL 2019 • Haeju Park, Jinyoung Yeo, Gengyu Wang, Seung-won Hwang
Transfer learning is effective for improving the performance of related tasks, and multi-task learning (MTL) and cross-lingual learning (CLL) are important instances.