Search Results for author: Jinyoung Yeo

Found 43 papers, 16 papers with code

Can Code-Switched Texts Activate a Knowledge Switch in LLMs? A Case Study on English-Korean Code-Switching

no code implementations • 24 Oct 2024 • Seoyeon Kim, Huiseo Kim, Chanjun Park, Jinyoung Yeo, Dongha Lee

Code-switching (CS), a phenomenon where multilingual speakers alternate between languages within a discourse, can convey subtle cultural and linguistic nuances that would otherwise be lost in translation.

Question Answering

Evaluating Robustness of Reward Models for Mathematical Reasoning

no code implementations • 2 Oct 2024 • Sunghwan Kim, Dongjin Kang, Taeyoon Kwon, Hyungjoo Chae, Jungsoo Won, Dongha Lee, Jinyoung Yeo

In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.

Math · Mathematical Reasoning

Large Language Models Are Self-Taught Reasoners: Enhancing LLM Applications via Tailored Problem-Solving Demonstrations

no code implementations • 22 Aug 2024 • Kai Tzu-iunn Ong, Taeyoon Kwon, Jinyoung Yeo

Guiding large language models with a selected set of human-authored demonstrations is a common practice for improving LLM applications.

Multiple-choice

SC-Rec: Enhancing Generative Retrieval with Self-Consistent Reranking for Sequential Recommendation

no code implementations • 16 Aug 2024 • Tongyoung Kim, Soojin Yoon, SeongKu Kang, Jinyoung Yeo, Dongha Lee

Our in-depth analysis finds a significant difference in the knowledge that the model captures from heterogeneous item indices and diverse input prompts, which suggests a high potential for complementarity.

Retrieval · Sequential Recommendation

Review-driven Personalized Preference Reasoning with Large Language Models for Recommendation

no code implementations • 12 Aug 2024 • Jieyong Kim, Hyunseo Kim, Hyunjin Cho, SeongKu Kang, Buru Chang, Jinyoung Yeo, Dongha Lee

Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems.

Recommendation Systems

Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning

1 code implementation • 24 Jul 2024 • Yeongbin Seo, Dongha Lee, Jinyoung Yeo

Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting.

Language Modelling · Meta-Learning

Unsupervised Robust Cross-Lingual Entity Alignment via Neighbor Triple Matching with Entity and Relation Texts

1 code implementation • 22 Jul 2024 • Soojin Yoon, Sungho Ko, Tongyoung Kim, SeongKu Kang, Jinyoung Yeo, Dongha Lee

In this paper, we propose ERAlign, an unsupervised and robust cross-lingual EA pipeline that jointly performs Entity-level and Relation-level Alignment via a neighbor triple matching strategy using the semantic textual features of relations and entities.

Entity Alignment · Knowledge Graphs · +1
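
As a rough illustration of the neighbor triple matching idea above: verbalize the triples around each candidate entity, embed the resulting sentences, and score entity pairs by how well their triple sets match. This is a sketch under stated assumptions, not ERAlign's implementation; the `embed` stub and the toy triples are hypothetical stand-ins for a real multilingual text encoder and real KG neighborhoods.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical sentence encoder; any multilingual text-embedding
    model could be substituted. Deterministic random vectors are used
    only so the sketch runs end to end."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

def verbalize(triple) -> str:
    head, relation, tail = triple
    return f"{head} {relation} {tail}"

def neighbor_triple_similarity(triples_a, triples_b) -> float:
    """Score a candidate entity pair by matching the verbalized
    triples of their neighborhoods via cosine similarity."""
    a = np.stack([embed(verbalize(t)) for t in triples_a])
    b = np.stack([embed(verbalize(t)) for t in triples_b])
    sims = a @ b.T                         # cosine: vectors are unit-norm
    return float(sims.max(axis=1).mean())  # best match per source triple

# Toy cross-lingual neighborhoods (illustrative only):
en = [("Seoul", "capital of", "South Korea")]
ko = [("서울", "수도", "대한민국")]
print(neighbor_triple_similarity(en, ko))
```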

Cactus: Towards Psychological Counseling Conversations using Cognitive Behavioral Theory

1 code implementation • 3 Jul 2024 • Suyeon Lee, Sunghwan Kim, Minju Kim, Dongjin Kang, Dongil Yang, Harim Kim, Minseok Kang, Dayi Jung, Min Hee Kim, Seungbeen Lee, Kyoung-Mee Chung, Youngjae Yu, Dongha Lee, Jinyoung Yeo

To address this, we introduce Cactus, a multi-turn dialogue dataset that emulates real-life interactions using the goal-oriented and structured approach of Cognitive Behavioral Therapy (CBT).

Unveiling Implicit Table Knowledge with Question-Then-Pinpoint Reasoner for Insightful Table Summarization

1 code implementation • 18 Jun 2024 • Kwangwook Seo, Jinyoung Yeo, Dongha Lee

Our work focuses on building a plug-and-play table reasoner that self-questions the insightful knowledge hidden in a table and answers those questions by faithfully pinpointing evidence in the table, providing explainable guidance for the summarizer.
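
A question-then-pinpoint loop of this kind might be wired up as in the sketch below; `call_llm` is a hypothetical client and the prompts are illustrative, not the paper's templates.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; plug in any chat/completion API."""
    raise NotImplementedError

def question_then_pinpoint(table_text: str) -> str:
    # Step 1: self-question the table for insightful knowledge.
    questions = call_llm(
        "List questions whose answers would be insightful for "
        f"summarizing this table:\n{table_text}"
    )
    # Step 2: answer each question, pinpointing the supporting cells.
    evidence = call_llm(
        "Answer each question, citing the exact table cells that "
        f"support each answer.\nTable:\n{table_text}\nQuestions:\n{questions}"
    )
    # Step 3: hand the evidence to the summarizer as explicit guidance.
    return call_llm(
        f"Write a faithful table summary grounded in this evidence:\n{evidence}"
    )
```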

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

no code implementations • 3 Apr 2024 • Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, SeongHwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

Also, we show that compared to natural language, pseudocode can better guide the reasoning of LMs, even though they are trained to follow natural language instructions.
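
A minimal sketch of what pseudocode-guided prompting can look like (the prompt wording and the toy task are illustrative, not the paper's exact format):

```python
# Give the LM task logic as pseudocode and ask it to simulate
# execution, rather than reason free-form in natural language.
PSEUDOCODE = """\
def count_overlaps(a, b):
    result = 0
    for x in a:
        if x in b:
            result += 1
    return result
"""

task_input = "a = [2, 5, 7, 9], b = [5, 9, 11]"

prompt = (
    "The following pseudocode describes the procedure to apply:\n"
    f"{PSEUDOCODE}\n"
    "Simulate its execution on the input step by step, tracking each "
    "intermediate variable, then state the final output.\n"
    f"Input: {task_input}"
)
print(prompt)  # send to any instruction-tuned LM
```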

Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset

1 code implementation • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee

Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.

Conversational Recommendation · Recommendation Systems

Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering

1 code implementation • 5 Mar 2024 • Sungho Ko, Hyunjin Cho, Hyungjoo Chae, Jinyoung Yeo, Dongha Lee

Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance the Question Answering (QA) performance of Large Language Models (LLMs), yet structured KG verbalization remains challenging.

Knowledge Graphs · Question Answering
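
The naive linearization baseline that makes KG verbalization hard to get right can be shown in a few lines: flatten triples into sentences and prepend them as evidence to a zero-shot QA prompt. The triples and template below are illustrative, not the paper's evidence-focused summarization.

```python
# Naive triple linearization for knowledge-augmented zero-shot QA.
# Triples and the prompt template are illustrative examples only.
triples = [
    ("Marie Curie", "field of work", "radioactivity"),
    ("Marie Curie", "award received", "Nobel Prize in Physics"),
]

evidence = " ".join(f"The {r} of {h} is {t}." for h, r, t in triples)

question = "In which field did Marie Curie work?"
prompt = f"Facts: {evidence}\nQuestion: {question}\nAnswer:"
print(prompt)
```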

Ever-Evolving Memory by Blending and Refining the Past

no code implementations • 3 Mar 2024 • Seo Hyun Kim, Keummin Ka, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo

To effectively construct memory, it is crucial to seamlessly connect past and present information, while also possessing the ability to forget obstructive information.

Chatbot · Response Generation

Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy

1 code implementation • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee

In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation

no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo

Motivated by these observations, we explore the impact of LLMs' inherent preference on providing emotional support, and we find that exhibiting a high preference for specific strategies hinders effective emotional support, degrading robustness in predicting the appropriate strategy.

Emotional Intelligence

Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement

no code implementations • 25 Jan 2024 • Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo

As the first work on persona expansion in multi-session settings, our framework facilitates better response generation via human-like persona refinement.

Management · Response Generation

Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales

1 code implementation • 12 Dec 2023 • Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo

Specifically, we address clinical reasoning for disease diagnosis, where the LLM generates diagnostic rationales that provide its insight into the presented patient data and the reasoning path towards the diagnosis, namely the Clinical Chain-of-Thought (Clinical CoT).

Reading Comprehension

Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback

no code implementations • 13 Nov 2023 • Seungjun Moon, Hyungjoo Chae, Yongho Song, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo

Hence, the focus of our work is to leverage open-source code LLMs to generate helpful feedback with correct guidance for code editing.

Program Synthesis
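
One way to picture such a feedback-driven repair loop is sketched below; `critic`, `editor`, and `run_tests` are hypothetical stand-ins for a feedback-generating code LLM, an editing model, and a test harness, not the paper's actual components.

```python
def critic(code: str, error_log: str) -> str:
    """Hypothetical feedback model: explains the bug and how to fix it."""
    raise NotImplementedError

def editor(code: str, feedback: str) -> str:
    """Hypothetical editing model: applies the suggested fix."""
    raise NotImplementedError

def run_tests(code: str) -> str:
    """Hypothetical test harness: returns an error log, empty on success."""
    raise NotImplementedError

def repair(code: str, error_log: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        feedback = critic(code, error_log)   # natural-language guidance
        code = editor(code, feedback)        # edit guided by the feedback
        error_log = run_tests(code)
        if not error_log:                    # empty log: all tests pass
            break
    return code
```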

RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization

2 code implementations • 21 Oct 2023 • Seonglae Cho, Yonggi Cho, HoonJae Lee, Myungha Jang, Jinyoung Yeo, Dongha Lee

In this paper, we present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization.

Language Modelling · Relation

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

1 code implementation • 7 Mar 2023 • Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

To improve the correctness of the explanations, fine-tuning language models with explanation data is needed.

TUTORING: Instruction-Grounded Conversational Agent for Language Learners

no code implementations • 24 Feb 2023 • Hyungjoo Chae, Minjin Kim, Chaehyeong Kim, Wonseok Jeong, Hyejoong Kim, Junmyung Lee, Jinyoung Yeo

In this paper, we propose Tutoring bot, a generative chatbot trained on a large-scale corpus of tutor-student conversations for English-language learning.

Chatbot · Multi-Task Learning · +1

BotsTalk: Machine-sourced Framework for Automatic Curation of Large-scale Multi-skill Dialogue Datasets

1 code implementation • 23 Oct 2022 • Minju Kim, Chaehyeong Kim, Yongho Song, Seung-won Hwang, Jinyoung Yeo

To build open-domain chatbots that are able to use diverse communicative skills, we propose a novel framework BotsTalk, where multiple agents grounded to the specific target skills participate in a conversation to automatically annotate multi-skill dialogues.

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

In this paper, we propose to leverage the unique characteristics of dialogues sharing commonsense knowledge across participants, to resolve the difficulties in summarizing them.

Abstractive Dialogue Summarization · Multi-Task Learning · +1

Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning

no code implementations • NAACL 2022 • Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo

Towards this goal, we propose to mitigate the loss of knowledge from the interference among the different knowledge sources, by developing a modular variant of the knowledge aggregation as a new zero-shot commonsense reasoning framework.

Knowledge Graphs · Transfer Learning

Dual Task Framework for Improving Persona-grounded Dialogue Dataset

no code implementations • 11 Feb 2022 • Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo

This paper introduces a simple yet effective data-centric approach for the task of improving persona-conditioned dialogue agents.

Benchmarking

TrustAL: Trustworthy Active Learning using Knowledge Distillation

no code implementations • 26 Jan 2022 • Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo

A traditional view of data acquisition is that, through iterations, knowledge from human labels and models is implicitly distilled to monotonically increase the accuracy and label consistency.

Active Learning · Diversity · +1

Meta-path Free Semi-supervised Learning for Heterogeneous Networks

no code implementations • 18 Oct 2020 • Shin-woo Park, Byung Jun Bae, Jinyoung Yeo, Seung-won Hwang

Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved superior performance in tasks such as node classification.

Graph Neural Network · Node Classification · +1

Soft Representation Learning for Sparse Transfer

no code implementations • ACL 2019 • Haeju Park, Jinyoung Yeo, Gengyu Wang, Seung-won Hwang

Transfer learning is effective for improving performance on related tasks, and Multi-task learning (MTL) and Cross-lingual learning (CLL) are two important instances.

Multi-Task Learning · Representation Learning
