Search Results for author: Jinyoung Yeo

Found 27 papers, 6 papers with code

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

no code implementations • 3 Apr 2024 • Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, SeongHwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

Also, we show that compared to natural language, pseudocode can better guide the reasoning of LMs, even though they are trained to follow natural language instructions.

Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset

no code implementations • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee

Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.

Recommendation Systems

Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering

no code implementations • 5 Mar 2024 • Sungho Ko, Hyunjin Cho, Hyungjoo Chae, Jinyoung Yeo, Dongha Lee

Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance Question Answering (QA) performance of Large Language Models (LLMs), yet structured KG verbalization remains challenging.

Knowledge Graphs • Question Answering

Ever-Evolving Memory by Blending and Refining the Past

no code implementations • 3 Mar 2024 • Seo Hyun Kim, Keummin Ka, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo

To effectively construct memory, it is crucial to seamlessly connect past and present information, while also possessing the ability to forget obstructive information.

Chatbot • Response Generation

Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy

no code implementations • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee

In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation

no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo

Motivated by these observations, we explore the impact of LLMs' inherent preferences on providing emotional support, and we find that a high preference for specific strategies hinders effective emotional support and degrades robustness in predicting the appropriate strategy.

Emotional Intelligence

Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement

no code implementations • 25 Jan 2024 • Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo

As the pioneer of persona expansion in multi-session settings, our framework facilitates better response generation via human-like persona refinement.

Management • Response Generation

Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales

no code implementations • 12 Dec 2023 • Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo

Specifically, we address the clinical reasoning for disease diagnosis, where the LLM generates diagnostic rationales providing its insight on presented patient data and the reasoning path towards the diagnosis, namely Clinical Chain-of-Thought (Clinical CoT).

Reading Comprehension

Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback

no code implementations • 13 Nov 2023 • Seungjun Moon, Hyungjoo Chae, Yongho Song, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo

Hence, the focus of our work is to leverage open-source code LLMs to generate helpful feedback with correct guidance for code editing.

Program Synthesis

RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization

2 code implementations • 21 Oct 2023 • Seonglae Cho, Yonggi Cho, HoonJae Lee, Myungha Jang, Jinyoung Yeo, Dongha Lee

In this paper, we present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization.

Language Modelling • Relation

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

1 code implementation • 7 Mar 2023 • Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

To improve the correctness of the explanations, fine-tuning language models with explanation data is needed.

TUTORING: Instruction-Grounded Conversational Agent for Language Learners

no code implementations • 24 Feb 2023 • Hyungjoo Chae, Minjin Kim, Chaehyeong Kim, Wonseok Jeong, Hyejoong Kim, Junmyung Lee, Jinyoung Yeo

In this paper, we propose Tutoring bot, a generative chatbot trained on a large-scale dataset of tutor-student conversations for English-language learning.

Chatbot • Multi-Task Learning +1

BotsTalk: Machine-sourced Framework for Automatic Curation of Large-scale Multi-skill Dialogue Datasets

1 code implementation • 23 Oct 2022 • Minju Kim, Chaehyeong Kim, Yongho Song, Seung-won Hwang, Jinyoung Yeo

To build open-domain chatbots that are able to use diverse communicative skills, we propose a novel framework, BotsTalk, where multiple agents grounded in specific target skills participate in a conversation to automatically annotate multi-skill dialogues.

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

In this paper, we propose to leverage the unique characteristics of dialogues sharing commonsense knowledge across participants, to resolve the difficulties in summarizing them.

Abstractive Dialogue Summarization • Multi-Task Learning +1

Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning

no code implementations • NAACL 2022 • Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo

Towards this goal, we propose to mitigate the loss of knowledge from the interference among the different knowledge sources, by developing a modular variant of the knowledge aggregation as a new zero-shot commonsense reasoning framework.

Knowledge Graphs • Transfer Learning

Dual Task Framework for Improving Persona-grounded Dialogue Dataset

no code implementations • 11 Feb 2022 • Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo

This paper introduces a simple yet effective data-centric approach for the task of improving persona-conditioned dialogue agents.

Benchmarking

TrustAL: Trustworthy Active Learning using Knowledge Distillation

no code implementations • 26 Jan 2022 • Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo

A traditional view of data acquisition is that, through iterations, knowledge from human labels and models is implicitly distilled to monotonically increase the accuracy and label consistency.

Active Learning • Knowledge Distillation

Meta-path Free Semi-supervised Learning for Heterogeneous Networks

no code implementations • 18 Oct 2020 • Shin-woo Park, Byung Jun Bae, Jinyoung Yeo, Seung-won Hwang

Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved superior performance in tasks such as node classification.

Node Classification • Representation Learning

Soft Representation Learning for Sparse Transfer

no code implementations • ACL 2019 • Haeju Park, Jinyoung Yeo, Gengyu Wang, Seung-won Hwang

Transfer learning is effective for improving the performance of related tasks, with multi-task learning (MTL) and cross-lingual learning (CLL) being important instances.

Multi-Task Learning • Representation Learning
