Search Results for author: Beong-woo Kwak

Found 5 papers, 0 papers with code

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

no code implementations • 3 Apr 2024 • Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, SeongHwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

We also show that, compared to natural language, pseudocode better guides the reasoning of LMs, even though LMs are trained to follow natural-language instructions.
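
As a rough illustration of this idea, the sketch below prompts an LM to act as an interpreter for task-level pseudocode. The function name, prompt wording, and query_lm callable are hypothetical stand-ins, not the paper's actual implementation.

# Minimal sketch of prompting an LM to simulate pseudocode execution.
# `query_lm` is a hypothetical stand-in for any text-completion API.

PSEUDOCODE_PROMPT = """\
You are given pseudocode and an input instance.
Simulate the execution of the pseudocode step by step,
tracking intermediate variable values, then report the final output.

Pseudocode:
{pseudocode}

Input:
{instance}

Execution trace and final answer:"""


def solve_with_pseudocode(query_lm, pseudocode: str, instance: str) -> str:
    # Ask the LM to act as an interpreter for task-level pseudocode,
    # rather than giving it a natural-language instruction.
    prompt = PSEUDOCODE_PROMPT.format(pseudocode=pseudocode, instance=instance)
    return query_lm(prompt)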

Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset

no code implementations • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee

Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
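
For a concrete sense of what such an utterance could look like, here is a hypothetical, hand-written example of a review-driven, persona-grounded recommendation turn. The field names and content are invented for illustration and do not reflect the actual PEARL schema.

# Hypothetical example of a review-driven, persona-grounded
# recommendation turn. Field names and content are invented; consult
# the dataset release for the actual PEARL schema.
example_turn = {
    "persona": "Enjoys slow-burn mysteries with unreliable narrators.",
    "review_knowledge": "Reviews praise the novel's twisting dual narration.",
    "user_utterance": "I want a thriller where I can't trust the narrator.",
    "recommendation": "a dual-narrator psychological thriller",
    "system_response": "Since you like unreliable narrators, a thriller "
                       "told through two conflicting accounts should fit.",
}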

Recommendation Systems

Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning

no code implementations • NAACL 2022 • Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo

Toward this goal, we propose to mitigate the loss of knowledge caused by interference among the different knowledge sources by developing a modular variant of knowledge aggregation as a new zero-shot commonsense reasoning framework.
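
One way to picture such a modular design is a per-source expert combined by a learned gate, as in the PyTorch sketch below. The architecture and names are assumptions made for illustration, not the paper's exact model.

import torch
import torch.nn as nn

class ModularKGAggregator(nn.Module):
    """Illustrative modular aggregation: one expert per knowledge graph,
    combined by a learned gate so the sources do not interfere with each
    other. An assumed architecture, not the paper's exact design."""

    def __init__(self, hidden_dim: int, num_kgs: int):
        super().__init__()
        # One lightweight expert module per knowledge graph.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            for _ in range(num_kgs)
        )
        # The gate decides how much each knowledge source contributes.
        self.gate = nn.Linear(hidden_dim, num_kgs)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim) encoding of the question.
        weights = torch.softmax(self.gate(h), dim=-1)               # (batch, num_kgs)
        outputs = torch.stack([e(h) for e in self.experts], dim=1)  # (batch, num_kgs, hidden_dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)         # (batch, hidden_dim)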

Knowledge Graphs • Transfer Learning

Dual Task Framework for Improving Persona-grounded Dialogue Dataset

no code implementations • 11 Feb 2022 • Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo

This paper introduces a simple yet effective data-centric approach for improving persona-conditioned dialogue agents.

Benchmarking

TrustAL: Trustworthy Active Learning using Knowledge Distillation

no code implementations • 26 Jan 2022 • Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo

A traditional view of data acquisition is that, over iterations, knowledge from human labels and models is implicitly distilled, monotonically increasing accuracy and label consistency.
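
The sketch below illustrates the contrasting idea of making that distillation explicit: each active-learning round distills the newly trained model from its predecessor. All helper functions (acquire, label, train, distill) are hypothetical, and this is a schematic loop rather than the paper's method.

# Sketch of an active-learning loop in which each new model is
# explicitly distilled from its predecessor, so earlier knowledge is
# retained rather than assumed to accumulate. All helpers are
# hypothetical; `acquire` is assumed to remove the selected examples
# from `unlabeled`.

def active_learning_with_distillation(unlabeled, rounds, acquire, label,
                                      train, distill):
    labeled, teacher = [], None
    for _ in range(rounds):
        batch = acquire(unlabeled, teacher)   # pick informative examples
        labeled.extend(label(batch))          # query human annotators
        student = train(labeled)              # fit on all labels so far
        if teacher is not None:
            # Transfer the previous model's knowledge to the new one.
            student = distill(student, teacher, unlabeled)
        teacher = student
    return teacher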

Active Learning • Knowledge Distillation
