Search Results for author: Taeyoon Kwon

Found 9 papers, 2 papers with code

Evaluating Robustness of Reward Models for Mathematical Reasoning

no code implementations • 2 Oct 2024 • Sunghwan Kim, Dongjin Kang, Taeyoon Kwon, Hyungjoo Chae, Jungsoo Won, Dongha Lee, Jinyoung Yeo

In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.

Math • Mathematical Reasoning
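
A minimal sketch of the general idea behind such a robustness evaluation: score one correct solution against several incorrect candidates with a reward model and check how often the correct one is ranked highest. The `score` function and the toy data are hypothetical placeholders for illustration, not the paper's benchmark or models.

```python
# Sketch: does a reward model consistently rank the correct solution above
# incorrect candidates? `score` is a hypothetical stand-in for a real reward
# model; the toy data below is illustrative only.
from typing import Callable, List

def ranks_correct_first(score: Callable[[str, str], float],
                        problem: str,
                        correct: str,
                        incorrect: List[str]) -> bool:
    """True if the correct solution outscores every incorrect candidate."""
    correct_score = score(problem, correct)
    return all(correct_score > score(problem, sol) for sol in incorrect)

def robustness(score: Callable[[str, str], float], dataset: list) -> float:
    """Fraction of problems on which the correct solution beats all distractors."""
    wins = sum(ranks_correct_first(score, ex["problem"], ex["correct"], ex["incorrect"])
               for ex in dataset)
    return wins / len(dataset)

if __name__ == "__main__":
    toy_score = lambda problem, solution: float(len(solution))  # length-biased toy scorer
    toy_data = [{"problem": "2 + 2 = ?", "correct": "4", "incorrect": ["5", "22"]}]
    print(robustness(toy_score, toy_data))  # 0.0 -- the toy scorer is not robust
```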

Large Language Models Are Self-Taught Reasoners: Enhancing LLM Applications via Tailored Problem-Solving Demonstrations

no code implementations • 22 Aug 2024 • Kai Tzu-iunn Ong, Taeyoon Kwon, Jinyoung Yeo

Guiding large language models with a selected set of human-authored demonstrations is a common practice for improving LLM applications.

Multiple-choice
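
The excerpt above refers to few-shot prompting with selected demonstrations. Below is a minimal sketch of assembling such a prompt; the word-overlap selection heuristic and the toy demonstration pool are assumptions for illustration, not the selection method proposed in the paper.

```python
# Sketch of few-shot prompting: select demonstrations for a query and prepend
# them to the prompt. The word-overlap heuristic is illustrative only.
def select_demos(query: str, pool: list, k: int = 2) -> list:
    """Pick the k demonstrations whose questions share the most words with the query."""
    overlap = lambda d: len(set(query.lower().split()) & set(d["question"].lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

def build_prompt(query: str, demos: list) -> str:
    """Prepend worked examples (question + solution) before the new question."""
    blocks = [f"Q: {d['question']}\nA: {d['solution']}" for d in demos]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

pool = [
    {"question": "What is 3 * 7?", "solution": "3 * 7 = 21. The answer is 21."},
    {"question": "Name the capital of France.", "solution": "The capital of France is Paris."},
]
print(build_prompt("What is 6 * 7?", select_demos("What is 6 * 7?", pool)))
```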

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

no code implementations • 3 Apr 2024 • Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, SeongHwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

Also, we show that compared to natural language, pseudocode can better guide the reasoning of LMs, even though they are trained to follow natural language instructions.
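
A minimal sketch of what prompting a model to simulate pseudocode execution might look like. The prompt wording is an assumption for illustration, and `query_llm` is a hypothetical placeholder for a model call, not code from the paper.

```python
# Sketch: ask a language model to "execute" pseudocode step by step.
# `query_llm` is a hypothetical placeholder for an actual model API call.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

PSEUDOCODE = """\
function count_vowels(s):
    count = 0
    for ch in s:
        if ch in "aeiou":
            count = count + 1
    return count
"""

def simulate(pseudocode: str, argument: str) -> str:
    prompt = (
        "You are given pseudocode. Simulate its execution line by line,\n"
        "tracking every variable, then state the final return value.\n\n"
        f"{pseudocode}\n"
        f"Input: s = {argument!r}\n"
        "Trace:"
    )
    return query_llm(prompt)

# Example (requires a real query_llm implementation):
# print(simulate(PSEUDOCODE, "banana"))
```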

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation

no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo

Motivated by these observations, we explore the impact of the inherent preference in LLMs on providing emotional support, and we observe that a high preference for specific strategies hinders effective emotional support and undermines the model's robustness in predicting the appropriate strategy.

Emotional Intelligence
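
A minimal sketch of how one might surface such a preference bias: count how often a model picks each support strategy over a set of dialogues and look for a skewed distribution. The strategy labels and `predict_strategy` are hypothetical placeholders, not the paper's setup.

```python
# Sketch: measure how skewed a model's strategy predictions are across
# emotional-support dialogues. Labels and the classifier are placeholders.
from collections import Counter

STRATEGIES = ["Question", "Reflection of Feelings", "Providing Suggestions", "Affirmation"]

def predict_strategy(dialogue: str) -> str:
    raise NotImplementedError("plug in an LLM-based strategy predictor here")

def preference_distribution(dialogues: list) -> dict:
    """Fraction of dialogues assigned to each strategy; a heavily skewed
    distribution suggests a preference bias toward certain strategies."""
    counts = Counter(predict_strategy(d) for d in dialogues)
    total = sum(counts.values()) or 1
    return {s: counts.get(s, 0) / total for s in STRATEGIES}
```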

Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales

1 code implementation • 12 Dec 2023 • Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo

Specifically, we address clinical reasoning for disease diagnosis, where the LLM generates diagnostic rationales that provide its insight into the presented patient data and the reasoning path towards the diagnosis, namely Clinical Chain-of-Thought (Clinical CoT).

Reading Comprehension
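
A minimal sketch of a rationale-then-diagnosis prompt in the spirit of the Clinical CoT idea described above. The template wording is an assumption and `query_llm` is a hypothetical placeholder; neither is the paper's actual prompt or code.

```python
# Sketch: ask the model for a step-by-step diagnostic rationale before the
# final diagnosis. `query_llm` is a hypothetical placeholder for a model call.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def clinical_cot(patient_data: str) -> str:
    prompt = (
        "Patient findings:\n"
        f"{patient_data}\n\n"
        "First, write a step-by-step diagnostic rationale that interprets the\n"
        "findings above. Then, on a final line starting with 'Diagnosis:',\n"
        "state the most likely diagnosis."
    )
    return query_llm(prompt)

# Example (requires a real query_llm implementation):
# print(clinical_cot("67-year-old with progressive memory decline; MRI shows hippocampal atrophy."))
```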

Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback

no code implementations • 13 Nov 2023 • Seungjun Moon, Hyungjoo Chae, Yongho Song, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo

Hence, the focus of our work is to leverage open-source code LLMs to generate helpful feedback with correct guidance for code editing.

Program Synthesis
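
A minimal two-step sketch under that reading: one call critiques the buggy code, a second call edits the code guided by the feedback. `query_llm` and both prompts are hypothetical placeholders, not the Coffee pipeline itself.

```python
# Sketch of a feedback-then-edit loop: critique the buggy code against an
# error report, then rewrite the code guided by that feedback.
# `query_llm` is a hypothetical placeholder for a code LLM call.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a code LLM here")

def generate_feedback(buggy_code: str, error_report: str) -> str:
    prompt = (
        "The following code is incorrect.\n\n"
        f"Code:\n{buggy_code}\n\n"
        f"Observed failure:\n{error_report}\n\n"
        "Explain what is wrong and how to fix it, without writing the fixed code."
    )
    return query_llm(prompt)

def apply_feedback(buggy_code: str, feedback: str) -> str:
    prompt = (
        "Rewrite the code below so that it is correct, following the feedback.\n\n"
        f"Code:\n{buggy_code}\n\n"
        f"Feedback:\n{feedback}\n\n"
        "Return only the corrected code."
    )
    return query_llm(prompt)
```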
