Search Results for author: Hyeonbin Hwang

Found 5 papers, 3 papers with code

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

1 code implementation • 16 Apr 2024 • Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo

Training on large amounts of rationales (i.e., CoT Fine-tuning) is effective at improving the reasoning capabilities of large language models (LLMs).

GSM8K • Math
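
The snippet above refers to CoT fine-tuning, i.e., supervised training on step-by-step rationales. Below is a minimal sketch of how such training data is typically packed into prompt/completion pairs; the field names, prompt template, and the toy word problem are illustrative assumptions, not the paper's released code or a GSM8K item.

```python
# Hedged sketch of CoT fine-tuning data preparation. The template
# ("Let's think step by step") and field names are assumptions.

def format_cot_example(question: str, rationale: str, answer: str) -> dict:
    """Pack a question and its step-by-step rationale into a
    prompt/completion pair for supervised fine-tuning."""
    prompt = f"Question: {question}\nAnswer: Let's think step by step.\n"
    completion = f"{rationale}\nThe answer is {answer}."
    return {"prompt": prompt, "completion": completion}

# Illustrative arithmetic problem (not from GSM8K)
example = format_cot_example(
    question="A shop sold 48 apples on Monday and half as many on Tuesday. How many in total?",
    rationale="Half of 48 is 24. 48 + 24 = 72.",
    answer="72",
)
print(example["prompt"] + example["completion"])
```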

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

1 code implementation • 20 Jul 2023 • Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo

Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction.

Instruction Following • Language Modelling
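
FLASK scores responses along a set of alignment skills rather than with a single overall rating. A toy sketch of that per-skill aggregation is below; the skill names and the 1-5 scale here are illustrative assumptions, not the paper's exact rubric.

```python
# Toy sketch of skill-level score aggregation in the spirit of FLASK.
# Skill names and the 1-5 scale are assumed for illustration.

from statistics import mean

def aggregate_skill_scores(per_instance: list[dict[str, int]]) -> dict[str, float]:
    """Average per-skill ratings across all evaluated instances."""
    skills = per_instance[0].keys()
    return {s: mean(inst[s] for inst in per_instance) for s in skills}

ratings = [
    {"logical_correctness": 4, "factuality": 5, "conciseness": 3},
    {"logical_correctness": 3, "factuality": 4, "conciseness": 4},
]
print(aggregate_skill_scores(ratings))
# {'logical_correctness': 3.5, 'factuality': 4.5, 'conciseness': 3.5}
```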

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

2 code implementations • 28 Feb 2023 • Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo

In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.

Instruction Following • Zero-shot Generalization
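
The core idea is simple to express in code: the same fixed prefix is prepended to every instruction at inference time, regardless of the target task. The sketch below assumes a generic instruction/input/output template; the demonstration text is a placeholder, not the prefix released with the paper.

```python
# Minimal sketch of TAPP-style inference: a fixed, task-agnostic prefix
# of demonstrations precedes every query. Demonstration content is a
# placeholder assumption.

TASK_AGNOSTIC_PREFIX = (
    "Instruction: Classify the sentiment of the sentence as positive or negative.\n"
    "Input: I loved this movie.\n"
    "Output: positive\n\n"
)

def build_tapp_input(instruction: str, task_input: str = "") -> str:
    """Prepend the fixed prefix so the LLM sees in-context examples
    before the actual (unseen) task."""
    query = f"Instruction: {instruction}\n"
    if task_input:
        query += f"Input: {task_input}\n"
    return TASK_AGNOSTIC_PREFIX + query + "Output:"

prompt = build_tapp_input("Translate to French.", "Good morning!")
# pass `prompt` to any decoder-style LLM's generation call
```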

MED-SE: Medical Entity Definition-based Sentence Embedding

no code implementations • 9 Dec 2022 • Hyeonbin Hwang, Haanju Yoo, Yera Choi

We propose Medical Entity Definition-based Sentence Embedding (MED-SE), a novel unsupervised contrastive learning framework designed for clinical texts, which exploits the definitions of medical entities.

Contrastive Learning • Semantic Textual Similarity • +4
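
A definition-based contrastive objective of this kind can be sketched as in-batch InfoNCE, where a clinical sentence is pulled toward the definition of an entity it mentions and pushed away from the other definitions in the batch. This is a generic formulation under that assumption, not the paper's exact loss or hyperparameters.

```python
# Hedged sketch of a definition-based contrastive loss in the spirit
# of MED-SE: generic in-batch InfoNCE, with an assumed temperature.

import torch
import torch.nn.functional as F

def definition_contrastive_loss(sent_emb: torch.Tensor,
                                def_emb: torch.Tensor,
                                temperature: float = 0.05) -> torch.Tensor:
    """sent_emb[i] and def_emb[i] form a positive pair; all other
    definitions in the batch serve as in-batch negatives."""
    sent = F.normalize(sent_emb, dim=-1)
    defs = F.normalize(def_emb, dim=-1)
    logits = sent @ defs.T / temperature  # (B, B) cosine similarities
    labels = torch.arange(sent.size(0))   # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# usage: embeddings from any sentence encoder, e.g. a BERT [CLS] vector
loss = definition_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```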
