Search Results for author: Yongrae Jo

Found 5 papers, 4 with code

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

1 code implementation • 20 Jul 2023 • Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo

Evaluation of Large Language Models (LLMs) is challenging because instruction following necessitates alignment with human values, and the required set of skills varies depending on the instruction.

Instruction Following • Language Modelling

Zero-Shot Dense Video Captioning by Jointly Optimizing Text and Moment

no code implementations • 5 Jul 2023 • Yongrae Jo, Seongyun Lee, Aiden SJ Lee, Hyunji Lee, Hanseok Oh, Minjoon Seo

This is accomplished by introducing a soft moment mask that represents a temporal segment in the video and jointly optimizing it with the prefix parameters of a language model.

Language Modelling • Text Generation • +1
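The soft moment mask described above can be illustrated with a toy sketch. Here the mask is assumed to be parameterized by a differentiable center and width over normalized frame positions (an illustrative assumption; the paper's exact parameterization may differ), and per-frame features are pooled under the mask:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_moment_mask(num_frames, center, width, sharpness=10.0):
    """Differentiable soft mask over frame positions in [0, 1]:
    close to 1 inside the moment [center - width/2, center + width/2],
    close to 0 outside. Smooth in `center` and `width`, so both can be
    optimized by gradient descent (hypothetical parameterization)."""
    t = np.linspace(0.0, 1.0, num_frames)
    left = sigmoid(sharpness * (t - (center - width / 2)))
    right = sigmoid(sharpness * ((center + width / 2) - t))
    return left * right  # values strictly in (0, 1)

def masked_pool(frame_features, mask):
    """Weighted average of per-frame features under the soft mask."""
    weights = mask / mask.sum()
    return weights @ frame_features

# Toy example: 16 frames with 4-dim features, moment centered at 0.5.
feats = np.random.RandomState(0).randn(16, 4)
mask = soft_moment_mask(16, center=0.5, width=0.25)
pooled = masked_pool(feats, mask)
```

Because the mask is smooth rather than a hard segment boundary, its center and width can be optimized jointly with the language model's prefix parameters, which is the mechanism the abstract points to.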

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt

1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

Enhancing the zero-shot performance of instruction-following models requires heavy computation, either by scaling the total number of training datasets or the model size.

Instruction Following • Retrieval
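The retrieval idea in the title can be sketched as a nearest-neighbor lookup over a library of pre-trained soft prompts, keyed by an embedding, so no new prompt training is needed at inference. The embeddings and library below are made up for illustration; the actual retrieval key and prompt format follow the paper:

```python
import numpy as np

def retrieve_soft_prompt(query_emb, key_embs, soft_prompts):
    """Return the index and soft prompt whose key embedding is most
    similar (cosine similarity) to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    k = key_embs / np.linalg.norm(key_embs, axis=1, keepdims=True)
    idx = int(np.argmax(k @ q))
    return idx, soft_prompts[idx]

# Toy library: 3 source tasks, each with a (prompt_len, dim) soft prompt.
rng = np.random.RandomState(0)
keys = rng.randn(3, 8)
prompts = rng.randn(3, 5, 8)

# A query near task 1's key should retrieve task 1's soft prompt,
# which would then be prepended to the model's input embeddings.
query = keys[1] + 0.01 * rng.randn(8)
idx, prompt = retrieve_soft_prompt(query, keys, prompts)
```

Retrieval replaces the expensive alternative the abstract mentions (scaling training data or model size) with a cheap similarity search over already-trained prompts.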

Prompt Injection: Parameterization of Fixed Inputs

3 code implementations • 31 May 2022 • Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.

Semantic Parsing • Zero-Shot Learning
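The core idea of Prompt Injection (PI) — absorbing a long, fixed prompt into model parameters so it no longer occupies input length at inference — can be illustrated with a toy linear "model", where the prompt's contribution folds exactly into a bias term. This is an analogy only, not the paper's method (which operates on language models):

```python
import numpy as np

rng = np.random.RandomState(0)
d_prompt, d_x, d_out = 32, 4, 3

# "Teacher": a linear model that always sees [fixed_prompt; x],
# paying for the prompt's length on every call.
W = rng.randn(d_out, d_prompt + d_x)
fixed_prompt = rng.randn(d_prompt)

def teacher(x):
    return W @ np.concatenate([fixed_prompt, x])

# "Injection": precompute the fixed prompt's contribution once,
# so inference only processes the live input x.
W_x = W[:, d_prompt:]                   # weights acting on x
bias = W[:, :d_prompt] @ fixed_prompt   # prompt folded into parameters

def injected(x):
    return W_x @ x + bias

x = rng.randn(d_x)
```

In the linear case the parameterization is exact; for language models the paper instead trains the model to reproduce the prompt-conditioned behavior, which is what makes PI attractive for long, fixed prompts.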
