Search Results for author: Joel Jang

Found 12 papers, 9 papers with code

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

no code implementations 24 May 2023 Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

Using this finding, we develop several variants of MI and increase the effectiveness of the best prompt selection method from 87.79% to 94.98%, measured as the ratio of the performance of the selected prompt to that of the optimal oracle prompt.
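
As a companion to the entry above: mutual-information (MI) prompt selection is commonly estimated as the entropy of the marginal output distribution minus the average conditional entropy, and the reported metric is the ratio of the selected prompt's accuracy to the oracle prompt's accuracy. The sketch below is a minimal illustration under those assumptions; the array shapes, toy numbers, and helper names (mi_score, selection_ratio) are illustrative and are not the paper's released code.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats along the given axis."""
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def mi_score(label_probs):
    """MI-style score for one prompt template.

    label_probs: array of shape (num_inputs, num_labels) holding
    p(y | x, prompt) for each evaluation input x.
    MI(X; Y) is approximated as H(E_x[p(y|x)]) - E_x[H(p(y|x))].
    """
    marginal = label_probs.mean(axis=0)  # p(y) under this prompt
    return entropy(marginal) - entropy(label_probs, axis=-1).mean()

def selection_ratio(accuracy_by_prompt, scores):
    """Accuracy of the score-selected prompt relative to the oracle prompt."""
    selected = int(np.argmax(scores))
    return accuracy_by_prompt[selected] / max(accuracy_by_prompt)

# Toy example: 3 candidate prompts, 4 inputs, binary labels (made-up numbers).
probs = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.3, 0.7]],     # confident and varied -> high MI
    [[0.6, 0.4], [0.55, 0.45], [0.5, 0.5], [0.6, 0.4]],   # indifferent -> low MI
    [[0.95, 0.05], [0.9, 0.1], [0.9, 0.1], [0.95, 0.05]], # confident but one-sided -> low MI
])
accuracy_by_prompt = [0.80, 0.55, 0.50]  # hypothetical downstream accuracies

scores = [mi_score(p) for p in probs]
print("MI scores:", np.round(scores, 3))
print("selection ratio:", selection_ratio(accuracy_by_prompt, scores))
```

In this toy run the first prompt gets the highest MI score, so the selection ratio is 1.0 (the selected prompt matches the oracle).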

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

1 code implementation 23 May 2023 Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

Large Language Models (LLMs) have shown enhanced capabilities in solving novel tasks by reasoning step-by-step, known as Chain-of-Thought (CoT) reasoning; how can we instill the same step-by-step reasoning capability on unseen tasks into LMs with fewer than 100B parameters?

Few-Shot Learning

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

1 code implementation 7 Feb 2023 Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, Language Models (LMs) instruction-tuned on multiple tasks, an approach also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

1 code implementation 6 Oct 2022 Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.

Language Modelling
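
The meta-training objective described in the Flipped Learning entry above, maximizing the likelihood of the target label given the task instruction and input instance, corresponds to a cross-entropy loss computed only on the label positions. Below is a minimal sketch of that loss on toy tensors; the vocabulary size, sequence layout, and random logits are placeholder assumptions, and the paper's flipped variant (per its title, predicting the instruction rather than the label) is not implemented here.

```python
import torch
import torch.nn.functional as F

# Toy setup: vocab of 10 tokens, a sequence of 8 positions where the first 5
# positions hold the task instruction + input instance and the last 3 hold the
# target label. Logits would normally come from the LM; here they are random.
torch.manual_seed(0)
vocab_size, seq_len, label_len = 10, 8, 3
logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)
tokens = torch.randint(vocab_size, (1, seq_len))

# Maximize the likelihood of the label tokens given instruction + input, i.e.
# minimize cross-entropy on the label positions only. Prompt positions are
# masked with -100 so they do not contribute to the loss.
labels = tokens.clone()
labels[:, : seq_len - label_len] = -100

# Shift so position t predicts token t+1 (standard causal-LM convention).
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    labels[:, 1:].reshape(-1),
    ignore_index=-100,
)
loss.backward()  # in real meta-training this gradient updates the LM
print("label NLL:", loss.item())
```

In practice the logits would come from the LM's forward pass over the concatenated instruction, input, and label tokens, and the gradient would update the model's parameters.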

Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization

1 code implementation 6 Oct 2022 Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

During zero-shot inference with language models (LMs), using hard prompts alone may not be able to fully describe the target task.

Retrieval

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

1 code implementation 4 Oct 2022 Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Language Modelling
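
Knowledge unlearning, as in the entry above, is often operationalized as gradient ascent on the token sequences to be forgotten, i.e., updating the model so that their likelihood decreases; treating that as this paper's exact procedure is an assumption here, and the tiny stand-in model below is purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in "language model": embedding -> linear over a toy vocabulary.
# A real setting would use a pretrained LM; this toy model is an assumption
# kept small so the sketch stays self-contained and runnable.
vocab_size, dim = 50, 16
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# A token sequence we want the model to "forget" (e.g., memorized private text).
forget_seq = torch.randint(vocab_size, (1, 12))

def sequence_nll(model, tokens):
    """Average next-token negative log-likelihood of a sequence."""
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))

# Gradient ascent: take steps that *increase* the NLL of the target sequence,
# making the model less likely to reproduce it.
for step in range(5):
    loss = -sequence_nll(model, forget_seq)  # negate NLL so SGD ascends it
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: target-sequence NLL = {sequence_nll(model, forget_seq).item():.3f}")
```

Running the loop should show the target-sequence NLL increasing step by step; the paper's actual training setup, evaluation, and stopping criteria are more involved.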

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

1 code implementation 26 Sep 2022 Joel Jang, Seonghyeon Ye, Minjoon Seo

Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks.

Prompt Injection: Parameterization of Fixed Inputs

1 code implementation 31 May 2022 Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.

Semantic Parsing, Zero-Shot Learning

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

1 code implementation 29 Apr 2022 Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.

Continual Learning

Towards Continual Knowledge Learning of Language Models

2 code implementations ICLR 2022 Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.

Continual Learning, Fact Checking +1

Learning to Balance with Incremental Learning

no code implementations 1 Jan 2021 Joel Jang, Yoonjeon Kim, Jaewoo Kang

Classification tasks require a balanced distribution of data to ensure that the learner is trained to generalize over all classes.

Incremental Learning

Sequential Targeting: an incremental learning approach for data imbalance in text classification

no code implementations 20 Nov 2020 Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

Classification tasks require a balanced distribution of data to ensure that the learner is trained to generalize over all classes.

General Classification, Incremental Learning +2
