Search Results for author: Sohee Yang

Found 16 papers, 12 papers with code

Do Large Language Models Latently Perform Multi-Hop Reasoning?

no code implementations • 26 Feb 2024 • Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel

We find strong evidence of latent multi-hop reasoning for the prompts of certain relation types, with the reasoning pathway used in more than 80% of the prompts.

Exploring the Practicality of Generative Retrieval on Dynamic Corpora

no code implementations • 27 May 2023 • Soyoung Yoon, Chaeeun Kim, Hyunji Lee, Joel Jang, Sohee Yang, Minjoon Seo

Benchmarking the performance of information retrieval (IR) methods is mostly conducted with a fixed set of documents (static corpora); in realistic scenarios, however, this is rarely the case, and the documents to be retrieved are constantly updated and added.

Benchmarking • Information Retrieval +1

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

1 code implementation • 24 May 2023 • Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

Previous works on prompt engineering for large language models have introduced various gradient-free, probability-based prompt selection methods that aim to choose the optimal prompt among the candidates for a given task, but these methods have not been compared with one another in a comprehensive and fair way.
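As a rough illustration of what gradient-free, probability-based prompt selection looks like in general (not the unified method this paper proposes), one can score each candidate prompt template by how confidently the model answers under it on a few unlabeled inputs and keep the best-scoring template. The `model` interface, the `{input}` placeholder, and the confidence score below are illustrative assumptions.

```python
def avg_log_prob_of_prediction(model, prompt_template, unlabeled_inputs):
    """Score a prompt template by the model's average confidence in its own
    greedy prediction. `model(text)` is assumed to return a dict mapping
    candidate answers to log-probabilities (a placeholder interface).
    """
    total = 0.0
    for x in unlabeled_inputs:
        log_probs = model(prompt_template.format(input=x))  # answer -> log P(answer | prompt, x)
        total += max(log_probs.values())                    # log-prob of the greedy prediction
    return total / len(unlabeled_inputs)


def select_prompt(model, candidate_templates, unlabeled_inputs):
    """Pick the candidate prompt template with the highest confidence score."""
    return max(
        candidate_templates,
        key=lambda t: avg_log_prob_of_prediction(model, t, unlabeled_inputs),
    )
```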

Prompt Engineering

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

2 code implementations • 28 Feb 2023 • Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo

In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.

Instruction Following • Zero-shot Generalization

Nonparametric Decoding for Generative Retrieval

1 code implementation • 5 Oct 2022 • Hyunji Lee, Jaeyoung Kim, Hoyeon Chang, Hanseok Oh, Sohee Yang, Vlad Karpukhin, Yi Lu, Minjoon Seo

Since the generative retrieval model depends solely on the information encoded in its model parameters without external memory, its information capacity is limited and fixed.

Language Modelling • Retrieval +1

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

1 code implementation • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Ranked #3 on Language Modelling on The Pile (Test perplexity metric)

Language Modelling

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

1 code implementation • 29 Apr 2022 • Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.

Continual Learning

Generative Multi-hop Retrieval

1 code implementation • 27 Apr 2022 • Hyunji Lee, Sohee Yang, Hanseok Oh, Minjoon Seo

A common practice for text retrieval is to use an encoder to map the documents and the query to a common vector space and perform a nearest neighbor search (NNS); multi-hop retrieval also often adopts the same paradigm, usually with a modification of iteratively reformulating the query vector so that it can retrieve different documents at each hop.
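A minimal sketch of that query-reformulation paradigm, assuming a bi-encoder `encode(text)` that returns a NumPy embedding and using plain text concatenation as the reformulation step; both are illustrative stand-ins rather than the setup used in the paper.

```python
import numpy as np

def multi_hop_nns(query, passages, encode, num_hops=2):
    """Iterative dense retrieval: at each hop, retrieve the most similar
    passage by inner product, then fold it back into the query so the next
    hop can retrieve a different document.
    """
    passage_vecs = np.stack([encode(p) for p in passages])
    retrieved, seen, current_query = [], set(), query
    for _ in range(num_hops):
        scores = passage_vecs @ encode(current_query)          # nearest neighbor search
        for i in seen:
            scores[i] = -np.inf                                # don't re-retrieve earlier hops
        best = int(np.argmax(scores))
        seen.add(best)
        retrieved.append(passages[best])
        current_query = current_query + " " + passages[best]   # reformulate the query
    return retrieved
```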

Retrieval • Text Retrieval

Towards Continual Knowledge Learning of Language Models

2 code implementations • ICLR 2022 • Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.

Continual Learning • Fact Checking +2

Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering

1 code implementation • NAACL 2021 • Sohee Yang, Minjoon Seo

In open-domain question answering (QA), the retrieve-and-read mechanism has the inherent benefits of interpretability and the ease of adding, removing, or editing knowledge, compared to the parametric approaches of closed-book QA models.

Open-Domain Question Answering

Is Retriever Merely an Approximator of Reader?

no code implementations • 21 Oct 2020 • Sohee Yang, Minjoon Seo

The state of the art in open-domain question answering (QA) relies on an efficient retriever that drastically reduces the search space for the expensive reader.
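A minimal sketch of that retriever-reader division of labor, with `retriever` and `reader` as hypothetical stubs rather than any specific system: the retriever cheaply narrows the corpus to a few passages, and the expensive reader runs only on those.

```python
def answer_question(question, corpus, retriever, reader, top_k=5):
    """Two-stage open-domain QA. `retriever(question, corpus, top_k)` is
    assumed to return a list of passages, and `reader(question, passage)` an
    (answer, score) pair; both are illustrative stubs.
    """
    passages = retriever(question, corpus, top_k=top_k)   # cheap: scans the whole corpus
    candidates = [reader(question, p) for p in passages]  # expensive: only top_k passages
    return max(candidates, key=lambda pair: pair[1])[0]   # best-scoring answer
```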

Open-Domain Question Answering

Spatial Dependency Parsing for Semi-Structured Document Information Extraction

1 code implementation • Findings (ACL) 2021 • Wonseok Hwang, Jinyeong Yim, Seunghyun Park, Sohee Yang, Minjoon Seo

Information Extraction (IE) for semi-structured document images is often approached as a sequence tagging problem by classifying each recognized input token into one of the IOB (Inside, Outside, and Beginning) categories.
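For reference, a small sketch of the IOB-tagging formulation the abstract describes as the usual baseline (not the spatial dependency parsing approach this paper proposes): each recognized token receives a B-, I-, or O tag, and contiguous B/I runs are decoded into field spans. The field names in the example are made up.

```python
def decode_iob(tokens, tags):
    """Turn per-token IOB tags (e.g. 'B-total', 'I-total', 'O') into
    (field, text) spans, the standard decoding step of an IOB tagger.
    """
    spans, current_field, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                         # a new field starts here
            if current_field:
                spans.append((current_field, " ".join(current_tokens)))
            current_field, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_field == tag[2:]:
            current_tokens.append(token)                 # continue the current field
        else:                                            # 'O' or an inconsistent tag ends the span
            if current_field:
                spans.append((current_field, " ".join(current_tokens)))
            current_field, current_tokens = None, []
    if current_field:
        spans.append((current_field, " ".join(current_tokens)))
    return spans

# Example: two recognized tokens labelled as one "total" field.
print(decode_iob(["Total", "11,000"], ["B-total", "I-total"]))  # [('total', 'Total 11,000')]
```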

Dependency Parsing

Efficient Dialogue State Tracking by Selectively Overwriting Memory

3 code implementations • ACL 2020 • Sungdong Kim, Sohee Yang, Gyuwan Kim, Sang-Woo Lee

This mechanism consists of two steps: (1) predicting a state operation for each of the memory slots, and (2) overwriting the memory with new values, only a few of which are generated according to the predicted state operations.
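A minimal sketch of that two-step mechanism, with a hypothetical operation set (CARRYOVER, DELETE, UPDATE) and stub predictors; only the slots predicted as UPDATE trigger value generation, which is where the efficiency comes from.

```python
def update_dialogue_state(memory, utterance, predict_operation, generate_value):
    """Selective overwriting: (1) predict an operation per memory slot,
    (2) generate new values only for slots marked UPDATE.

    `predict_operation(slot, old_value, utterance)` is assumed to return one of
    "CARRYOVER" | "DELETE" | "UPDATE", and `generate_value(slot, utterance)` a
    string; both are illustrative stubs, and the operation set is an assumption.
    """
    new_memory = {}
    for slot, old_value in memory.items():
        op = predict_operation(slot, old_value, utterance)      # step 1: operation per slot
        if op == "CARRYOVER":
            new_memory[slot] = old_value                        # keep the previous value
        elif op == "DELETE":
            continue                                            # drop the slot entirely
        elif op == "UPDATE":
            new_memory[slot] = generate_value(slot, utterance)  # step 2: generate only here
    return new_memory
```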

Dialogue State Tracking • Multi-domain Dialogue State Tracking

Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation

1 code implementation • ICLR 2019 • Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha

Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems.

Question Generation • Question-Generation +1
