Search Results for author: Ori Ram

Found 13 papers, 11 papers with code

Making Retrieval-Augmented Language Models Robust to Irrelevant Context

1 code implementation • 2 Oct 2023 • Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant

An important desideratum of RALMs is that retrieved information helps model performance when it is relevant, and does not harm performance when it is not.

Language Modelling · Natural Language Inference · +2

Generating Benchmarks for Factuality Evaluation of Language Models

2 code implementations • 13 Jul 2023 • Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham

FACTOR automatically transforms a factual corpus of interest into a benchmark evaluating an LM's propensity to generate true facts from the corpus vs. similar but incorrect statements.

Language Modelling · Retrieval
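
The core comparison behind such a benchmark can be illustrated in a few lines: score a factually correct statement and a similar but incorrect variant under a language model and check which one gets higher likelihood. A minimal sketch, assuming an off-the-shelf GPT-2 from Hugging Face transformers and made-up example statements; the actual FACTOR construction and scoring protocol are more involved.

```python
# Minimal sketch: compare LM likelihood of a true statement vs. a similar but
# incorrect one. Model choice and example sentences are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Sum of token log-probabilities of `text` under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # position t predicts t+1
    target = ids[:, 1:]
    return log_probs.gather(-1, target.unsqueeze(-1)).sum().item()

true_fact = "The Eiffel Tower is located in Paris."
false_fact = "The Eiffel Tower is located in Rome."
print(sequence_log_prob(true_fact) > sequence_log_prob(false_fact))
```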

In-Context Retrieval-Augmented Language Models

1 code implementation • 31 Jan 2023 • Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, were shown to significantly improve language modeling performance.

Language Modelling · Retrieval · +1
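
In its simplest in-context form, conditioning on retrieved documents amounts to prepending them to the LM input before generation. A minimal sketch under that reading, using a toy corpus, a word-overlap placeholder instead of a real retriever, and GPT-2 as the off-the-shelf LM; none of these choices reflect the paper's actual experimental setup.

```python
# Minimal sketch of in-context retrieval augmentation: retrieve a document for the
# prompt and prepend it to the LM input. Corpus, retriever, and model are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

corpus = [
    "Marie Curie won Nobel Prizes in both physics and chemistry.",
    "The Great Barrier Reef lies off the coast of Queensland, Australia.",
]

def retrieve(query: str) -> str:
    # Placeholder retriever: pick the document with the largest word overlap.
    q = set(query.lower().split())
    return max(corpus, key=lambda d: len(q & set(d.lower().split())))

prompt = "Marie Curie was awarded"
grounded_prompt = retrieve(prompt) + "\n" + prompt  # prepend retrieved evidence

inputs = tokenizer(grounded_prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```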

Parallel Context Windows for Large Language Models

1 code implementation • 21 Dec 2022 • Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training.

In-Context Learning · Playing the Game of 2048 · +2
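
One way to picture "parallel context windows" is that several chunks of context reuse the same positional encodings and attend only within their own chunk, while the tokens of the actual task attend to everything. The sketch below builds position ids and an attention mask in that spirit; it is an illustrative reconstruction under that assumption, not the paper's reference implementation.

```python
# Illustrative sketch (assumed reading of PCW): windows share position ids and
# attend only within themselves; trailing task tokens attend to all prior tokens.
import torch

window_len, n_windows, n_task_tokens = 4, 3, 2
ctx_len = window_len * n_windows
total = ctx_len + n_task_tokens

# Every window reuses positions 0..window_len-1; task tokens continue afterwards.
position_ids = torch.cat([
    torch.arange(window_len).repeat(n_windows),
    torch.arange(window_len, window_len + n_task_tokens),
])

# Context tokens see only their own window; task tokens see everything before them.
mask = torch.zeros(total, total, dtype=torch.bool)
for w in range(n_windows):
    s = w * window_len
    mask[s:s + window_len, s:s + window_len] = True
mask[ctx_len:, :] = True
mask &= torch.tril(torch.ones(total, total, dtype=torch.bool))  # keep causality

print(position_ids)
print(mask.int())
```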

What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary

1 code implementation • 20 Dec 2022 • Ori Ram, Liat Bezalel, Adi Zicher, Yonatan Belinkov, Jonathan Berant, Amir Globerson

We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, and show that this significantly improves performance over the original model in zero-shot settings, particularly on the BEIR benchmark.

Retrieval
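
The framing in the title, reading dense retrieval representations as distributions over the vocabulary, can be pictured by projecting an encoder's [CLS] vector through a masked-LM vocabulary head. A small sketch assuming bert-base-uncased and an arbitrary query; the paper's analysis and the proposed lexical enrichment go well beyond this.

```python
# Sketch: inspect the MLM logits at the [CLS] position, i.e. the [CLS] representation
# projected through the vocabulary head, as a distribution over tokens.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

query = "what causes the northern lights"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # [1, seq_len, vocab_size]

cls_dist = torch.softmax(logits[0, 0], dim=-1)  # distribution induced by [CLS]
top = torch.topk(cls_dist, k=10)
print(tokenizer.convert_ids_to_tokens(top.indices.tolist()))
```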

Standing on the Shoulders of Giant Frozen Language Models

no code implementations • 21 Apr 2022 • Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

To demonstrate this, we introduce three novel methods for leveraging frozen models: input-dependent prompt tuning, frozen readers, and recursive LMs, each of which vastly improves on current frozen-model approaches.

Transformer Language Models without Positional Encodings Still Learn Positional Information

1 code implementation • 30 Mar 2022 • Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy

Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings.

Position

Learning to Retrieve Passages without Supervision

1 code implementation • NAACL 2022 • Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, Amir Globerson

Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs.

Contrastive Learning · Open-Domain Question Answering · +1

How Optimal is Greedy Decoding for Extractive Question Answering?

1 code implementation • 12 Aug 2021 • Or Castel, Ori Ram, Avia Efrat, Omer Levy

However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one.

Extractive Question-Answering · Question Answering · +1
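
The gap is easy to picture: greedy decoding from a generative QA model may emit text that is neither a span of the passage nor the model's most probable span, whereas scoring every passage span and taking the argmax guarantees both. A rough sketch assuming t5-small, a simple question/context prompt, and a made-up example; the brute-force enumeration here is only for illustration.

```python
# Sketch: greedy decoding vs. exhaustive scoring of passage spans with a seq2seq QA model.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.eval()

passage = "Ada Lovelace wrote the first published algorithm for a machine."
question = "Who wrote the first published algorithm?"
enc = tokenizer(f"question: {question} context: {passage}", return_tensors="pt")

# Greedy decoding: the output is not guaranteed to be a span of the passage.
greedy = model.generate(**enc, max_new_tokens=10)
print("greedy:", tokenizer.decode(greedy[0], skip_special_tokens=True))

def target_log_prob(answer: str) -> float:
    labels = tokenizer(answer, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean NLL per target token
    return -loss.item() * labels.shape[1]

# Exhaustive search restricted to spans that actually occur in the passage.
words = passage.split()
spans = [" ".join(words[i:j]) for i in range(len(words))
         for j in range(i + 1, min(i + 6, len(words)) + 1)]
print("best span:", max(spans, key=target_log_prob))
```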

Few-Shot Question Answering by Pretraining Span Selection

4 code implementations • ACL 2021 • Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy

Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span.

Question Answering
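
The example construction described above can be mimicked with plain string processing: detect a span that recurs in a passage, keep one occurrence, and mask the rest. A toy sketch using a naive exact-match heuristic for recurring spans and a placeholder mask token; the paper's actual preprocessing is considerably more careful.

```python
# Toy sketch: mask all but one occurrence of a recurring span so the model can be
# asked to point back to the kept occurrence. Heuristics and token name are placeholders.
from collections import Counter

MASK = "[QUESTION]"  # placeholder name for the special mask token

def recurring_ngrams(words, n=2):
    """All n-grams that occur more than once in the passage."""
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return [g for g, c in Counter(grams).items() if c > 1]

def mask_all_but_one(passage: str, span: str) -> str:
    """Keep the first occurrence of `span`, mask every later occurrence."""
    first = passage.find(span)
    head = passage[:first + len(span)]
    return head + passage[first + len(span):].replace(span, MASK)

passage = ("Ada Lovelace was born in London. Ada Lovelace is often described "
           "as the first computer programmer.")
span = recurring_ngrams(passage.split())[0]   # e.g. "Ada Lovelace"
print(mask_all_but_one(passage, span))
```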

Coreference Resolution without Span Representations

1 code implementation • ACL 2021 • Yuval Kirstain, Ori Ram, Omer Levy

The introduction of pretrained language models has reduced many complex task-specific NLP models to simple lightweight layers.

Coreference Resolution
