1 code implementation • 2 Oct 2023 • Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant
An important desideratum of RALMs is that retrieved information helps model performance when it is relevant and does not harm performance when it is not.
Ranked #3 on Question Answering on Bamboogle
2 code implementations • 13 Jul 2023 • Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham
FACTOR automatically transforms a factual corpus of interest into a benchmark evaluating an LM's propensity to generate true facts from the corpus vs. similar but incorrect statements.
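As a rough illustration of this evaluation scheme (a minimal sketch, not the released benchmark code; `log_prob` is an assumed helper returning a model's log-likelihood of a text), a model is counted correct when it prefers the true statement over every incorrect variant:

```python
# Sketch of FACTOR-style scoring. Assumption: `log_prob(text)` is a
# hypothetical helper that returns an LM's log-likelihood for `text`.
def factor_accuracy(examples, log_prob):
    """examples: list of (true_statement, [similar_but_false_variants])."""
    correct = sum(
        all(log_prob(true) > log_prob(false) for false in falses)
        for true, falses in examples
    )
    return correct / len(examples)
```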
1 code implementation • 31 Jan 2023 • Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, have been shown to significantly improve language modeling performance.
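A minimal sketch of the in-context flavor of this idea (illustrative only: the toy term-overlap retriever and the `lm_generate` stub stand in for a real retriever and an off-the-shelf LM):

```python
# Prepend a retrieved grounding document to the LM's input, leaving the
# model itself unchanged. The retriever below is a toy term-overlap scorer.
corpus = [
    "Paris is the capital of France.",
    "The Nile is the longest river in Africa.",
]

def retrieve(query, docs):
    """Return the document sharing the most terms with the query."""
    q_terms = set(query.lower().split())
    return max(docs, key=lambda d: len(q_terms & set(d.lower().split())))

def lm_generate(prompt):
    raise NotImplementedError("plug in any off-the-shelf LM here")

prefix = "The capital of France is"
augmented_prompt = retrieve(prefix, corpus) + "\n\n" + prefix
# continuation = lm_generate(augmented_prompt)
```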
1 code implementation • 21 Dec 2022 • Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training.
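The core trick can be sketched as follows (simplified, with hypothetical names rather than the paper's code): each window reuses the same position ids, attention is blocked across windows, and task tokens appended at the end attend to all windows:

```python
def pcw_layout(context_tokens, window_size, task_len):
    """Assign position ids and a boolean attention mask for parallel windows."""
    windows = [context_tokens[i:i + window_size]
               for i in range(0, len(context_tokens), window_size)]
    position_ids, window_ids = [], []
    for w_idx, window in enumerate(windows):
        position_ids += list(range(len(window)))  # every window restarts at 0
        window_ids += [w_idx] * len(window)
    position_ids += list(range(window_size, window_size + task_len))
    window_ids += [-1] * task_len                 # -1 marks task tokens
    total = len(position_ids)
    # causal mask; context tokens stay inside their window, task tokens see all
    allowed = [[k <= q and (window_ids[q] == -1 or window_ids[q] == window_ids[k])
                for k in range(total)]
               for q in range(total)]
    return position_ids, allowed

pos, mask = pcw_layout(list(range(8)), window_size=4, task_len=2)
# pos -> [0, 1, 2, 3, 0, 1, 2, 3, 4, 5]: two windows share positions, task follows
```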
1 code implementation • 20 Dec 2022 • Ori Ram, Liat Bezalel, Adi Zicher, Yonatan Belinkov, Jonathan Berant, Amir Globerson
We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, showing that this significantly improves zero-shot performance over the original model, particularly on the BEIR benchmark.
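One way to picture such inference-time enrichment (a hedged sketch under assumed inputs, not the paper's exact procedure): mix the contextual vector with a bag of the input's own static token embeddings, reinstating surface-form information the encoder may have discarded:

```python
import numpy as np

def enrich(dense_vec, token_ids, embedding_table, alpha=0.5):
    """Mix a contextual vector with the mean static embedding of its tokens.

    `embedding_table` (vocab_size x dim) and the mixing weight `alpha` are
    illustrative assumptions; no retraining is involved.
    """
    lexical_vec = embedding_table[token_ids].mean(axis=0)
    return dense_vec + alpha * lexical_vec
```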
no code implementations • 21 Apr 2022 • Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
To demonstrate this, we introduce three novel methods for leveraging frozen models: input-dependent prompt tuning, frozen readers, and recursive LMs, each of which vastly improves on current frozen-model approaches.
1 code implementation • 30 Mar 2022 • Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy
Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings.
1 code implementation • NAACL 2022 • Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, Amir Globerson
Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs.
1 code implementation • 12 Aug 2021 • Or Castel, Ori Ram, Avia Efrat, Omer Levy
However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one.
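Both guarantees can be recovered by scoring spans directly, as in this hedged sketch (`span_log_prob` is an assumed helper returning the model's log-likelihood of emitting a candidate answer):

```python
def best_span(passage_tokens, span_log_prob, max_len=10):
    """Return the highest-scoring span (i, j), guaranteeing extractiveness."""
    spans = [(i, j)
             for i in range(len(passage_tokens))
             for j in range(i + 1, min(i + max_len, len(passage_tokens)) + 1)]
    return max(spans, key=lambda s: span_log_prob(passage_tokens[s[0]:s[1]]))
```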
4 code implementations • ACL 2021 • Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy
Given a passage with multiple sets of recurring spans, we mask all but one of the recurring spans in each set, and ask the model to select the correct span in the passage for each masked span.
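An illustrative toy version of this pretraining signal (simplified to single-token "spans"; names are not from the paper's code):

```python
from collections import defaultdict

MASK = "[MASK]"

def make_example(tokens):
    """Mask all but one occurrence of each recurring token; targets point back."""
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok].append(i)
    masked, targets = list(tokens), {}
    for occ in positions.values():
        if len(occ) < 2:
            continue              # only recurring spans participate
        keep = occ[0]             # one occurrence stays visible
        for i in occ[1:]:
            masked[i] = MASK
            targets[i] = keep     # the model must select the kept span
    return masked, targets

masked, targets = make_example("the cat sat and the cat slept".split())
# masked  -> ['the', 'cat', 'sat', 'and', '[MASK]', '[MASK]', 'slept']
# targets -> {4: 0, 5: 1}
```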
1 code implementation • ACL 2021 • Yuval Kirstain, Ori Ram, Omer Levy
The introduction of pretrained language models has reduced many complex task-specific NLP models to simple lightweight layers.
Ranked #5 on Coreference Resolution on CoNLL 2012
no code implementations • ACL 2020 • Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, Yoav Shoham
The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding.
Ranked #11 on Word Sense Disambiguation on Words in Context
2 code implementations • NAACL 2019 • Tal Schuster, Ori Ram, Regina Barzilay, Amir Globerson
We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion.
Cross-lingual zero-shot dependency parsing • Few-Shot Learning • +1