no code implementations • EMNLP 2021 • Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, Omri Abend
Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability.
Abstractive Text Summarization • Natural Language Inference • +3
no code implementations • 1 Feb 2024 • Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan Herzig, Or Honovich, Michael Tseng, Michael Collins, Roee Aharoni, Mor Geva
REVEAL includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a variety of datasets and state-of-the-art language models.
no code implementations • 12 May 2023 • Gal Yona, Or Honovich, Itay Laish, Roee Aharoni
We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and quantify the effect of different input perturbations.
3 code implementations • 19 Dec 2022 • Or Honovich, Thomas Scialom, Omer Levy, Timo Schick
We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth.
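The seed-and-elicit collection scheme described above can be illustrated with a minimal prompt-construction sketch. The template and example instructions below are assumptions for illustration only, not the paper's actual prompt:

```python
# Hedged sketch: build a prompt from three seed instructions so that a
# language model's completion yields a fourth, new instruction.
# The "Example N / Instruction:" template is a hypothetical format.

SEED_EXAMPLES = [
    "Example 1\nInstruction: Translate the following sentence into French.",
    "Example 2\nInstruction: Summarize the paragraph below in one sentence.",
    "Example 3\nInstruction: List three synonyms for the given word.",
]

def build_prompt(seeds):
    """Concatenate the seed instructions and open a fourth slot;
    whatever the model generates after 'Instruction:' becomes a
    candidate new example."""
    body = "\n\n".join(seeds)
    return body + "\n\nExample 4\nInstruction:"

prompt = build_prompt(SEED_EXAMPLES)
```

In the actual pipeline, this prompt would be sent to a language model and the completion harvested as a new instruction, repeated until the desired dataset size is reached.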
1 code implementation • 10 Nov 2022 • Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, Omri Abend
Question answering models commonly have access to two sources of "knowledge" during inference time: (1) parametric knowledge - the factual knowledge encoded in the model weights, and (2) contextual knowledge - external knowledge (e.g., a Wikipedia passage) given to the model to generate a grounded answer.
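The two knowledge sources contrasted above can be made concrete with a small prompt sketch: the same question is posed with and without an external passage. The template is a hypothetical illustration, not the paper's setup:

```python
# Hedged sketch: with a passage, the model can answer from contextual
# knowledge; without one, it must rely on parametric knowledge encoded
# in its weights. The prompt format here is an assumption.

def grounded_qa_prompt(question, passage=None):
    """Return a closed-book prompt (parametric knowledge only) when no
    passage is given, or a grounded prompt (contextual knowledge) when
    a passage is supplied."""
    if passage is None:
        return f"Question: {question}\nAnswer:"
    return f"Context: {passage}\nQuestion: {question}\nAnswer:"

closed = grounded_qa_prompt("Who wrote Hamlet?")
grounded = grounded_qa_prompt(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
)
```

Comparing a model's answers across the two prompt variants is one simple way to probe which knowledge source it actually uses.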
1 code implementation • 3 Nov 2022 • Avia Efrat, Or Honovich, Omer Levy
As the performance of large language models rapidly improves, benchmarks are getting larger and more complex as well.
1 code implementation • 22 May 2022 • Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
Large language models are able to perform a task by conditioning on a few input-output demonstrations - a paradigm known as in-context learning.
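The in-context learning paradigm described above amounts to formatting a few input-output demonstrations followed by an unanswered query. A minimal sketch, with a hypothetical "Input/Output" template assumed for illustration:

```python
# Hedged sketch of few-shot prompt construction for in-context learning.
# A language model conditioned on this string is expected to continue
# with the output for the final, unanswered input.

def icl_prompt(demonstrations, query):
    """Format (input, output) demonstration pairs, then append the
    query with an empty output slot for the model to fill."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

demos = [("great movie!", "positive"), ("boring plot", "negative")]
p = icl_prompt(demos, "loved the acting")
```

No weights are updated: the demonstrations steer the model's behavior purely through conditioning at inference time.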
1 code implementation • NAACL 2022 • Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias
Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability.
1 code implementation • 16 Apr 2021 • Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, Omri Abend
Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability.
1 code implementation • ACL 2020 • Or Honovich, Lucas Torroba Hennigen, Omri Abend, Shay B. Cohen
Machine reading is an ambitious goal in NLP that subsumes a wide range of text understanding capabilities.