no code implementations • EMNLP 2021 • Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, Omri Abend
Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability.
Abstractive Text Summarization
Natural Language Inference
1 code implementation • 22 May 2022 • Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
Large language models are able to perform a task by conditioning on a few input-output demonstrations, a paradigm known as in-context learning.
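As a rough, hypothetical illustration of the in-context learning setup this entry refers to (not code from the paper), the sketch below builds a few-shot prompt from input-output demonstrations; the build_prompt helper and the antonym demonstrations are assumptions for illustration only.

```python
# Minimal sketch of few-shot prompt construction for in-context learning.
# Hypothetical helper and data, not taken from the paper.

def build_prompt(demonstrations, query, input_label="Input", output_label="Output"):
    """Format input-output demonstrations followed by a new query,
    so a language model can infer the task from the examples alone."""
    lines = []
    for inp, out in demonstrations:
        lines.append(f"{input_label}: {inp}")
        lines.append(f"{output_label}: {out}")
        lines.append("")  # blank line between demonstrations
    lines.append(f"{input_label}: {query}")
    lines.append(f"{output_label}:")  # the model completes from here
    return "\n".join(lines)


if __name__ == "__main__":
    demos = [
        ("cold", "hot"),
        ("tall", "short"),
        ("fast", "slow"),
    ]
    # The resulting string would be passed to a language model as a prompt;
    # given the demonstrations, the model is expected to continue with the
    # antonym of "light".
    print(build_prompt(demos, "light"))
```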
1 code implementation • dialdoc (ACL) 2022 • Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias
Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability.
1 code implementation • ACL 2020 • Or Honovich, Lucas Torroba Hennigen, Omri Abend, Shay B. Cohen
Machine reading is an ambitious goal in NLP that subsumes a wide range of text understanding capabilities.