1 code implementation • EMNLP (ACL) 2021 • Eran Hirsch, Alon Eirew, Ori Shapira, Avi Caciularu, Arie Cattan, Ori Ernst, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Ido Dagan
We introduce iFacetSum, a web application for exploring topical document sets.
1 code implementation • 23 May 2022 • Ayal Klein, Eran Hirsch, Ron Eliav, Valentina Pyatkin, Avi Caciularu, Ido Dagan
Several recent works have suggested representing semantic relations with questions and answers, decomposing textual information into separate interrogative natural language statements.
2 code implementations • 24 Oct 2022 • Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, Ido Dagan
Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on targeted content and generating a coherent text conveying it.
1 code implementation • 24 May 2023 • Eran Hirsch, Valentina Pyatkin, Ruben Wolhandler, Avi Caciularu, Asi Shefer, Ido Dagan
In this paper, we suggest revisiting the sentence union generation task as an effective, well-defined testbed for assessing text consolidation capabilities, decoupling the consolidation challenge from subjective content selection.
1 code implementation • NeurIPS 2023 • Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, Gal Chechik
This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image.
1 code implementation • 13 Oct 2023 • Aviv Slobodkin, Avi Caciularu, Eran Hirsch, Ido Dagan
Further, we substantially improve the silver training data quality via GPT-4 distillation.
no code implementations • 18 Feb 2024 • Eran Hirsch, Guy Uziel, Ateret Anaby-Tavor
Planning is a fundamental task in artificial intelligence that involves finding a sequence of actions that achieves a specified goal in a given environment.
no code implementations • 25 Mar 2024 • Aviv Slobodkin, Eran Hirsch, Arie Cattan, Tal Schuster, Ido Dagan
Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources for post-generation fact-checking and corrections.