1 code implementation • 17 Dec 2024 • Alon Eirew, Eviatar Nachshoni, Aviv Slobodkin, Ido Dagan
Event relation detection is a fundamental NLP task that is leveraged in many downstream applications and whose modeling requires datasets annotated with event relations of various types.
no code implementations • 29 Oct 2024 • Royi Rassin, Aviv Slobodkin, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
GRADE leverages the world knowledge embedded in large language models and visual question-answering systems to identify relevant concept-specific axes of diversity (e.g., "shape" and "color" for the concept "cookie").
no code implementations • 8 Aug 2024 • Paul Roit, Aviv Slobodkin, Eran Hirsch, Arie Cattan, Ayal Klein, Valentina Pyatkin, Ido Dagan
Detecting the semantic arguments of a predicate word has conventionally been modeled as a sentence-level task.
no code implementations • 28 Jul 2024 • Nitzan Bitton-Guetta, Aviv Slobodkin, Aviya Maimon, Eliya Habba, Royi Rassin, Yonatan Bitton, Idan Szpektor, Amir Globerson, Yuval Elovici
To study these skills, we present Visual Riddles, a benchmark designed to test vision-and-language models on visual riddles that require commonsense and world knowledge.
no code implementations • 29 Jun 2024 • Omer Goldman, Alon Jacovi, Aviv Slobodkin, Aviya Maimon, Ido Dagan, Reut Tsarfaty
By using a descriptive vocabulary and discussing the relevant properties of difficulty in long-context settings, we can conduct more informed research in this area.
1 code implementation • 2 Jun 2024 • Ori Ernst, Ori Shapira, Aviv Slobodkin, Sharon Adar, Mohit Bansal, Jacob Goldberger, Ran Levy, Ido Dagan
Multi-document summarization (MDS) is a challenging task, often decomposed into the subtasks of salience and redundancy detection, followed by text generation.
1 code implementation • 25 Mar 2024 • Aviv Slobodkin, Eran Hirsch, Arie Cattan, Tal Schuster, Ido Dagan
Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources for post-generation fact-checking and corrections.
no code implementations • 22 Mar 2024 • Aviv Slobodkin, Ori Shapira, Ran Levy, Ido Dagan
This study lays the groundwork for further exploration of modular text generation in the multi-document setting, offering potential improvements in the quality and reliability of generated content.
1 code implementation • 18 Oct 2023 • Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel
In this paper, we explore the behavior of LLMs when presented with (un)answerable queries.
1 code implementation • 13 Oct 2023 • Aviv Slobodkin, Avi Caciularu, Eran Hirsch, Ido Dagan
Further, we substantially improve the silver training data quality via GPT-4 distillation.
no code implementations • 16 Aug 2023 • Aviv Slobodkin, Niv Nachum, Shmuel Amar, Ori Shapira, Ido Dagan
Current approaches to text summarization are predominantly automatic, leaving rather limited room for human intervention and control over the process.
2 code implementations • 24 Oct 2022 • Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, Ido Dagan
Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on targeted content and generating a coherent text conveying it.
no code implementations • *SEM (NAACL) 2022 • Aviv Slobodkin, Leshem Choshen, Omri Abend
We further show an additional gain, for some language pairs, when using both semantic and syntactic structures.
1 code implementation • NAACL 2021 • Aviv Slobodkin, Leshem Choshen, Omri Abend
Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize which parts of the network specialize in which tasks.