Natalie Shapira, Dana Atzil-Slonim, Daniel Juravski, Moran Baruch, Dana Stolowicz-Melman, Adar Paz, Tal Alfi-Yogev, Roy Azoulay, Adi Singer, Maayan Revivo, Chen Dahbash, Limor Dayan, Tamar Naim, Lidar Gez, Boaz Yanai, Adva Maman, Adam Nadaf, Elinor Sarfati, Amna Baloum, Tal Naor, Ephraim Mosenkis, Badreya Sarsour, Jany Gelfand Morgenshteyn, Yarden Elias, Liat Braun, Moria Rubin, Matan Kenigsbuch, Noa Bergwerk, Noam Yosef, Sivan Peled, Coral Avigdor, Rahav Obercyger, Rachel Mann, Tomer Alper, Inbal Beka, Ori Shapira, Yoav Goldberg
We introduce a large set of Hebrew lexicons pertaining to psychological aspects.
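As a rough illustration of how such lexicons are typically consumed downstream, here is a minimal Python sketch that counts lexicon-term hits per category in a text; the category names and terms below are hypothetical placeholders, not actual entries from the released lexicons.

```python
# Minimal sketch of lexicon-based feature extraction, assuming each lexicon
# is a plain set of terms. The categories and terms are hypothetical
# placeholders, not entries from the released Hebrew lexicons.
from collections import Counter

lexicons = {
    "positive_emotion": {"happy", "relieved", "hopeful"},   # hypothetical
    "negative_emotion": {"anxious", "sad", "worried"},      # hypothetical
}

def lexicon_counts(text: str) -> Counter:
    """Count how many tokens in `text` fall into each lexicon category."""
    tokens = text.lower().split()
    counts = Counter()
    for category, terms in lexicons.items():
        counts[category] = sum(1 for tok in tokens if tok in terms)
    return counts

print(lexicon_counts("I felt anxious at first but now I am hopeful"))
# e.g. Counter({'positive_emotion': 1, 'negative_emotion': 1})
```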
Interactive summarization is a task that facilitates user-guided exploration of information within a document set.
Current approaches for text summarization are predominantly automatic, with rather limited space for human intervention and control over the process.
Text clustering methods were traditionally incorporated into multi-document summarization (MDS) as a means for coping with considerable information repetition.
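To make the idea concrete, the sketch below shows a generic clustering-based redundancy-reduction step (plain TF-IDF plus k-means, not any specific paper's pipeline): near-duplicate sentences land in the same cluster, and only one representative per cluster is kept.

```python
# Generic redundancy reduction via sentence clustering, in the spirit of
# classic clustering-based MDS pipelines (illustrative only): embed
# sentences with TF-IDF, cluster them, keep one representative per cluster.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The storm hit the coast on Monday.",
    "A powerful storm struck the coastline Monday.",
    "Officials ordered evacuations in low-lying areas.",
    "Evacuation orders were issued for coastal towns.",
]

X = TfidfVectorizer().fit_transform(sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Keep the first sentence seen in each cluster as its representative.
representatives = {}
for sent, label in zip(sentences, labels):
    representatives.setdefault(label, sent)
print(list(representatives.values()))
```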
Keyphrase extraction has been extensively researched within the single-document setting, with an abundance of methods, datasets and applications.
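For readers unfamiliar with the task, a minimal unsupervised baseline looks like the following (a generic TF-IDF n-gram ranking, not a method from the paper); the background documents here are invented for the example.

```python
# Generic TF-IDF keyphrase-ranking baseline (illustrative only): candidate
# uni/bigrams from one document are scored against a tiny, invented
# background collection, and the top-scoring phrases are returned.
from sklearn.feature_extraction.text import TfidfVectorizer

background = [
    "Neural networks are trained with gradient descent.",
    "The committee discussed budget allocations for next year.",
]
document = ("Keyphrase extraction identifies salient phrases in a text; "
            "graph-based keyphrase extraction ranks candidate phrases.")

vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(background + [document])
scores = X[len(background)].toarray().ravel()   # TF-IDF row of `document`
terms = vec.get_feature_names_out()
top5 = sorted(zip(scores, terms), reverse=True)[:5]
print([term for score, term in top5 if score > 0])
```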
We introduce iFacetSum, a web application for exploring topical document sets.
In this paper, we develop an end-to-end evaluation framework for interactive summarization, focusing on expansion-based interaction, which considers the accumulating information along a user session.
Allowing users to interact with multi-document summarizers is a promising direction towards improving and customizing summary results.
Aligning sentences in a reference summary with their counterparts in the source documents has been shown to be a useful auxiliary summarization task, notably for generating training data for salience detection.
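A common baseline for such alignment is a greedy lexical-overlap match, sketched below (unigram Jaccard similarity; the paper's actual method and data are not reproduced here): each reference-summary sentence is paired with its most similar source sentence.

```python
# Greedy lexical-overlap alignment baseline (illustrative only): pair each
# summary sentence with the source sentence sharing the most unigrams.
import re

def tokens(s: str) -> set:
    return set(re.findall(r"\w+", s.lower()))

def overlap(a: str, b: str) -> float:
    """Unigram Jaccard similarity between two sentences."""
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / max(len(sa | sb), 1)

def align(summary_sents, source_sents):
    return [
        max(source_sents, key=lambda src: overlap(summ, src))
        for summ in summary_sents
    ]

source = ["The company reported record profits.",
          "Its stock rose sharply after the announcement.",
          "Analysts had expected weaker results."]
summary = ["Record profits sent the company's stock up sharply."]
print(align(summary, source))  # picks the first source sentence
```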
Product reviews summarization is a type of Multi-Document Summarization (MDS) task in which the summarized document sets are often far larger than in traditional MDS (up to tens of thousands of reviews).
Human evaluation experiments show that, compared to state-of-the-art supervised-learning systems and ROUGE-as-rewards RL summarisation systems, the RL systems trained with our learned rewards generate summaries with higher human ratings.
We show that plain ROUGE F1 scores are not ideal for comparing current neural systems, which on average produce summaries of different lengths.
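A toy example makes the length effect visible. Using a hand-rolled unigram ROUGE-1 (real evaluations use the official ROUGE toolkit), a concise candidate and one padded to twice the reference length can receive identical F1 scores:

```python
# Toy illustration of ROUGE's length sensitivity: simple unigram ROUGE-1
# computed by hand, not the official toolkit.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

reference = "the cat sat on the mat"
concise = "the cat sat"
padded = "the cat sat on the mat and then it slept all afternoon"

print(rouge1_f1(concise, reference))  # ~0.67: perfect precision, half recall
print(rouge1_f1(padded, reference))   # ~0.67: full recall despite padding
```

Both candidates score about 0.67 F1 even though one is a third of the reference's length and the other twice it, so systems with different average output lengths are not directly comparable under plain F1.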
Conducting a manual evaluation is considered an essential part of summary evaluation methodology.
We present a novel interactive summarization system that is based on abstractive summarization, derived from a recent consolidated knowledge representation for multiple texts.
We propose to move beyond Open Information Extraction (OIE) to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open, text-based manner.
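As a loose illustration of the intended direction (a hypothetical simplification, not the OKR specification from the paper), one can picture consolidating open extractions from several sentences into propositions whose arguments point at cross-text entity clusters:

```python
# Hypothetical, simplified sketch of consolidating open extractions across
# texts (NOT the paper's OKR specification): arguments resolve to shared
# entity clusters, so one proposition can be supported by many sentences.
from dataclasses import dataclass, field

@dataclass
class Entity:
    mentions: set = field(default_factory=set)  # surface forms across texts

@dataclass
class Proposition:
    predicate: str
    args: tuple    # Entity objects filling the argument slots
    sources: list  # ids of sentences supporting this proposition

storm = Entity({"the storm", "hurricane Ida"})
coast = Entity({"the coast", "the Gulf coastline"})

p = Proposition(predicate="hit", args=(storm, coast),
                sources=["doc1-s3", "doc4-s1"])
print(p.predicate, [a.mentions for a in p.args], p.sources)
```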