Search Results for author: Neslihan Iskender

Found 6 papers, 2 papers with code

Best Practices for Crowd-based Evaluation of German Summarization: Comparing Crowd, Expert and Automatic Evaluation

no code implementations • EMNLP (Eval4NLP) 2020 • Neslihan Iskender, Tim Polzehl, Sebastian Möller

The human assessment of summarization quality conducted by linguistic experts is slow, expensive, and still not a standardized procedure.

Informativeness

Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead

1 code implementation • EACL (HumEval) 2021 • Neslihan Iskender, Tim Polzehl, Sebastian Möller

Based on our empirical analysis, we provide guidelines to ensure the reliability of expert and non-expert evaluations, and we determine the factors that might affect the reliability of the human evaluation.

Text Summarization

Towards Hybrid Human-Machine Workflow for Natural Language Generation

no code implementations • EACL (HCINLP) 2021 • Neslihan Iskender, Tim Polzehl, Sebastian Möller

In recent years, crowdsourcing has gained much attention from researchers as a way to generate data for Natural Language Generation (NLG) tools or to evaluate them.

Text Generation

Does Summary Evaluation Survive Translation to Other Languages?

1 code implementation • NAACL 2022 • Spencer Braun, Oleg Vasilyev, Neslihan Iskender, John Bohannon

The creation of a quality summarization dataset is an expensive, time-consuming effort, requiring the production and evaluation of summaries by both trained humans and machines.

Machine Translation • Translation

Towards Human-Free Automatic Quality Evaluation of German Summarization

no code implementations • 13 May 2021 • Neslihan Iskender, Oleg Vasilyev, Tim Polzehl, John Bohannon, Sebastian Möller

Evaluating large summarization corpora using humans has proven to be expensive from both an organizational and a financial perspective.

Informativeness • Language Modelling

Towards a Reliable and Robust Methodology for Crowd-Based Subjective Quality Assessment of Query-Based Extractive Text Summarization

no code implementations • LREC 2020 • Neslihan Iskender, Tim Polzehl, Sebastian Möller

Intrinsic and extrinsic quality evaluation is an essential part of summary evaluation methodology, usually conducted in a traditional controlled laboratory environment.

Extractive Text Summarization • Informativeness
