1 code implementation • CLASP 2022 • Felix Morger, Stephanie Brandl, Lisa Beinborn, Nora Hollenstein
Relative word importance is a key metric for natural language processing.
no code implementations • NAACL (CMCL) 2021 • Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).
1 code implementation • ACL 2022 • Sidsel Boldsen, Manex Agirrezabal, Nora Hollenstein
Character-level information is included in many NLP models, but evaluating the information encoded in character representations is an open issue.
no code implementations • CMCL (ACL) 2022 • Nora Hollenstein, Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
We present the second shared task on eye-tracking data prediction of the Cognitive Modeling and Computational Linguistics Workshop (CMCL).
1 code implementation • 29 Feb 2024 • Stephanie Brandl, Oliver Eberle, Tiago Ribeiro, Anders Søgaard, Nora Hollenstein
Rationales in the form of manually annotated input spans usually serve as ground truth when evaluating explainability methods in NLP.
no code implementations • 31 Oct 2023 • Xinting Huang, Jiajing Wan, Ioannis Kritikos, Nora Hollenstein
Humans read texts at a varying pace, while machine learning models devote the same amount of computation to every token.
1 code implementation • 31 Mar 2023 • Tiago Ribeiro, Stephanie Brandl, Anders Søgaard, Nora Hollenstein
We present WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed as the first webcam-based eye-tracking corpus of reading to support the development of explainable computational language processing models.
1 code implementation • 24 Feb 2023 • Charlotte Pouw, Nora Hollenstein, Lisa Beinborn
When humans read a text, their eye movements are influenced by the structural complexity of the input sentences.
no code implementations • 11 Feb 2023 • Varun Khurana, Yaman Kumar Singla, Nora Hollenstein, Rajesh Kumar, Balaji Krishnamurthy
Feedback can be either explicit (e.g., rankings used in training language models) or implicit (e.g., human cognitive signals in the form of eye-tracking).
1 code implementation • 5 Oct 2022 • Stephanie Brandl, Nora Hollenstein
Human fixation patterns have been shown to correlate strongly with Transformer-based attention.
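A correlation of this kind is typically quantified with a rank correlation between per-token fixation durations and attention weights. The sketch below is purely illustrative (the numbers are made up, not from the paper) and implements Spearman correlation as the Pearson correlation of ranks:

```python
import numpy as np

# Illustrative per-token values (hypothetical): relative fixation durations
# from eye-tracking and attention weights from one Transformer head.
fixations = np.array([0.05, 0.30, 0.10, 0.40, 0.15])
attention = np.array([0.08, 0.25, 0.12, 0.35, 0.20])

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the rank vectors.
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

print(spearman(fixations, attention))  # 1.0 — the toy vectors are co-monotone
```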
1 code implementation • LREC 2022 • Nora Hollenstein, Maria Barrett, Marina Björnsdóttir
Corpora of eye movements during reading of contextualized running text are a way of making such records available for natural language processing purposes.
1 code implementation • LREC 2022 • Thórhildur Thorleiksdóttir, Cedric Renggli, Nora Hollenstein, Ce Zhang
Collecting human judgements is currently the most reliable evaluation method for natural language generation systems.
no code implementations • 12 Dec 2021 • Nora Hollenstein, Marius Tröndle, Martyna Plomecka, Samuel Kiegeland, Yilmazcan Özyurt, Lena A. Jäger, Nicolas Langer
The Zurich Cognitive Language Processing Corpus (ZuCo) provides eye-tracking and EEG signals from two reading paradigms, normal reading and task-specific reading.
1 code implementation • 30 Aug 2021 • Cedric Renggli, Luka Rimanic, Nora Hollenstein, Ce Zhang
The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution.
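For a discrete distribution, the BER has a closed form: it is one minus the expected probability of the most likely class, 1 − E_x[max_y P(y | x)]. A minimal sketch with hypothetical joint probabilities:

```python
import numpy as np

# Hypothetical joint distribution P(x, y): 3 inputs (rows) x 2 classes (cols).
# The numbers are illustrative, not from any dataset in the paper.
joint = np.array([
    [0.30, 0.10],   # x = 0
    [0.05, 0.25],   # x = 1
    [0.15, 0.15],   # x = 2
])

# The Bayes-optimal classifier predicts argmax_y P(y | x) for each x; the
# probability mass it misses is exactly the Bayes error rate.
ber = 1.0 - joint.max(axis=1).sum()
print(round(ber, 2))  # 0.3
```

No classifier, however expressive, can achieve an error below this value on that distribution.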
1 code implementation • ACL 2021 • Nora Hollenstein, Lisa Beinborn
In neural language models, gradient-based saliency methods indicate the relative importance of a token for the target objective.
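As a toy illustration of the idea (not the paper's method), consider a linear scorer over token embeddings: the plain gradient of the score with respect to each embedding is identical across tokens, so the common gradient × input variant is used to get token-level saliency. All names and values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # embedding dimension (toy)
tokens = ["the", "movie", "was", "great"]
E = rng.normal(size=(len(tokens), d))    # hypothetical token embeddings
w = rng.normal(size=d)                   # weights of a linear sentiment scorer

# score = sum_i w . e_i, so d(score)/d(e_i) = w for every token; the plain
# gradient is uninformative here, but gradient x input differs per token:
saliency = E @ w                         # s_i = e_i . d(score)/d(e_i)
ranking = [tokens[i] for i in np.argsort(-np.abs(saliency))]
print(ranking)                           # tokens ordered by |saliency|
```

In a real neural language model the gradient is obtained by backpropagation through the network rather than read off in closed form.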
1 code implementation • NAACL 2021 • Nora Hollenstein, Federico Pirovano, Ce Zhang, Lena Jäger, Lisa Beinborn
We analyze if large language models are able to predict patterns of human reading behavior.
no code implementations • 17 Feb 2021 • Nora Hollenstein, Cedric Renggli, Benjamin Glaus, Maria Barrett, Marius Troendle, Nicolas Langer, Ce Zhang
In this paper, we present the first large-scale study systematically analyzing the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial.
no code implementations • COLING 2020 • Nora Hollenstein, Adrian van der Lek, Ce Zhang
We demonstrate the functionalities of the new user interface for CogniVal.
no code implementations • 9 Jun 2020 • Lukas Muttenthaler, Nora Hollenstein, Maria Barrett
Cognitively inspired NLP leverages human-derived data to teach machines about language processing mechanisms.
no code implementations • LREC 2020 • Nora Hollenstein, Maria Barrett, Lisa Beinborn
NLP models are imperfect and lack intricate capabilities that humans access automatically when processing speech or reading a text.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Giuseppe Russo, Nora Hollenstein, Claudiu Musat, Ce Zhang
We introduce CGA, a conditional VAE architecture, to control, generate, and augment text.
no code implementations • LREC 2020 • Nora Hollenstein, Marius Troendle, Ce Zhang, Nicolas Langer
We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and electroencephalography during natural reading and during annotation.
1 code implementation • CONLL 2019 • Nora Hollenstein, Antonio de la Torre, Nicolas Langer, Ce Zhang
An interesting method of evaluating word representations is to measure how much they reflect the semantic representations in the human brain.
3 code implementations • 4 Apr 2019 • Nora Hollenstein, Maria Barrett, Marius Troendle, Francesco Bigiolli, Nicolas Langer, Ce Zhang
Cognitive language processing data such as eye-tracking features have shown improvements on single NLP tasks.
1 code implementation • NAACL 2019 • Nora Hollenstein, Ce Zhang
Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models.
1 code implementation • CONLL 2018 • Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, Anders Søgaard
Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior. In this paper, we show that human attention indeed provides a good inductive bias for many attention functions in NLP.
no code implementations • WS 2018 • Ivan Girardi, Pengfei Ji, An-phi Nguyen, Nora Hollenstein, Adam Ivankay, Lorenz Kuhn, Chiara Marchiori, Ce Zhang
In addition, a method to detect warning symptoms is implemented to render the classification task transparent from a medical perspective.
no code implementations • SEMEVAL 2018 • Jonathan Rotsztejn, Nora Hollenstein, Ce Zhang
Reliably detecting relevant relations between entities in unstructured text is a valuable resource for knowledge extraction, which is why it has attracted significant interest in the field of Natural Language Processing.
1 code implementation • LREC 2016 • Nora Hollenstein, Nathan Schneider, Bonnie Webber
Automatically finding these inconsistencies and correcting them (even manually) can increase the quality of the data.