Search Results for author: Nora Hollenstein

Found 31 papers, 16 papers with code

CMCL 2021 Shared Task on Eye-Tracking Prediction

no code implementations NAACL (CMCL) 2021 Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

The goal of the task is to predict five token-level eye-tracking metrics from the Zurich Cognitive Language Processing Corpus (ZuCo).

Interpreting Character Embeddings With Perceptual Representations: The Case of Shape, Sound, and Color

1 code implementation ACL 2022 Sidsel Boldsen, Manex Agirrezabal, Nora Hollenstein

Character-level information is included in many NLP models, but evaluating the information encoded in character representations is an open issue.

Longer Fixations, More Computation: Gaze-Guided Recurrent Neural Networks

no code implementations 31 Oct 2023 Xinting Huang, Jiajing Wan, Ioannis Kritikos, Nora Hollenstein

Humans read texts at a varying pace, while machine learning models devote the same amount of computation to every token.

Language Modelling Sentiment Analysis

WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset

1 code implementation 31 Mar 2023 Tiago Ribeiro, Stephanie Brandl, Anders Søgaard, Nora Hollenstein

We create WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed to support the development of fair and transparent NLP models.

Question Answering

Cross-Lingual Transfer of Cognitive Processing Complexity

1 code implementation 24 Feb 2023 Charlotte Pouw, Nora Hollenstein, Lisa Beinborn

When humans read a text, their eye movements are influenced by the structural complexity of the input sentences.

Cross-Lingual Transfer Sentence

Synthesizing Human Gaze Feedback for Improved NLP Performance

no code implementations 11 Feb 2023 Varun Khurana, Yaman Kumar Singla, Nora Hollenstein, Rajesh Kumar, Balaji Krishnamurthy

Feedback can be either explicit (e.g., rankings used in training language models) or implicit (e.g., human cognitive signals in the form of eye-tracking).

Every word counts: A multilingual analysis of individual human alignment with model attention

1 code implementation 5 Oct 2022 Stephanie Brandl, Nora Hollenstein

Human fixation patterns have been shown to correlate strongly with Transformer-based attention.

The Copenhagen Corpus of Eye Tracking Recordings from Natural Reading of Danish Texts

1 code implementation LREC 2022 Nora Hollenstein, Maria Barrett, Marina Björnsdóttir

Corpora of eye movements recorded during reading of contextualized running text are a way of making such records available for natural language processing purposes.

Dynamic Human Evaluation for Relative Model Comparisons

1 code implementation LREC 2022 Thórhildur Thorleiksdóttir, Cedric Renggli, Nora Hollenstein, Ce Zhang

Collecting human judgements is currently the most reliable evaluation method for natural language generation systems.

Text Generation

Reading Task Classification Using EEG and Eye-Tracking Data

no code implementations 12 Dec 2021 Nora Hollenstein, Marius Tröndle, Martyna Plomecka, Samuel Kiegeland, Yilmazcan Özyurt, Lena A. Jäger, Nicolas Langer

The Zurich Cognitive Language Processing Corpus (ZuCo) provides eye-tracking and EEG signals from two reading paradigms, normal reading and task-specific reading.

Classification EEG +2

Evaluating Bayes Error Estimators on Real-World Datasets with FeeBee

1 code implementation 30 Aug 2021 Cedric Renggli, Luka Rimanic, Nora Hollenstein, Ce Zhang

The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution.
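
As a reminder of the standard definition (not specific to FeeBee), the Bayes error rate of a joint distribution over inputs $x$ and labels $y$ is the irreducible error of the best possible classifier:

```latex
% Bayes error rate: one minus the expected probability of the
% most likely label given the input.
\[
  \mathrm{BER} \;=\; 1 \;-\; \mathbb{E}_{x \sim P_X}\!\left[\, \max_{y} \, P(y \mid x) \,\right]
\]
```

Any classifier's error on the distribution is lower-bounded by this quantity, which is why estimating it on real-world datasets is useful for judging how much headroom remains.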

Relative Importance in Sentence Processing

1 code implementation ACL 2021 Nora Hollenstein, Lisa Beinborn

In neural language models, gradient-based saliency methods indicate the relative importance of a token for the target objective.

Natural Language Understanding Sentence
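
A minimal toy sketch of the gradient-based saliency idea from the snippet above (an input-times-gradient variant, not the paper's implementation; the linear scorer, tokens, and random embeddings are invented for illustration):

```python
import numpy as np

# Hypothetical toy setup: a linear "relevance" scorer over token embeddings.
rng = np.random.default_rng(0)
d = 8                                   # embedding dimension
w = rng.normal(size=d)                  # scorer weights
tokens = ["the", "movie", "was", "great"]
E = rng.normal(size=(len(tokens), d))   # one embedding row per token

# score = sum_i w . e_i, so d(score)/d(e_i) = w for every token i.
grads = np.tile(w, (len(tokens), 1))

# Input-x-gradient saliency: norm of the gradient scaled by the embedding,
# giving one relative-importance value per token.
saliency = np.linalg.norm(grads * E, axis=1)
ranking = [tokens[i] for i in np.argsort(-saliency)]
print(ranking)
```

In a real neural language model the gradient differs per token and is obtained by backpropagation to the embedding layer; the per-token norm then serves as the relative-importance score compared against human fixation data.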

Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

no code implementations 17 Feb 2021 Nora Hollenstein, Cedric Renggli, Benjamin Glaus, Maria Barrett, Marius Troendle, Nicolas Langer, Ce Zhang

In this paper, we present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial.

BIG-bench Machine Learning EEG +3

Human brain activity for machine attention

no code implementations 9 Jun 2020 Lukas Muttenthaler, Nora Hollenstein, Maria Barrett

Cognitively inspired NLP leverages human-derived data to teach machines about language processing mechanisms.

Dimensionality Reduction EEG +2

ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation

no code implementations LREC 2020 Nora Hollenstein, Marius Troendle, Ce Zhang, Nicolas Langer

We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and electroencephalography during natural reading and during annotation.

CogniVal: A Framework for Cognitive Word Embedding Evaluation

1 code implementation CONLL 2019 Nora Hollenstein, Antonio de la Torre, Nicolas Langer, Ce Zhang

An interesting method of evaluating word representations is by how much they reflect the semantic representations in the human brain.

EEG Electroencephalogram (EEG) +1

Entity Recognition at First Sight: Improving NER with Eye Movement Information

1 code implementation NAACL 2019 Nora Hollenstein, Ce Zhang

Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models.

Named Entity Recognition +2

Sequence Classification with Human Attention

1 code implementation CONLL 2018 Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, Anders Søgaard

Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior; in this paper, we show that human attention indeed provides a good inductive bias for many attention functions in NLP.

Abusive Language Classification +4

ETH-DS3Lab at SemEval-2018 Task 7: Effectively Combining Recurrent and Convolutional Neural Networks for Relation Classification and Extraction

no code implementations SEMEVAL 2018 Jonathan Rotsztejn, Nora Hollenstein, Ce Zhang

Reliably detecting relevant relations between entities in unstructured text is a valuable resource for knowledge extraction, which is why it has attracted significant interest in the field of Natural Language Processing.

General Classification Relation Classification

Inconsistency Detection in Semantic Annotation

1 code implementation LREC 2016 Nora Hollenstein, Nathan Schneider, Bonnie Webber

Automatically finding such annotation inconsistencies and correcting them (even manually) can increase the quality of the data.
