Search Results for author: Mostafa Abdou

Found 23 papers, 7 papers with code

Word Order Does Matter and Shuffled Language Models Know It

no code implementations • ACL 2022 • Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, Anders Søgaard

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information.

Position • Segmentation +1

Mapping Brains with Language Models: A Survey

no code implementations • 8 Jun 2023 • Antonia Karamolegkou, Mostafa Abdou, Anders Søgaard

Over the years, many researchers have seemingly made the same observation: Brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models.

Language Modelling
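
To make the "linear partial mappings" above concrete, here is a minimal sketch of a ridge-regression encoding model from language-model features to neural recordings; the dimensions and random arrays are illustrative assumptions, not materials from the survey.

```python
# Sketch of a linear brain mapping: regularized regression from
# language-model activations to neural responses (e.g., fMRI voxels).
# The data below are random stand-ins, purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
lm_features = rng.normal(size=(200, 768))  # 200 stimuli x 768-dim LM features
responses = rng.normal(size=(200, 50))     # 200 stimuli x 50 voxels

# One linear map per voxel; cross-validated R^2 gauges mapping quality.
scores = cross_val_score(Ridge(alpha=1.0), lm_features, responses,
                         cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```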

Structural Similarities Between Language Models and Neural Response Measurements

1 code implementation • 2 Jun 2023 • Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard

Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases.

Brain Decoding

Word Order Does Matter (And Shuffled Language Models Know It)

no code implementations • 21 Mar 2022 • Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, Anders Søgaard

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information.

Position • Segmentation +1

Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide

no code implementations • 10 Mar 2022 • Mostafa Abdou

Recent advances in language modelling and in neuroimaging methodology promise potential improvements in both the investigation of language's neurobiology and in the building of better and more human-like language models.

Language Modelling

Do We Still Need Automatic Speech Recognition for Spoken Language Understanding?

no code implementations • 29 Nov 2021 • Lasse Borgholt, Jakob Drachmann Havtorn, Mostafa Abdou, Joakim Edin, Lars Maaløe, Anders Søgaard, Christian Igel

We compare learned speech features from wav2vec 2.0, state-of-the-art ASR transcripts, and the ground truth text as input for a novel speech-based named entity recognition task, a cardiac arrest detection task on real-world emergency calls, and two existing SLU benchmarks.

Ranked #7 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +8
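
As background for the "learned speech features" compared above, here is a hedged sketch of extracting wav2vec 2.0 representations with the transformers library; the checkpoint name and the random waveform are illustrative assumptions, and such features would feed a downstream SLU classifier.

```python
# Sketch: extract wav2vec 2.0 features that could feed an SLU classifier.
# The checkpoint name and the random waveform are illustrative assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base-960h"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)

waveform = torch.randn(16000).numpy()  # 1 second of fake 16 kHz audio
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, num_frames, 768)
print(features.shape)
```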

Do Language Models Know the Way to Rome?

no code implementations • EMNLP (BlackboxNLP) 2021 • Bastien Liétard, Mostafa Abdou, Anders Søgaard

The global geometry of language models is important for a range of applications, but language model probes tend to evaluate rather local relations, for which ground truths are easily obtained.

Language Modelling

Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color

no code implementations • CoNLL (EMNLP) 2021 • Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard

Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g., (Paris, Capital, France).
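
Relational knowledge of this kind is commonly probed with cloze queries; the sketch below uses a fill-mask pipeline, with model choice and prompt as illustrative assumptions rather than the paper's actual setup.

```python
# Cloze-style probe for relational knowledge, e.g., (Paris, Capital, France).
# Model and prompt are illustrative; the paper's own probing setup differs.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Paris is the capital of [MASK].", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```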

Does injecting linguistic structure into language models lead to better alignment with brain recordings?

no code implementations • 29 Jan 2021 • Mostafa Abdou, Ana Valeria Gonzalez, Mariya Toneva, Daniel Hershcovich, Anders Søgaard

We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.

Attention Can Reflect Syntactic Structure (If You Let It)

no code implementations • EACL 2021 • Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, Joakim Nivre

Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism.

Joint Semantic Analysis with Document-Level Cross-Task Coherence Rewards

1 code implementation • 12 Oct 2020 • Rahul Aralikatte, Mostafa Abdou, Heather Lent, Daniel Hershcovich, Anders Søgaard

Coreference resolution and semantic role labeling are NLP tasks that capture different aspects of semantics, indicating, respectively, which expressions refer to the same entity and what semantic roles expressions serve in the sentence.

Coreference Resolution • Natural Language Understanding +2

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations

2 code implementations • ACL 2020 • Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard

Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of common sense reasoning ability.

Common Sense Reasoning

Do Neural Language Models Show Preferences for Syntactic Formalisms?

no code implementations • ACL 2020 • Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre

Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces.

Compositional Generalization in Image Captioning

1 code implementation • CoNLL 2019 • Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, Desmond Elliott

Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts.

Image Captioning • Sentence

Higher-order Comparisons of Sentence Encoder Representations

no code implementations • IJCNLP 2019 • Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, Anders Søgaard

Representational Similarity Analysis (RSA) is a technique developed by neuroscientists for comparing activity patterns of different measurement modalities (e.g., fMRI, electrophysiology, behavior).

Sentence
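
RSA compares second-order structure: build a representational dissimilarity matrix (RDM) per modality over the same stimuli, then correlate the RDMs. Below is a minimal sketch on random placeholder data.

```python
# Representational Similarity Analysis (RSA): correlate the pairwise
# dissimilarity structure of two representation spaces over the same stimuli.
# The arrays are random stand-ins for, e.g., encoder states vs. fMRI patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
space_a = rng.normal(size=(30, 512))  # 30 stimuli in modality A
space_b = rng.normal(size=(30, 96))   # the same 30 stimuli in modality B

# pdist yields the condensed upper-triangle dissimilarity vector (the RDM).
rdm_a = pdist(space_a, metric="correlation")
rdm_b = pdist(space_b, metric="correlation")

rho, _ = spearmanr(rdm_a, rdm_b)
print(f"second-order (RSA) correlation: {rho:.3f}")
```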

X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension

1 code implementation • WS 2019 • Mostafa Abdou, Cezar Sas, Rahul Aralikatte, Isabelle Augenstein, Anders Søgaard

Although the vast majority of knowledge bases (KBs) are heavily biased towards English, Wikipedias do cover very different topics in different languages.

Reading Comprehension • Relation +1

Better, Faster, Stronger Sequence Tagging Constituent Parsers

2 code implementations • NAACL 2019 • David Vilares, Mostafa Abdou, Anders Søgaard

Combining these techniques, we clearly surpass the performance of sequence tagging constituent parsers on the English and Chinese Penn Treebanks, and reduce their parsing time even further.

Multi-Task Learning • Sentence

Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination

no code implementations • SemEval 2018 • Artur Kulmizev, Mostafa Abdou, Vinit Ravishankar, Malvina Nissim

We participated in the SemEval-2018 shared task on capturing discriminative attributes (Task 10) with a simple system that ranked 8th amongst the 26 teams that took part in the evaluation.
