1 code implementation • CoNLL (EMNLP) 2021 • Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, Dieuwke Hupkes
Inflectional morphology has long been a useful testing ground for broader questions about generalisation in language and the viability of neural network models as cognitive models of language.
no code implementations • 9 Aug 2024 • Verna Dankers, Ivan Titov
Memorisation is a natural part of learning from real-world data: neural models pick up on atypical input-output combinations and store those training examples in their parameter space.
1 code implementation • 20 Apr 2024 • Khuyagbaatar Batsuren, Ekaterina Vylomova, Verna Dankers, Tsetsuukhei Delgerbaatar, Omri Uzan, Yuval Pinter, Gábor Bella
Our empirical findings show that the accuracy of UniMorph Labeller is 98%, and that, in all language models studied (including ALBERT, BERT, RoBERTa, and DeBERTa), alien tokenization leads to poorer generalizations compared to morphological tokenization for semantic compositionality of word meanings.
1 code implementation • 16 Nov 2023 • Maike Züfle, Verna Dankers, Ivan Titov
We challenge hate speech models via new train-test splits of existing datasets that rely on the clustering of models' hidden representations.
no code implementations • 9 Nov 2023 • Verna Dankers, Ivan Titov, Dieuwke Hupkes
When training a neural network, it quickly memorises some source-target mappings from the dataset but never learns others.
1 code implementation • 31 Oct 2023 • Verna Dankers, Christopher G. Lucas
When natural language phrases are combined, their meaning is often more than the sum of their parts.
1 code implementation • 31 Jan 2023 • Verna Dankers, Ivan Titov
We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric.
no code implementations • 6 Oct 2022 • Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin
We present a taxonomy for characterising and understanding generalisation research in NLP.
no code implementations • 4 Oct 2022 • Daniel Simig, Tianlu Wang, Verna Dankers, Peter Henderson, Khuyagbaatar Batsuren, Dieuwke Hupkes, Mona Diab
In NLP, models are usually evaluated by reporting single-number performance scores on a number of readily available benchmarks, without much deeper analysis.
1 code implementation • ACL 2022 • Verna Dankers, Christopher G. Lucas, Ivan Titov
In this work, we investigate whether the non-compositionality of idioms is reflected in the mechanics of the dominant NMT model, Transformer, by analysing the hidden states and attention patterns of models with English as the source language and one of seven European languages as the target language.
1 code implementation • ACL 2022 • Verna Dankers, Elia Bruni, Dieuwke Hupkes
Obtaining human-like performance in NLP is often argued to require compositional generalisation.
1 code implementation • ACL 2022 • Anna Langedijk, Verna Dankers, Phillip Lippe, Sander Bos, Bryan Cardenas Guevara, Helen Yannakoudakis, Ekaterina Shutova
Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Pere-Lluís Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, Ekaterina Shutova
There has been an increased interest in modelling political discourse within the natural language processing (NLP) community, in tasks such as political bias and misinformation detection, among others.
no code implementations • WS 2020 • Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy, Ekaterina Shutova
Existing approaches to metaphor processing typically rely on local features, such as immediate lexico-syntactic contexts or information within a given sentence.
no code implementations • IJCNLP 2019 • Verna Dankers, Marek Rei, Martha Lewis, Ekaterina Shutova
Metaphors allow us to convey emotion by connecting physical experiences and abstract concepts.
1 code implementation • 22 Aug 2019 • Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni
Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional.
1 code implementation • WS 2019 • Kris Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni
While sequence-to-sequence models have shown remarkable generalisation power across several natural language tasks, the solutions they construct are argued to be less compositional than human-like generalisation.