Search Results for author: Verna Dankers

Found 17 papers, 11 papers with code

Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network

1 code implementation · CoNLL (EMNLP) 2021 · Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, Dieuwke Hupkes

Inflectional morphology has long been a useful testing ground for broader questions about generalisation in language and the viability of neural network models as cognitive models of language.

Generalisation First, Memorisation Second? Memorisation Localisation for Natural Language Classification Tasks

no code implementations · 9 Aug 2024 · Verna Dankers, Ivan Titov

Memorisation is a natural part of learning from real-world data: neural models pick up on atypical input-output combinations and store those training examples in their parameter space.

Image Classification

Evaluating Subword Tokenization: Alien Subword Composition and OOV Generalization Challenge

1 code implementation · 20 Apr 2024 · Khuyagbaatar Batsuren, Ekaterina Vylomova, Verna Dankers, Tsetsuukhei Delgerbaatar, Omri Uzan, Yuval Pinter, Gábor Bella

Our empirical findings show that the accuracy of UniMorph Labeller is 98%, and that, in all language models studied (including ALBERT, BERT, RoBERTa, and DeBERTa), alien tokenization leads to poorer generalizations compared to morphological tokenization for semantic compositionality of word meanings.

Text Classification

Latent Feature-based Data Splits to Improve Generalisation Evaluation: A Hate Speech Detection Case Study

1 code implementation · 16 Nov 2023 · Maike Züfle, Verna Dankers, Ivan Titov

We challenge hate speech models via new train-test splits of existing datasets that rely on the clustering of models' hidden representations.

Hate Speech Detection
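The split strategy described above can be sketched roughly as follows: cluster examples by their hidden representations and hold out whole clusters as the test set, so that train and test differ in latent features. The function name, cluster counts, and toy data here are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_split(hidden_reps, n_clusters=5, test_clusters=1, seed=0):
    """Cluster examples by their hidden representations, then hold out
    whole clusters as the test set (illustrative sketch, not the
    paper's exact procedure)."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(hidden_reps)
    rng = np.random.default_rng(seed)
    held_out = rng.choice(n_clusters, size=test_clusters, replace=False)
    test_mask = np.isin(labels, held_out)
    return np.where(~test_mask)[0], np.where(test_mask)[0]

# Toy usage: 200 fake 16-dimensional "hidden states"
reps = np.random.default_rng(0).normal(size=(200, 16))
train_idx, test_idx = cluster_based_split(reps)
```

Because entire clusters move to the test side, a model cannot rely on latent features it saw during training, which is what makes such splits a harder generalisation test than random splits.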

Memorisation Cartography: Mapping out the Memorisation-Generalisation Continuum in Neural Machine Translation

no code implementations · 9 Nov 2023 · Verna Dankers, Ivan Titov, Dieuwke Hupkes

While training, a neural network quickly memorises some source-target mappings from the dataset but never learns others.

counterfactual · Machine Translation

Non-Compositionality in Sentiment: New Data and Analyses

1 code implementation · 31 Oct 2023 · Verna Dankers, Christopher G. Lucas

When natural language phrases are combined, their meaning is often more than the sum of their parts.

Sentiment Analysis

Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality

1 code implementation · 31 Jan 2023 · Verna Dankers, Ivan Titov

We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric.

Sentiment Analysis · Sentiment Classification
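The comparison sketched in the abstract above can be pictured roughly as follows; this assumes same-dimensional representations from the two models, and the function name and the choice of cosine similarity are illustrative, not the paper's exact metric:

```python
import numpy as np

def representation_similarity(reps_full, reps_bottleneck):
    """Crude stand-in for comparing each input's representation in a
    model with vs. without a bottleneck: mean cosine similarity over
    matched rows. Higher similarity ~ less information removed by the
    bottleneck (illustrative sketch only)."""
    a = reps_full / np.linalg.norm(reps_full, axis=1, keepdims=True)
    b = reps_bottleneck / np.linalg.norm(reps_bottleneck, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```

In this toy form, identical representation matrices score 1.0 and anti-aligned ones score -1.0; the paper's actual metric is derived from the trained models themselves.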

Text Characterization Toolkit

no code implementations · 4 Oct 2022 · Daniel Simig, Tianlu Wang, Verna Dankers, Peter Henderson, Khuyagbaatar Batsuren, Dieuwke Hupkes, Mona Diab

In NLP, models are usually evaluated by reporting single-number performance scores on a number of readily available benchmarks, without much deeper analysis.

Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation

1 code implementation · ACL 2022 · Verna Dankers, Christopher G. Lucas, Ivan Titov

In this work, we investigate whether the non-compositionality of idioms is reflected in the mechanics of the dominant NMT model, Transformer, by analysing the hidden states and attention patterns of models with English as the source language and one of seven European languages as the target language.

Machine Translation · NMT
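One simple probe in the spirit of the attention analysis described above: measure how much attention mass each target token places on a given source span (e.g. an idiom). The function name and toy matrix are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def attention_into_span(attn, span):
    """Fraction of attention mass each target position places on a
    source span. `attn` is a (tgt_len, src_len) matrix with rows that
    sum to 1; `span` is (start, end) over source positions, end
    exclusive. Returns one value per target token."""
    lo, hi = span
    return attn[:, lo:hi].sum(axis=1)

# Toy usage: uniform attention over 8 source tokens, "idiom" at positions 2-4
attn = np.full((4, 8), 1 / 8)
mass = attention_into_span(attn, (2, 5))  # 3 of 8 columns -> 3/8 per target token
```

Comparing this quantity for idiomatic versus literal spans is one way such an analysis can quantify whether the model treats an idiom as a unit.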

Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing

1 code implementation · ACL 2022 · Anna Langedijk, Verna Dankers, Phillip Lippe, Sander Bos, Bryan Cardenas Guevara, Helen Yannakoudakis, Ekaterina Shutova

Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks.

Dependency Parsing · Few-Shot Learning
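The "learning to learn" idea above can be sketched with a first-order MAML-style update on a toy linear model; this is a generic illustration of meta-learning, not the paper's architecture or parser, and all names and learning rates are assumptions:

```python
import numpy as np

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    """One first-order MAML-style meta-update on a toy linear regressor.
    Each task is (X_support, y_support, X_query, y_query): the model
    adapts to the support set, then the meta-gradient comes from the
    adapted model's loss on the query set."""
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        # Inner step: adapt to this task on its support set.
        g_inner = 2 * Xs.T @ (Xs @ w - ys) / len(ys)
        w_task = w - inner_lr * g_inner
        # Outer gradient: query-set loss of the adapted weights
        # (first-order approximation: second derivatives are ignored).
        meta_grad += 2 * Xq.T @ (Xq @ w_task - yq) / len(yq)
    return w - outer_lr * meta_grad / len(tasks)
```

The point of the meta-objective is that the returned initialisation `w` is good to *adapt from*, which mirrors how a parser meta-trained on high-resource languages can adapt quickly to a new language from a few examples.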

The Pragmatics behind Politics: Modelling Metaphor, Framing and Emotion in Political Discourse

1 code implementation · Findings of the Association for Computational Linguistics 2020 · Pere-Lluís Huguet Cabot, Verna Dankers, David Abadi, Agneta Fischer, Ekaterina Shutova

There has been an increased interest in modelling political discourse within the natural language processing (NLP) community, in tasks such as political bias and misinformation detection, among others.

Misinformation

Being neighbourly: Neural metaphor identification in discourse

no code implementations · WS 2020 · Verna Dankers, Karan Malhotra, Gaurav Kudva, Volodymyr Medentsiy, Ekaterina Shutova

Existing approaches to metaphor processing typically rely on local features, such as immediate lexico-syntactic contexts or information within a given sentence.

POS · Sentence

Compositionality decomposed: how do neural networks generalise?

1 code implementation · 22 Aug 2019 · Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni

Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional.

Transcoding compositionally: using attention to find more generalizable solutions

1 code implementation · WS 2019 · Kris Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni

While sequence-to-sequence models have shown remarkable generalization power across several natural language tasks, the solutions they construct are argued to be less compositional than human-like generalization.

Decoder
