Search Results for author: Laura Kallmeyer

Found 45 papers, 12 papers with code

CogALex-V Shared Task: GHHH - Detecting Semantic Relations via Word Embeddings

no code implementations WS 2016 Mohammed Attia, Suraj Maharjan, Younes Samih, Laura Kallmeyer, Thamar Solorio

The evaluation results of our system on the test set are 88.1% (79.0% for TRUE only) F-measure for Task-1 on detecting semantic similarity, and 76.0% (42.3% when excluding RANDOM) for Task-2 on identifying finer-grained semantic relations.

Binary Classification General Classification +7
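The entry above gives no implementation details; purely as an illustration of the general idea (supervised relation classification over pretrained word embeddings), the sketch below builds pair features from the concatenation and difference of two word vectors and trains a scikit-learn classifier. The embedding file, feature choice, and classifier are assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: classify the semantic relation of a word pair
# from pretrained word embeddings (not the GHHH system's exact pipeline).
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

# Assumes a word2vec-format embedding file is available locally.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def pair_features(w1, w2):
    v1, v2 = vectors[w1], vectors[w2]
    # Concatenation plus difference is a common, simple pair representation.
    return np.concatenate([v1, v2, v1 - v2])

# Toy training data: (word1, word2, relation label).
train = [("car", "vehicle", "HYPER"), ("car", "wheel", "PART_OF"),
         ("car", "banana", "RANDOM"), ("dog", "animal", "HYPER")]
X = np.stack([pair_features(a, b) for a, b, _ in train])
y = [label for _, _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([pair_features("cat", "animal")]))
```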

A Neural Architecture for Dialectal Arabic Segmentation

no code implementations WS 2017 Younes Samih, Mohammed Attia, Mohamed Eldesouki, Ahmed Abdelali, Hamdy Mubarak, Laura Kallmeyer, Kareem Darwish

The automated processing of Arabic Dialects is challenging due to the lack of spelling standards and to the scarcity of annotated data and resources in general.

Machine Translation Morphological Analysis +2

Sketching Word Vectors Through Hashing

1 code implementation11 May 2017 Behrang QasemiZadeh, Laura Kallmeyer

We propose a new fast word embedding technique using hash functions.
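The snippet does not describe the method itself; as an illustration of the general idea of building word vectors with hash functions, the sketch below accumulates hashed, signed context counts into fixed-width vectors without ever materializing a co-occurrence matrix. This is a feature-hashing-style construction assumed for illustration, not necessarily the paper's algorithm.

```python
# Minimal sketch of hashing-based word vectors (an assumption about the
# general technique, not the paper's exact method).
import hashlib
from collections import defaultdict
import numpy as np

DIM = 256     # fixed vector width
WINDOW = 2    # context window size

def bucket(word, seed=0):
    """Hash a context word to a dimension index and a +/-1 sign."""
    h = int(hashlib.md5(f"{seed}:{word}".encode()).hexdigest(), 16)
    return h % DIM, 1 if (h >> 1) % 2 == 0 else -1

def sketch_vectors(sentences):
    vecs = defaultdict(lambda: np.zeros(DIM))
    for sent in sentences:
        for i, word in enumerate(sent):
            for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
                if i == j:
                    continue
                idx, sign = bucket(sent[j])
                vecs[word][idx] += sign   # update in place, no count matrix
    return vecs

vecs = sketch_vectors([["the", "cat", "sat", "on", "the", "mat"],
                       ["the", "dog", "sat", "on", "the", "rug"]])
print(vecs["cat"][:8])
```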

Learning from Relatives: Unified Dialectal Arabic Segmentation

no code implementations CONLL 2017 Younes Samih, Mohamed Eldesouki, Mohammed Attia, Kareem Darwish, Ahmed Abdelali, Hamdy Mubarak, Laura Kallmeyer

Arabic dialects do not just share a common koiné, but there are shared pan-dialectal linguistic phenomena that allow computational models for dialects to learn from each other.

Dialect Identification Information Retrieval +2

HHU at SemEval-2017 Task 2: Fast Hash-Based Embeddings for Semantic Word Similarity Assessment

no code implementations SEMEVAL 2017 Behrang QasemiZadeh, Laura Kallmeyer

This paper describes the HHU system that participated in Task 2 of SemEval 2017, Multilingual and Cross-lingual Semantic Word Similarity.

Learning Word Embeddings Semantic Textual Similarity +2

TRAPACC and TRAPACCS at PARSEME Shared Task 2018: Neural Transition Tagging of Verbal Multiword Expressions

no code implementations COLING 2018 Regina Stodden, Behrang Qasemizadeh, Laura Kallmeyer

We describe the TRAPACC system and its variant TRAPACCS that participated in the closed track of the PARSEME Shared Task 2018 on labeling verbal multiword expressions (VMWEs).

Dimensionality Reduction Lexical Analysis

Towards a Compositional Analysis of German Light Verb Constructions (LVCs) Combining Lexicalized Tree Adjoining Grammar (LTAG) with Frame Semantics

no code implementations WS 2019 Jens Fleischhauer, Thomas Gamerschlag, Laura Kallmeyer, Simon Petitjean

Complex predicates formed of a semantically 'light' verbal head and a noun or verb which contributes the major part of the meaning are frequently referred to as 'light verb constructions' (LVCs).

Semantic Composition

SemEval-2019 Task 2: Unsupervised Lexical Frame Induction

no code implementations SEMEVAL 2019 Behrang QasemiZadeh, Miriam R. L. Petruck, Regina Stodden, Laura Kallmeyer, Marie Candito

This paper presents Unsupervised Lexical Frame Induction, Task 2 of the International Workshop on Semantic Evaluation in 2019.

Clustering Task 2

A Neural Graph-based Approach to Verbal MWE Identification

1 code implementation WS 2019 Jakub Waszczuk, Rafael Ehren, Regina Stodden, Laura Kallmeyer

We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach.
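The snippet names a neural graph parsing-based approach without further detail; as a hedged sketch of one common building block for scoring arcs between tokens (a biaffine scorer, an assumption about the kind of component involved rather than a description of the authors' model), one might write:

```python
# Hypothetical biaffine arc scorer, a typical component of neural
# graph-based parsers (illustrative only, not the paper's model).
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        # The extra row gives the head representation a bias term.
        self.U = nn.Parameter(torch.empty(hidden_dim + 1, hidden_dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, heads, deps):
        # heads, deps: (batch, seq_len, hidden_dim) contextual token vectors
        ones = torch.ones(*heads.shape[:2], 1, device=heads.device)
        heads = torch.cat([heads, ones], dim=-1)
        # scores[b, i, j]: score of an arc from token i to token j
        return heads @ self.U @ deps.transpose(1, 2)

scorer = BiaffineArcScorer(hidden_dim=64)
h = torch.randn(2, 10, 64)
print(scorer(h, h).shape)   # torch.Size([2, 10, 10])
```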

A multi-lingual and cross-domain analysis of features for text simplification

no code implementations LREC 2020 Regina Stodden, Laura Kallmeyer

In text simplification and readability research, several features have been proposed to estimate or simplify a complex text, e.g., readability scores, sentence length, or proportion of POS tags.

POS Sentence +1
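As a hedged illustration of the kind of features listed above (sentence length, POS-tag proportions, simple lexical statistics), the sketch below computes a few of them with spaCy; the specific feature definitions are assumptions, not the feature set analyzed in the paper.

```python
# Hypothetical feature extraction for readability/simplification analysis.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def simple_features(sentence):
    doc = nlp(sentence)
    tokens = [t for t in doc if not t.is_space]
    n = len(tokens)
    pos_counts = Counter(t.pos_ for t in tokens)
    return {
        "sentence_length": n,
        "avg_word_length": sum(len(t.text) for t in tokens) / max(n, 1),
        # Proportions of content-word POS tags as crude complexity proxies.
        "prop_noun": pos_counts["NOUN"] / max(n, 1),
        "prop_verb": pos_counts["VERB"] / max(n, 1),
    }

print(simple_features("The committee postponed the ratification of the treaty."))
```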

Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture

no code implementations WS 2020 Rafael Ehren, Timm Lichte, Laura Kallmeyer, Jakub Waszczuk

Supervised disambiguation of verbal idioms (VID) poses special demands on the quality and quantity of the annotated data used for learning and evaluation.

Statistical Parsing of Tree Wrapping Grammars

1 code implementation COLING 2020 Tatiana Bladier, Jakub Waszczuk, Laura Kallmeyer

We describe an approach to statistical parsing with Tree-Wrapping Grammars (TWG).

Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation

no code implementations COLING 2020 Esther Seyffarth, Laura Kallmeyer

We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causative-inchoative alternation, relying on count-based and vector-based features as well as perplexity-based language model features, which are intended to reflect each alternation's felicity by simulating it.

Language Modelling
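The snippet above mentions perplexity-based language model features that reflect an alternation's felicity. As a sketch, assuming a pretrained causal LM such as GPT-2 for scoring (the choice of LM and scoring procedure are assumptions, not the paper's setup), one could compute per-sentence perplexity like this:

```python
# Hedged sketch: score a sentence's perplexity with a pretrained causal LM.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def perplexity(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    # The model's loss is the mean token-level cross-entropy.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Simulating an alternation amounts to scoring both variants of the frame:
print(perplexity("The janitor opened the door."))
print(perplexity("The door opened."))
```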

Probing for Constituency Structure in Neural Language Models

1 code implementation13 Apr 2022 David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad

We find that 4 pretrained transformer LMs obtain high performance on our probing tasks even on manipulated data, suggesting that semantic and syntactic knowledge in their representations can be separated and that constituency information is in fact learned by the LM.
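As a minimal sketch of a probing setup in the spirit described above (frozen transformer representations plus a simple linear probe; the toy task and labels are assumptions, and the paper's actual probing tasks are more involved), one could do:

```python
# Minimal probing sketch: a linear classifier on frozen BERT token vectors.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

@torch.no_grad()
def token_vectors(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    hidden = model(**enc).last_hidden_state[0]   # (seq_len, hidden_dim)
    return enc, hidden

# Toy probe: does a token start a noun phrase? (hypothetical labels)
sentences = ["The old dog slept .", "A tall tree fell ."]
labels    = [[1, 0, 0, 0, 0],       [1, 0, 0, 0, 0]]

X, y = [], []
for sent, labs in zip(sentences, labels):
    enc, hidden = token_vectors(sent)
    for tok_idx, w_id in enumerate(enc.word_ids()):
        if w_id is not None:                     # skip [CLS]/[SEP]
            X.append(hidden[tok_idx].numpy())
            y.append(labs[w_id])

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))
```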

Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building

1 code implementation31 Oct 2023 Omar Momen, David Arps, Laura Kallmeyer

In this paper, we describe our submission to the BabyLM Challenge 2023 shared task on data-efficient language model (LM) pretraining (Warstadt et al., 2023).

Language Modelling Sentence

Multilingual Nonce Dependency Treebanks: Understanding how LLMs represent and process syntactic structure

no code implementations13 Nov 2023 David Arps, Laura Kallmeyer, Younes Samih, Hassan Sajjad

We replicate the findings of Müller-Eberstein et al. (2022) on nonce test data and show that the performance declines on both MLMs and ALMs wrt.

Dissecting Paraphrases: The Impact of Prompt Syntax and Supplementary Information on Knowledge Retrieval from Pretrained Language Models

no code implementations2 Apr 2024 Stephan Linzbach, Dimitar Dimitrov, Laura Kallmeyer, Kilian Evang, Hajira Jabeen, Stefan Dietze

Typically, designing these prompts is a tedious task because small differences in syntax or semantics can have a substantial impact on knowledge retrieval performance.

Retrieval

An Analysis of Attention in German Verbal Idiom Disambiguation

1 code implementation LREC (MWE) 2022 Rafael Ehren, Laura Kallmeyer, Timm Lichte

In this paper, we examine a BiLSTM architecture for disambiguating verbal potentially idiomatic expressions (PIEs), i.e., deciding whether they are used in a literal or an idiomatic reading, with respect to the explainability of its decisions.

POS Sentence
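As a rough sketch of the class of model discussed above (a BiLSTM over token embeddings followed by a binary literal/idiomatic classifier; dimensions, pooling, and inputs are illustrative assumptions, not the paper's architecture), one could write:

```python
# Hedged sketch of a BiLSTM classifier for literal vs. idiomatic readings.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))   # (batch, seq, 2*hidden)
        pooled, _ = states.max(dim=1)                  # max-pool over time
        return self.out(pooled)                        # literal vs. idiomatic logits

model = BiLSTMClassifier(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (4, 12)))      # a dummy batch
print(logits.shape)                                    # torch.Size([4, 2])
```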

Implicit representations of event properties within contextual language models: Searching for “causativity neurons”

1 code implementation IWCS (ACL) 2021 Esther Seyffarth, Younes Samih, Laura Kallmeyer, Hassan Sajjad

This paper addresses the question of the extent to which neural contextual language models such as BERT implicitly represent complex semantic properties.

Sentence

RRGparbank: A Parallel Role and Reference Grammar Treebank

1 code implementation LREC 2022 Tatiana Bladier, Kilian Evang, Valeria Generalova, Zahra Ghane, Laura Kallmeyer, Robin Möllemann, Natalia Moors, Rainer Osswald, Simon Petitjean

This paper describes the first release of RRGparbank, a multilingual parallel treebank for Role and Reference Grammar (RRG) containing annotations of George Orwell’s novel 1984 and its translations.

Improving Low-resource RRG Parsing with Cross-lingual Self-training

no code implementations COLING 2022 Kilian Evang, Laura Kallmeyer, Jakub Waszczuk, Kilu von Prince, Tatiana Bladier, Simon Petitjean

Starting from an existing RRG parser, we propose two strategies for low-resource parsing: first, we extend the parsing model into a cross-lingual parser, exploiting the parallel data in the high-resource language and unsupervised word alignments by providing internal states of the source-language parser to the target-language parser.

Constituency Parsing
