no code implementations • 10 Jun 2024 • Martin Courtois, Malte Ostendorff, Leonhard Hennig, Georg Rehm
In this work, we propose an alternative compatibility function for the self-attention mechanism introduced by the Transformer architecture.
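For context, the compatibility function the paper proposes an alternative to is the standard scaled dot product of Transformer self-attention (Vaswani et al., 2017). A minimal NumPy sketch of that baseline (not the alternative proposed in this work):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer compatibility: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # compatibility matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

Q = np.random.randn(4, 8)   # 4 query positions, model dim 8
K = np.random.randn(4, 8)
V = np.random.randn(4, 8)
out = scaled_dot_product_attention(Q, K, V)
```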
1 code implementation • 16 May 2024 • Arne Binder, Leonhard Hennig, Christoph Alt
The objective of Information Extraction (IE) is to derive structured representations from unstructured or semi-structured documents.
1 code implementation • 23 Jan 2024 • Qianli Wang, Tatiana Anikina, Nils Feldhus, Josef van Genabith, Leonhard Hennig, Sebastian Möller
Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users' understanding (Slack et al., 2023; Shen et al., 2023), as one-off explanations may fall short in providing sufficient information to the user.
no code implementations • 17 Aug 2023 • Mohammed Bin Sumait, Aleksandra Gabryszak, Leonhard Hennig, Roland Roller
Factuality can play an important role when automatically processing clinical text, as it makes a difference if particular symptoms are explicitly not present, possibly present, not mentioned, or affirmed.
1 code implementation • 8 May 2023 • Leonhard Hennig, Philippe Thomas, Sebastian Möller
Relation extraction (RE) is a fundamental task in information extraction, whose extension to multilingual settings has been hindered by the lack of supervised resources comparable in size to large English datasets such as TACRED (Zhang et al., 2017).
1 code implementation • 25 Oct 2022 • Yuxuan Chen, David Harbecke, Leonhard Hennig
Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low data regimes.
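To illustrate the prompting paradigm in general (the pattern and label words below are hypothetical, not taken from this paper): a cloze pattern reformulates classification as masked-token prediction, and a verbalizer maps label words back to classes.

```python
# Hypothetical PET-style pattern and verbalizer; the actual prompts
# used in the paper may differ.
PATTERN = "{text} All in all, it was [MASK]."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def build_prompt(text: str) -> str:
    """Wrap the input in a cloze pattern for a masked language model."""
    return PATTERN.format(text=text)

def predict(label_word_scores: dict) -> str:
    """Map the MLM's scores for the label words at [MASK] back to a class."""
    best_word = max(label_word_scores, key=label_word_scores.get)
    return VERBALIZER[best_word]

prompt = build_prompt("The movie was fun.")
# Suppose a masked LM scored the label words at the [MASK] position:
label = predict({"great": 0.83, "terrible": 0.02})
```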
1 code implementation • 24 Oct 2022 • Arne Binder, Bhuvanesh Verma, Leonhard Hennig
In this work, we introduce a sequential pipeline model combining ADUR and ARE for full-text SAM, and provide a first analysis of the performance of pretrained language models (PLMs) on both subtasks.
no code implementations • 14 Oct 2022 • Abdel Aziz Taha, Leonhard Hennig, Petr Knoth
In this paper, we propose novel methods that, given a neural network classification model, estimate the uncertainty of particular predictions generated by this model.
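One common baseline for prediction-level uncertainty (shown here only for context; not necessarily the method proposed in this paper) is the entropy of the model's softmax output:

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher means more uncertain."""
    z = logits - np.max(logits)                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

confident = predictive_entropy(np.array([8.0, 0.1, 0.1]))  # peaked output
uncertain = predictive_entropy(np.array([1.0, 1.0, 1.0]))  # uniform output
```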
1 code implementation • 13 Oct 2022 • Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
Saliency maps can explain a neural model's predictions by identifying important input features.
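As a minimal sketch of how a saliency score is derived, consider gradient-times-input for a linear scorer, the simplest case that neural saliency methods generalize (illustration only, not the paper's method):

```python
import numpy as np

def gradient_x_input_saliency(w, x):
    """For a linear score f(x) = w @ x, the gradient w.r.t. x is w itself,
    so gradient-times-input saliency is |w * x| elementwise."""
    return np.abs(w * x)

w = np.array([2.0, -0.5, 0.0])   # model weights
x = np.array([1.0, 4.0, 3.0])    # input features
sal = gradient_x_input_saliency(w, x)  # feature importance scores
```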
1 code implementation • nlppower (ACL) 2022 • David Harbecke, Yuxuan Chen, Leonhard Hennig, Christoph Alt
Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC.
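The choice of measure matters: micro-F1 weights every instance equally, while macro-F1 averages per-class F1 scores, so the two diverge on imbalanced data. A hand-computed sketch with toy counts:

```python
def f1(tp, fp, fn):
    """F1 from true positive, false positive, and false negative counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

# Per-class (tp, fp, fn) counts for an imbalanced two-class toy problem
counts = {"A": (8, 1, 1), "B": (1, 1, 1)}

# Macro-F1: average of per-class F1 scores
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro-F1: F1 over the pooled counts of all classes
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
micro_f1 = f1(tp, fp, fn)
```

Here micro-F1 (9/11 ≈ 0.818) exceeds macro-F1 (≈ 0.694) because the frequent class A dominates the pooled counts.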
1 code implementation • RepL4NLP (ACL) 2022 • Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig
Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data.
1 code implementation • KONVENS (WS) 2021 • Leonhard Hennig, Phuc Tran Truong, Aleksandra Gabryszak
We present MobIE, a German-language dataset, which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities.
1 code implementation • SEMEVAL 2020 • Marc Hübner, Christoph Alt, Robert Schwarzenberg, Leonhard Hennig
Definition Extraction systems are a valuable knowledge source for both humans and algorithms.
no code implementations • WS 2020 • Hanchu Zhang, Leonhard Hennig, Christoph Alt, Changjian Hu, Yao Meng, Chao Wang
Named Entity Recognition (NER) in domains like e-commerce is an understudied problem due to the lack of annotated datasets.
1 code implementation • ACL 2020 • Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig
TACRED (Zhang et al., 2017) is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE).
2 code implementations • ACL 2020 • Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig
Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models.
1 code implementation • 8 Apr 2020 • Johannes Kirschnick, Philippe Thomas, Roland Roller, Leonhard Hennig
Recent years have seen strong growth in the biomedical sciences and a corresponding increase in publication volume.
no code implementations • LREC 2018 • Martin Schiersch, Veselina Mironova, Maximilian Schmitt, Philippe Thomas, Aleksandra Gabryszak, Leonhard Hennig
Monitoring mobility- and industry-relevant events is important in areas such as personal travel planning and supply chain management, but extracting events pertaining to specific companies, transit routes and locations from heterogeneous, high-volume text streams remains a significant challenge.
no code implementations • LREC 2018 • Saskia Schön, Veselina Mironova, Aleksandra Gabryszak, Leonhard Hennig
Recognizing non-standard entity types and relations, such as B2B products, product classes and their producers, in news and forum texts is important in application areas such as supply chain monitoring and market research.
1 code implementation • LREC 2020 • Dmitrii Aksenov, Julián Moreno-Schneider, Peter Bourgonje, Robert Schwarzenberg, Leonhard Hennig, Georg Rehm
The results of our models are compared to a baseline and the state-of-the-art models on the CNN/Daily Mail dataset.
1 code implementation • WS 2019 • Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig
Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain.
1 code implementation • ACL 2019 • Christoph Alt, Marc Hübner, Leonhard Hennig
Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels.
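For context, the distant supervision heuristic labels every sentence that mentions both entities of a knowledge-base triple with that triple's relation, which is exactly what introduces the noisy labels the paper addresses. A toy sketch with a hypothetical KB and sentences:

```python
# Hypothetical KB triples and sentences, purely for illustration.
kb = {("Alice", "Acme"): "works_for", ("Acme", "Berlin"): "based_in"}

sentences = [
    "Alice joined Acme last year.",          # correctly expresses works_for
    "Alice criticized Acme in the press.",   # noisy: labeled works_for anyway
    "Acme opened an office in Berlin.",      # correctly expresses based_in
]

def distant_label(sentence, kb):
    """Assign every KB relation whose two entities both occur in the sentence."""
    return [rel for (e1, e2), rel in kb.items()
            if e1 in sentence and e2 in sentence]

labels = [distant_label(s, kb) for s in sentences]
```

The second sentence receives the label `works_for` despite not expressing that relation, illustrating the label noise.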
1 code implementation • Automated Knowledge Base Construction Conference 2019 • Christoph Alt, Marc Hübner, Leonhard Hennig
Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification, and combines these with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions.
no code implementations • WS 2018 • Nils Rethmeier, Marc Hübner, Leonhard Hennig
Comments on web news contain controversies that manifest as inter-group agreement-conflicts.
no code implementations • RANLP 2017 • Philippe Thomas, Johannes Kirschnick, Leonhard Hennig, Renlong Ai, Sven Schmeier, Holmer Hemsen, Feiyu Xu, Hans Uszkoreit
We also present promising experimental results for the event extraction component of our system, which recognizes a novel set of event types.
no code implementations • EACL 2017 • Hans Uszkoreit, Aleksandra Gabryszak, Leonhard Hennig, Jörg Steffen, Renlong Ai, Stephan Busemann, Jon Dehdari, Josef van Genabith, Georg Heigold, Nils Rethmeier, Raphael Rubino, Sven Schmeier, Philippe Thomas, He Wang, Feiyu Xu
Web debates play an important role in enabling broad participation of constituencies in social, political and economic decision-making.
no code implementations • ACL 2016 • Leonhard Hennig, Philippe Thomas, Renlong Ai, Johannes Kirschnick, He Wang, Jakob Pannier, Nora Zimmermann, Sven Schmeier, Feiyu Xu, Jan Ostwald, Hans Uszkoreit
no code implementations • LREC 2016 • Kathrin Eichler, Feiyu Xu, Hans Uszkoreit, Leonhard Hennig, Sebastian Krause
Some express a relation that entails the target relation.
no code implementations • LREC 2016 • Aleksandra Gabryszak, Sebastian Krause, Leonhard Hennig, Feiyu Xu, Hans Uszkoreit
Recent research shows the importance of linking linguistic knowledge resources for the creation of large-scale linguistic data.
no code implementations • LREC 2012 • Danuta Ploch, Leonhard Hennig, Angelina Duka, Ernesto William De Luca, Sahin Albayrak
Determining the real-world referents for name mentions of persons, organizations and other named entities in texts has become an important task in many information retrieval scenarios and is referred to as Named Entity Disambiguation (NED).