Search Results for author: Christoph Alt

Found 11 papers, 10 papers with code

A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition

1 code implementation RepL4NLP (ACL) 2022 Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig

Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data.

Contrastive Learning, Low Resource Named Entity Recognition +3

Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling

1 code implementation ACL 2020 David Harbecke, Christoph Alt

Recently, state-of-the-art NLP models gained an increasing syntactic and semantic understanding of language, and explanation methods are crucial to understand their decisions.

General Classification, Language Modelling

Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction

2 code implementations ACL 2020 Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig

Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models.

Relation Extraction

Layerwise Relevance Visualization in Convolutional Text Graph Classifiers

1 code implementation WS 2019 Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig

Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain.

Improving Relation Extraction by Pre-trained Language Representations

1 code implementation Automated Knowledge Base Construction Conference 2019 Christoph Alt, Marc Hübner, Leonhard Hennig

Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification, and combines them with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions.

Unsupervised Pre-training
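The self-attention mechanism this entry refers to can be sketched in a few lines of numpy. Everything here (the dimensions, the random weights, the mean-pooling, and the relation-classifier head) is an illustrative assumption, not the paper's actual TRE implementation; it only shows why self-attention lets every token attend directly to every other token, including distant entity mentions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: the (seq_len x seq_len) score
    # matrix connects every token pair in a single layer, so long-range
    # dependencies between entity mentions need no recurrence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d_model, n_relations = 12, 16, 5

X = rng.normal(size=(seq_len, d_model))  # stand-in for contextual token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)        # shape: (seq_len, d_model)

# Hypothetical classifier head: pool the sequence and score relation labels.
W_cls = rng.normal(size=(d_model, n_relations))
probs = softmax(H.mean(axis=0) @ W_cls)
print(probs.shape)  # (5,)
```

In TRE itself the contextual representations come from a pre-trained language model rather than random embeddings; the sketch above isolates only the attention computation.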

Learning Explanations from Language Data

1 code implementation WS 2018 David Harbecke, Robert Schwarzenberg, Christoph Alt

PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks.
