1 code implementation • WS 2018 • David Harbecke, Robert Schwarzenberg, Christoph Alt
PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks.
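A minimal sketch of the idea in the simplest case, a single linear neuron y = w @ x, following the linear pattern estimator from the PatternAttribution line of work; the synthetic data and variable names below are illustrative assumptions, not the paper's implementation:

```python
# Pattern estimation for a linear neuron: decomposing x = a * y + distractor,
# with the distractor uncorrelated with y, forces cov(x, y) = a * var(y),
# so the pattern is a = cov(x, y) / var(y).
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -1.0])                                          # filter weights
signal = rng.normal(size=(10_000, 1)) * np.array([1.0, 0.0])       # informative part
distractor = rng.normal(size=(10_000, 1)) * np.array([1.0, 1.0])   # w @ d == 0
x = signal + distractor
y = x @ w                                                          # neuron output

a = ((x - x.mean(0)) * (y - y.mean())[:, None]).mean(0) / y.var()  # pattern
print(a)      # ~ [1, 0]: recovers the signal direction, unlike w itself
print(w * a)  # PatternAttribution backpropagates w * a in place of w
```

The weights tell the neuron how to cancel the distractor, while the pattern recovers the direction in which the signal actually varies; backpropagating w ⊙ a instead of w therefore attributes the classification to signal rather than noise.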
1 code implementation • NAACL 2019 • Robert Schwarzenberg, David Harbecke, Vivien Macketanz, Eleftherios Avramidis, Sebastian Möller
Evaluating translation models is a trade-off between effort and detail.
1 code implementation • WS 2019 • Robert Schwarzenberg, Lisa Raithel, David Harbecke
Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models.
1 code implementation • WS 2019 • Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig
Representations in the hidden layers of deep neural networks (DNNs) are often hard to interpret, since it is difficult to project them into an interpretable domain.
1 code implementation • ACL 2020 • David Harbecke, Christoph Alt
State-of-the-art NLP models have recently gained increasing syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions.
1 code implementation • 28 Jan 2021 • David Harbecke
We present a novel explanation method, called OLM, for natural language processing classifiers.
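A minimal sketch of an OLM-style relevance score, assuming a masked language model proposes replacements for the occluded word; the helper names (`classify`, `lm_replacements`, `olm_relevance`) and the choice of bert-base-cased are illustrative, not taken from the paper's code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

def lm_replacements(words, i, k=5):
    """Sample k candidate words for position i from the masked LM."""
    masked = words[:i] + [tok.mask_token] + words[i + 1:]
    inputs = tok(" ".join(masked), return_tensors="pt")
    pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        probs = mlm(**inputs).logits[0, pos].softmax(-1)
    ids = torch.multinomial(probs, k, replacement=True)
    return [tok.decode(int(j)) for j in ids]  # simplification: may yield subwords

def olm_relevance(words, i, classify, k=5):
    """Relevance of word i: the classifier's prediction on the original input
    minus its mean prediction when word i is resampled from the LM.
    `classify(text) -> float` is a hypothetical stand-in for any NLP classifier."""
    original = classify(" ".join(words))
    resampled = [classify(" ".join(words[:i] + [w] + words[i + 1:]))
                 for w in lm_replacements(words, i, k)]
    return original - sum(resampled) / len(resampled)
```

Resampling occluded words from a language model keeps the perturbed inputs close to the data distribution, which is the motivation for preferring this over plain occlusion by deletion.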
1 code implementation • nlppower (ACL) 2022 • David Harbecke, Yuxuan Chen, Leonhard Hennig, Christoph Alt
Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1, or AUC.
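A toy illustration of why that choice matters, using scikit-learn (an assumption; the paper's evaluation setup is not reproduced here): the same predictions score very differently under micro- and macro-averaged F1 once one class dominates.

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1, 2]  # class 0 dominates, e.g. a "no_relation" label
y_pred = [0, 0, 0, 0, 1, 0, 0]  # the model mostly predicts the frequent class

print(f1_score(y_true, y_pred, average="micro"))                   # ~0.71
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.49
```

Micro-averaging rewards getting the frequent class right, while macro-averaging exposes the failures on the rare classes; reporting only one of the two hides half of this picture.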
1 code implementation • 25 Oct 2022 • Yuxuan Chen, David Harbecke, Leonhard Hennig
Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low-data regimes.
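A minimal sketch of prompt-based classification with a masked language model; the cloze template, verbalizer words, and relation-classification framing are illustrative assumptions rather than the prompting scheme proposed in the paper:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

text = "Steve Jobs founded Apple."
prompt = f"{text} The relation between Steve Jobs and Apple is {tok.mask_token}."
verbalizer = {"founder_of": "founder", "employee_of": "employee"}  # hypothetical

inputs = tok(prompt, return_tensors="pt")
pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
with torch.no_grad():
    logits = mlm(**inputs).logits[0, pos]

# Rank labels by the logit of each (assumed single-token) verbalizer word.
scores = {label: logits[tok.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))
```

Because the pre-trained LM already carries knowledge about the verbalizer words, such a setup needs few or no labeled examples, which is what makes prompting attractive in low-data regimes.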