Search Results for author: David Harbecke

Found 7 papers, 7 papers with code

Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling

1 code implementation ACL 2020 David Harbecke, Christoph Alt

Recently, state-of-the-art NLP models have gained an increasingly syntactic and semantic understanding of language, and explanation methods are crucial to understanding their decisions.

Tasks: General Classification, Language Modelling

Layerwise Relevance Visualization in Convolutional Text Graph Classifiers

1 code implementation WS 2019 Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig

Representations in the hidden layers of Deep Neural Networks (DNNs) are often hard to interpret, since it is difficult to project them into an interpretable domain.

Neural Vector Conceptualization for Word Vector Space Interpretation

1 code implementation WS 2019 Robert Schwarzenberg, Lisa Raithel, David Harbecke

Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models.

Task: Natural Language Processing

Learning Explanations from Language Data

1 code implementation WS 2018 David Harbecke, Robert Schwarzenberg, Christoph Alt

PatternAttribution is a recent method, introduced in the vision domain, that explains the classifications of deep neural networks.
