Search Results for author: Leila Arras

Found 8 papers, 7 papers with code

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI

2 code implementations • 16 Mar 2020 • Leila Arras, Ahmed Osman, Wojciech Samek

The rise of deep learning in today's applications has created a growing need to explain a model's decisions beyond prediction performance, in order to foster trust and accountability.

Benchmarking • Explainable Artificial Intelligence (XAI) • +4

Explaining and Interpreting LSTMs

no code implementations • 25 Sep 2019 • Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek

While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved.

Evaluating Recurrent Neural Network Explanations

1 code implementation • WS 2019 • Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek

Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs.

Negation • Sentence • +1

Discovering topics in text datasets by visualizing relevant words

1 code implementation • 18 Jul 2017 • Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When dealing with large collections of documents, it is imperative to quickly get an overview of the texts' contents.

Clustering

Exploring text datasets by visualizing relevant words

2 code implementations • 17 Jul 2017 • Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When working with a new dataset, it is important to first explore and familiarize oneself with it, before applying any advanced machine learning algorithms.

Explaining Recurrent Neural Network Predictions in Sentiment Analysis

1 code implementation • WS 2017 • Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations, in the form of input-space relevances, for understanding feed-forward neural network classification decisions.

General Classification • Interpretable Machine Learning • +1

Explaining Predictions of Non-Linear Classifiers in NLP

1 code implementation • WS 2016 • Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables.
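To illustrate the redistribution idea behind LRP, here is a minimal NumPy sketch of the epsilon rule on a tiny, randomly initialized feed-forward network. The network, weights, and `lrp_epsilon` helper are hypothetical illustrations, not the authors' implementation: output relevance is propagated backwards layer by layer, dividing each neuron's relevance among its inputs in proportion to their contribution to its pre-activation.

```python
import numpy as np

def lrp_epsilon(a, w, b, relevance_out, eps=1e-6):
    """Redistribute a layer's output relevance onto its inputs (epsilon rule).

    Each input i receives relevance proportional to its contribution
    a_i * w_ij to the pre-activation z_j of every output neuron j.
    """
    z = a @ w + b                               # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer against division by ~0
    s = relevance_out / z                       # relevance per unit of pre-activation
    return a * (s @ w.T)                        # contributions pulled back to inputs

# Tiny 2-layer ReLU network with random fixed weights (for demonstration only)
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
w2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)
h = np.maximum(0.0, x @ w1 + b1)                # hidden layer (ReLU)
y = h @ w2 + b2                                 # output scores

# Start from the predicted class's score and propagate it back to the input
r_out = np.zeros(2)
r_out[y.argmax()] = y.max()
r_hidden = lrp_epsilon(h, w2, b2, r_out)
r_input = lrp_epsilon(x, w1, b1, r_hidden)
```

With zero biases and a small epsilon, the input relevances `r_input` sum approximately to the explained output score, reflecting LRP's conservation property.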

General Classification • Image Classification
