Search Results for author: Gabrielle Ras

Found 5 papers, 0 papers with code

Hermitry Ratio: Evaluating the validity of perturbation methods for explainable deep learning

no code implementations · 29 Sep 2021 · Gabrielle Ras, Erdi Çallı, Marcel van Gerven

Perturbation methods are model-agnostic methods used to generate heatmaps to explain black-box algorithms such as deep neural networks.
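The idea behind such perturbation methods can be illustrated with a minimal occlusion-style sketch: slide a patch over the input, replace it with a baseline value, and attribute importance to regions whose occlusion most reduces the black-box model's score. The `predict` function, patch size, and toy model below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def perturbation_heatmap(image, predict, patch=4, baseline=0.0):
    """Occlusion-style perturbation heatmap (illustrative sketch).

    Slides a patch over the image, replaces it with a baseline value,
    and records the drop in the black-box score. A larger drop means
    the occluded region mattered more to the prediction.
    """
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = baseline
            drop = base_score - predict(perturbed)
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)

# Toy black-box "model": score is the mean intensity of the centre region.
def toy_model(img):
    return img[3:5, 3:5].mean()

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0  # the "evidence" the toy model relies on
hm = perturbation_heatmap(img, toy_model)
# Centre pixels receive higher attribution than the corner.
assert hm[3, 3] > hm[0, 0]
```

Because the method only queries `predict`, it applies to any black-box model, which is what makes perturbation approaches model-agnostic.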

Explainable Artificial Intelligence (XAI) Image Classification

The 3TConv: An Intrinsic Approach to Explainable 3D CNNs

no code implementations · 1 Jan 2021 · Gabrielle Ras, Luca Ambrogioni, Pim Haselager, Marcel van Gerven, Umut Güçlü

In a 3TConv, the 3D convolutional filter is obtained by learning a 2D filter and a set of temporal transformation parameters, resulting in a sparse filter that requires fewer parameters.
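The parameter saving can be sketched as follows. This is a simplified illustration, assuming a per-frame scale and bias as the temporal transformation; the paper's actual transformation set is richer, and `make_3t_filter` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def make_3t_filter(filter2d, scales, biases):
    """Build a T x k x k pseudo-3D filter from one shared 2D filter
    plus per-frame transformation parameters (here: scale and bias).
    Simplified sketch of the 3TConv idea, not the paper's exact method."""
    return np.stack([s * filter2d + b for s, b in zip(scales, biases)])

k, T = 3, 5  # spatial kernel size and temporal depth
rng = np.random.default_rng(0)
filter2d = rng.standard_normal((k, k))
scales = rng.standard_normal(T)
biases = rng.standard_normal(T)

filt3d = make_3t_filter(filter2d, scales, biases)
assert filt3d.shape == (T, k, k)

# A full 3D filter learns T*k*k weights; the 3TConv-style filter
# needs only the shared 2D filter plus 2 parameters per frame.
full_params = T * k * k      # 45 for k=3, T=5
t3_params = k * k + 2 * T    # 19 for k=3, T=5
assert t3_params < full_params
```

The gap widens with temporal depth: the shared 2D filter is paid for once, while a full 3D filter repeats its spatial weights for every frame.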

Action Recognition

Explainable Deep Learning: A Field Guide for the Uninitiated

no code implementations · 30 Apr 2020 · Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran

The field guide: i) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning, ii) discusses evaluations for model explanations, iii) places explainability in the context of other related deep learning research areas, and iv) elaborates on user-oriented explanation design and potential future directions for explainable deep learning.

Decision Making

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges

no code implementations · 20 Mar 2018 · Gabrielle Ras, Marcel van Gerven, Pim Haselager

Different kinds of users are identified and their concerns revealed; relevant statements from the General Data Protection Regulation are analyzed in the context of Deep Neural Networks (DNNs); a taxonomy for classifying existing explanation methods is introduced; and finally, the various classes of explanation methods are analyzed to verify whether user concerns are justified.
