Search Results for author: Daniel Lundstrom

Found 4 papers, 1 paper with code

Four Axiomatic Characterizations of the Integrated Gradients Attribution Method

no code implementations · 23 Jun 2023 · Daniel Lundstrom, Meisam Razaviyayn

Deep neural networks have produced significant progress among machine learning models in terms of accuracy and functionality, but their inner workings are still largely unknown.

Distributing Synergy Functions: Unifying Game-Theoretic Interaction Methods for Machine-Learning Explainability

no code implementations · 4 May 2023 · Daniel Lundstrom, Meisam Razaviyayn

We show that, given modest assumptions, a unique full account of interactions between features, called synergies, is possible in the continuous input setting.

Decision Making · Fairness

A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions

1 code implementation · 24 Feb 2022 · Daniel Lundstrom, Tianjian Huang, Meisam Razaviyayn

Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction.
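The Integrated Gradients method studied in this paper attributes a prediction to input features by averaging the model's gradients along a straight-line path from a baseline to the input. A minimal NumPy sketch of the standard Riemann-sum approximation follows; the function and parameter names (`grad_f`, `baseline`, `steps`) are illustrative, not from the paper's code release.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions.

    grad_f:   function returning the gradient of the model output
              with respect to its input (a NumPy array)
    x:        the input to explain
    baseline: the reference input (often all zeros)
    steps:    number of Riemann-sum steps along the path
    """
    # Midpoint-rule interpolation coefficients in (0, 1)
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient evaluated at a point on the baseline-to-input path
        total += grad_f(baseline + a * (x - baseline))
    avg_grad = total / steps
    # Attribution: path difference times average gradient per feature
    return (x - baseline) * avg_grad
```

For a linear model the approximation is exact, and the attributions sum to the difference between the model's output at the input and at the baseline (the "completeness" axiom discussed in this line of work).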

Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations

no code implementations · 15 Jan 2022 · Daniel Lundstrom, Alexander Huyen, Arya Mevada, Kyongsik Yun, Thomas Lu

This work provides a set of explainability tools (ET) that opens the black box of a DNN so that the individual contributions of neurons to category classification can be ranked and visualized.

Image Segmentation · Object Recognition +1
