Search Results for author: Ian E. Nielsen

Found 4 papers, 0 papers with code

Targeted Background Removal Creates Interpretable Feature Visualizations

no code implementations • 22 Jun 2023 • Ian E. Nielsen, Erik Grundeland, Joseph Snedeker, Ghulam Rasool, Ravi P. Ramachandran

Feature visualization is a technique for visualizing the features learned by black-box machine learning models.
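
For orientation, the standard way to produce such a visualization is activation maximization: optimize an input image so that a chosen unit responds strongly. Below is a minimal PyTorch sketch of that generic procedure (not the paper's targeted background removal); the torchvision model, layer, channel index, and optimization settings are arbitrary placeholders.

```python
# Illustrative sketch of feature visualization via activation maximization.
# Model, layer, and channel are arbitrary choices, not taken from the paper.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Start from random noise and optimize the input itself.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

activations = {}
def hook(_module, _inp, out):
    activations["feat"] = out

handle = model.layer3.register_forward_hook(hook)

channel = 7  # hypothetical channel to visualize
for _ in range(200):
    optimizer.zero_grad()
    model(image)
    # Maximize the mean activation of the chosen channel.
    loss = -activations["feat"][0, channel].mean()
    loss.backward()
    optimizer.step()

handle.remove()
visualization = image.detach().clamp(0, 1)
```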

EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models

no code implementations • 15 Mar 2023 • Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Bouaynaya, Hassan M. Fathallah-Shaykh, Ghulam Rasool

The expansion of explainable artificial intelligence as a field of research has generated numerous methods of visualizing and understanding the black box of a machine learning model.

Explainable artificial intelligence

Transformers in Time-series Analysis: A Tutorial

no code implementations • 28 Apr 2022 • Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ghulam Rasool, Ravi P. Ramachandran

The Transformer architecture has widespread applications, particularly in natural language processing and computer vision.

Time Series • Time Series Analysis
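
As a rough illustration of the kind of model such a tutorial covers, the sketch below applies a standard PyTorch Transformer encoder to a multivariate time series. The dimensions, positional embedding, and pooling head are arbitrary placeholders, not the paper's recommended configuration.

```python
# Minimal sketch: Transformer encoder over a multivariate time series.
import torch
import torch.nn as nn

d_model, n_heads, seq_len, n_features = 64, 4, 100, 8

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=n_heads, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Project raw features into the model dimension, add a learned
# positional embedding, then encode the sequence.
input_proj = nn.Linear(n_features, d_model)
pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))

x = torch.randn(32, seq_len, n_features)   # (batch, time, features)
h = encoder(input_proj(x) + pos_embed)     # (batch, time, d_model)

# Pool over time for a sequence-level prediction (e.g., forecasting a scalar).
head = nn.Linear(d_model, 1)
y_hat = head(h.mean(dim=1))
```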

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks

no code implementations • 23 Jul 2021 • Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran

Later, we discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in having meaningful explanations.

Adversarial Robustness
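
For a concrete sense of the simplest gradient-based attribution method discussed in tutorials of this kind, a vanilla-gradient saliency map can be computed as below. The model and input are placeholders; this is not the paper's evaluation protocol.

```python
# Vanilla-gradient saliency: gradient of the target-class score w.r.t. the input.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
logits = model(x)
target_class = logits.argmax(dim=1).item()

# Backpropagate the target-class score to the input pixels.
logits[0, target_class].backward()

# Collapse the channel dimension to obtain a per-pixel attribution map.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```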
