Feature Importance

246 papers with code • 6 benchmarks • 5 datasets

Feature importance methods quantify how much each input feature contributes to a model's predictions or to its predictive performance.

Most implemented papers

Understanding Global Feature Contributions With Additive Importance Measures

iancovert/sage NeurIPS 2020

Understanding the inner workings of complex machine learning models is a long-standing problem, and most recent research has focused on local interpretability.

Efficient nonparametric statistical inference on population feature importance using Shapley values

bdwilliamson/vimp ICML 2020

The true population-level importance of a variable in a prediction task provides useful knowledge about the underlying data-generating mechanism and can help in deciding which measurements to collect in subsequent experiments.

Relative Feature Importance

gcskoenig/icpr2020-rfi 16 Jul 2020

Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model.

Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset

smerdov/eSports_Sensors_Dataset 2 Nov 2020

An important feature of the dataset is simultaneous data collection from five players, which facilitates the analysis of sensor data on a team level.

Feature Importance-aware Transferable Adversarial Attacks

hcguoO0/FIA ICCV 2021

More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.
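A minimal sketch of the aggregate-gradient idea described above, assuming a PyTorch classifier, a chosen intermediate layer, and random pixel masking as the transform; the layer choice, masking probability, and number of transforms are illustrative, not the paper's exact settings:

```python
import torch

def aggregate_gradient(model, layer, image, label, n_transforms=30, drop_prob=0.3):
    # image: clean input with batch dimension 1; layer: an intermediate module of `model`.
    feats = {}

    def hook(module, inputs, output):
        output.retain_grad()        # keep the feature map's gradient after backward
        feats["map"] = output

    handle = layer.register_forward_hook(hook)
    grad_sum = None
    for _ in range(n_transforms):
        # Random pixel dropping as the "random transform" (an assumption in this sketch).
        mask = (torch.rand_like(image) > drop_prob).float()
        logits = model(image * mask)
        model.zero_grad()
        logits[0, label].backward()
        g = feats["map"].grad.detach()
        grad_sum = g if grad_sum is None else grad_sum + g
    handle.remove()
    return grad_sum / n_transforms  # averaged gradient = feature-importance weights
```

The averaged gradient can then serve as a feature-importance weighting when crafting the transferable attack objective.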

Label-Free Explainability for Unsupervised Models

vanderschaarlab/mlforhealthlabpub 3 Mar 2022

Unsupervised black-box models are challenging to interpret.

Interpretable machine learning for time-to-event prediction in medicine and healthcare

modeloriented/survex 17 Mar 2023

Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.

Interpretation of Neural Networks is Fragile

pytorch/captum 29 Oct 2017

In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.
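The fragility claim can be probed by comparing gradient saliency maps for two nearby inputs. The sketch below uses Captum's Saliency attribution and a small random perturbation purely for illustration; the paper constructs its perturbations adversarially rather than randomly.

```python
import torch
from captum.attr import Saliency

def saliency_similarity(model, x, eps=1e-3):
    model.eval()
    x_pert = x + eps * torch.randn_like(x)  # near-identical copy (random noise, not the paper's attack)
    pred = model(x).argmax(dim=1)
    assert torch.equal(pred, model(x_pert).argmax(dim=1)), "perturbation changed the label"

    sal = Saliency(model)
    a = sal.attribute(x, target=pred).flatten()
    b = sal.attribute(x_pert, target=pred).flatten()
    # Low cosine similarity despite an unchanged prediction indicates a fragile explanation.
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()
```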

Towards Automatic Concept-based Explanations

amiratag/ACE NeurIPS 2019

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.

Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations

koalaverse/vip 8 Apr 2019

Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
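Permutation feature importance is one instance of this sampling-intervention-prediction-aggregation recipe. A minimal sketch, assuming a generic predict function and a loss of the form loss(y_true, y_pred); the function names and defaults are illustrative, not the paper's API:

```python
import numpy as np

def permutation_importance(predict, X, y, loss, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))                 # prediction + aggregation on unmodified data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_int = X.copy()
            X_int[:, j] = rng.permutation(X_int[:, j])  # intervention: permute one feature
            scores.append(loss(y, predict(X_int)))      # prediction on the intervened data
        importances[j] = np.mean(scores) - base         # aggregation: loss increase as importance
    return importances
```

With a scikit-learn regressor, for example, predict=model.predict and loss=sklearn.metrics.mean_squared_error fit this signature.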