Feature Importance
246 papers with code • 6 benchmarks • 5 datasets
Most implemented papers
Understanding Global Feature Contributions With Additive Importance Measures
Understanding the inner workings of complex machine learning models is a long-standing problem, and most recent research has focused on local interpretability.
Efficient nonparametric statistical inference on population feature importance using Shapley values
The true population-level importance of a variable in a prediction task provides useful knowledge about the underlying data-generating mechanism and can help in deciding which measurements to collect in subsequent experiments.
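Shapley values attribute a model's predictive value to individual features by averaging each feature's marginal contribution over all subsets of the remaining features. The sketch below computes exact Shapley values for a small toy payoff function; it illustrates the attribution scheme only, not the paper's nonparametric population-level estimator, and the function names are illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a set-valued payoff function.

    `features` is a list of feature names; `value(subset)` returns the
    predictive value (e.g. variance explained) of a model restricted to
    that subset of features.
    """
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value(set(s) | {f}) - value(set(s)))
    return phi

# Toy additive payoff with known per-feature contributions; for an
# additive game, each feature's Shapley value equals its contribution.
contrib = {"x1": 0.5, "x2": 0.3, "x3": 0.1}
value = lambda s: sum(contrib[f] for f in s)
print(shapley_values(list(contrib), value))
```

The exact computation enumerates all 2^(n-1) coalitions per feature, so practical estimators (including the paper's) rely on sampling rather than exhaustive enumeration.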
Relative Feature Importance
Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model.
Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset
An important feature of the dataset is simultaneous data collection from five players, which facilitates the analysis of sensor data on a team level.
Feature Importance-aware Transferable Adversarial Attacks
More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.
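The aggregation step can be sketched in miniature: average the gradient of a model over several randomly transformed copies of the input, and read the averaged gradient as a feature-importance signal. The code below is a toy stand-in, assuming a caller-supplied `grad_fn` and using Gaussian perturbations in place of the paper's image transforms; it is not the paper's implementation.

```python
import random

def aggregate_gradient(grad_fn, x, n_transforms=30, noise=0.1, seed=0):
    """Average gradients over random perturbations of the input.

    `grad_fn(x)` returns the model's gradient w.r.t. the features of x;
    the Gaussian noise is a toy stand-in for random image transforms.
    """
    rng = random.Random(seed)
    agg = [0.0] * len(x)
    for _ in range(n_transforms):
        xt = [xi + rng.gauss(0.0, noise) for xi in x]  # random transform
        g = grad_fn(xt)
        agg = [a + gi / n_transforms for a, gi in zip(agg, g)]
    return agg

# Toy linear model f(x) = sum(w_i * x_i): its gradient is the constant
# vector w, so the aggregate gradient recovers w under any transforms.
w = [0.2, -1.0, 0.5]
grad_fn = lambda x: w
print(aggregate_gradient(grad_fn, [1.0, 2.0, 3.0]))
```

For a real network the gradient is taken with respect to intermediate feature maps rather than raw inputs, and the averaging suppresses model-specific noise so the attack transfers better across architectures.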
Label-Free Explainability for Unsupervised Models
Unsupervised black-box models are challenging to interpret.
Interpretable machine learning for time-to-event prediction in medicine and healthcare
Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.
Interpretation of Neural Networks is Fragile
In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.
Towards Automatic Concept-based Explanations
Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.
Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
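Permutation feature importance is a standard instance of this sample-intervene-predict-aggregate pattern: sample the data, intervene by permuting one feature column, predict with the unchanged model, and aggregate the resulting loss increase. The sketch below is a minimal pure-Python illustration of that pattern, not the paper's framework API; all names are illustrative.

```python
import random

def permutation_importance(predict, X, y, loss, n_repeats=5, seed=0):
    """Sampling, Intervention, Prediction, Aggregation for one model.

    Importance of feature j = average increase in `loss` after randomly
    permuting column j (which breaks its link to the target).
    """
    rng = random.Random(seed)
    base = loss(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # intervention: permute feature j
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            deltas.append(loss(y, [predict(row) for row in Xp]) - base)
        importances.append(sum(deltas) / n_repeats)  # aggregation
    return importances

# Toy model that uses only feature 0: permuting the unused feature 1
# leaves the loss unchanged, so its importance is exactly zero.
predict = lambda row: row[0]
X = [[float(i), float(i % 3)] for i in range(20)]
y = [row[0] for row in X]
mse = lambda yt, yp: sum((a - b) ** 2 for a, b in zip(yt, yp)) / len(yt)
imp = permutation_importance(predict, X, y, mse)
```

Because only the intervention step touches the data and the model is queried as a black box, the same skeleton accommodates other interventions (e.g. conditional resampling) and other aggregation choices.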