Feature Importance
313 papers with code • 6 benchmarks • 6 datasets
Most implemented papers
FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction
In this paper, a new model named FiBiNET, short for Feature Importance and Bilinear feature Interaction NETwork, is proposed to dynamically learn feature importance and fine-grained feature interactions.
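As a rough illustration of the squeeze-and-excitation style gating FiBiNET uses to reweight field embeddings, here is a minimal PyTorch sketch; the class and tensor names are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SENetFieldGate(nn.Module):
    """Illustrative squeeze-and-excitation gate over feature-field embeddings.

    Input: (batch, num_fields, embed_dim); output has the same shape, with
    each field's embedding rescaled by a learned importance weight.
    """
    def __init__(self, num_fields: int, reduction: int = 3):
        super().__init__()
        hidden = max(1, num_fields // reduction)
        self.excite = nn.Sequential(
            nn.Linear(num_fields, hidden), nn.ReLU(),
            nn.Linear(hidden, num_fields), nn.ReLU(),
        )

    def forward(self, field_embeddings: torch.Tensor) -> torch.Tensor:
        # Squeeze: summarize each field's embedding into a single statistic.
        summary = field_embeddings.mean(dim=-1)        # (batch, num_fields)
        # Excite: produce one importance weight per field.
        weights = self.excite(summary)                 # (batch, num_fields)
        # Re-weight: scale each field's embedding by its importance.
        return field_embeddings * weights.unsqueeze(-1)
```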
A Unified Approach to Interpreting Model Predictions
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.
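The SHAP framework from this paper ships as the `shap` Python package; a minimal usage sketch on a tree model (the dataset and model below are placeholders, not tied to the paper's experiments):

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Fit any model; tree ensembles have a fast, exact SHAP algorithm.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Explain individual predictions: one additive contribution per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global feature importance = mean absolute SHAP value per feature.
shap.summary_plot(shap_values, X.iloc[:100], plot_type="bar")
```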
RISE: Randomized Input Sampling for Explanation of Black-box Models
We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments.
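A stripped-down sketch of the RISE idea itself: probe a black-box scoring function with random binary masks and average the masks weighted by the resulting class scores. This uses nearest-neighbor upsampling rather than the paper's shifted bilinear masks, and `score_fn` is a placeholder for your model.

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=2000, grid=7, p_keep=0.5, rng=None):
    """Monte-Carlo saliency for a black-box model.

    image:    array of shape (H, W, C)
    score_fn: callable mapping a masked image to a scalar class score
    Returns an (H, W) saliency map (higher = more important).
    """
    rng = np.random.default_rng(rng)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution random binary mask, upsampled to image size.
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        mask = np.kron(coarse, np.ones((H // grid + 1, W // grid + 1)))[:H, :W]
        # Weight the mask by the model's score on the masked input.
        saliency += mask * score_fn(image * mask[..., None])
    return saliency / (n_masks * p_keep)
```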
FAT-DeepFFM: Field Attentive Deep Field-aware Factorization Machine
Although some CTR models, such as the Attentional Factorization Machine (AFM), have been proposed to model the weights of second-order interaction features, we posit that evaluating feature importance before the explicit feature-interaction step is also important for CTR prediction tasks, because the model can learn to selectively highlight informative features and suppress less useful ones when the task has many input features.
Attention is not Explanation
Attention mechanisms have seen wide adoption in neural NLP models.
Interpretable machine learning: definitions, methods, and applications
Official code for using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions" https://arxiv.org/abs/1806.05337
Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record
The wide implementation of electronic health record (EHR) systems facilitates the collection of large-scale health data from real clinical settings.
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
Tree ensembles, such as random forests and AdaBoost, are ubiquitous machine learning models known for achieving strong predictive performance across a wide variety of domains.
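For context, the standard single-feature importances for such ensembles are impurity- or permutation-based; a minimal scikit-learn sketch of both is below. This is background only, not the disentangled attribution curves proposed in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Impurity-based importances come for free but favor high-cardinality features.
print(dict(zip(X.columns, forest.feature_importances_)))

# Permutation importance on held-out data is a less biased alternative.
result = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean:.4f}")
```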
Distributed and parallel time series feature extraction for industrial big data applications
This problem is especially hard to solve for time series classification and regression in industrial applications such as predictive maintenance or production line optimization, for which each label or regression target is associated with several time series and meta-information simultaneously.
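The tsfresh library associated with this work implements the pipeline described here: extract a large set of candidate features per time series, then keep only those with a statistically significant relation to the target. A minimal sketch on toy data (the series, labels, and column names are made up for illustration):

```python
import numpy as np
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# Toy long-format input: one row per measurement, 20 short series in two classes.
rng = np.random.default_rng(0)
rows, labels = [], {}
for sid in range(20):
    rising = sid % 2 == 0
    labels[sid] = int(rising)
    values = np.arange(10) * (1 if rising else -1) + rng.normal(0, 0.1, 10)
    rows += [{"id": sid, "time": t, "value": v} for t, v in enumerate(values)]
df = pd.DataFrame(rows)
y = pd.Series(labels)

# Extract hundreds of candidate features per series, then keep the relevant ones.
X = extract_features(df, column_id="id", column_sort="time")
impute(X)                           # clean NaN/inf from degenerate features
X_selected = select_features(X, y)  # hypothesis-test based relevance filter
print(X_selected.shape)
```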
A Benchmark for Interpretability Methods in Deep Neural Networks
We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.
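A rough tabular analogue of that kind of remove-and-retrain measure, only meant to convey the shape of the evaluation (the paper works with image classifiers): replace the features an estimator ranks as most important with an uninformative value, retrain, and compare held-out accuracy against a random ranking.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_removal(importance, fraction):
    """Retrain after replacing the top-`fraction` ranked features with their mean."""
    k = int(fraction * X_tr.shape[1])
    top = np.argsort(importance)[::-1][:k]
    X_tr_mod, X_te_mod = X_tr.copy(), X_te.copy()
    X_tr_mod[:, top] = X_tr[:, top].mean(axis=0)   # uninformative replacement
    X_te_mod[:, top] = X_tr[:, top].mean(axis=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr_mod, y_tr)
    return model.score(X_te_mod, y_te)

base = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
estimate = base.feature_importances_                       # estimator under test
random_estimate = np.random.default_rng(0).random(X_tr.shape[1])  # baseline

# A better importance estimate should degrade accuracy faster than random.
for frac in (0.1, 0.3, 0.5):
    print(frac, accuracy_after_removal(estimate, frac),
          accuracy_after_removal(random_estimate, frac))
```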