We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.
This problem is especially hard to solve for time series classification and regression in industrial applications such as predictive maintenance or production line optimization, for which each label or regression target is associated with several time series and meta-information simultaneously.
In this paper, we propose FiBiNET (Feature Importance and Bilinear feature Interaction NETwork), a new model that dynamically learns feature importance and fine-grained feature interactions.
Ranked #5 on Click-Through Rate Prediction on Criteo
These explanations convey whether a subset of features is necessary and/or sufficient for changing a model's classification outcome, information that standard feature attribution methods are unable to provide.
Official code for using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions" https://arxiv.org/abs/1806.05337
This method, named QuickSelection, introduces neuron strength in sparse neural networks as a criterion for measuring feature importance.
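To make the criterion concrete, here is a minimal sketch of strength-based feature selection on a sparse input layer. It assumes (this is an illustrative simplification, not the paper's exact definition or code) that an input neuron's strength is the sum of the absolute values of its outgoing weights, with zeros denoting pruned connections:

```python
import numpy as np

def input_neuron_strength(W):
    """W: (n_inputs, n_hidden) sparse weight matrix (zeros = pruned connections).
    Strength of each input neuron = sum of absolute outgoing weights."""
    return np.abs(W).sum(axis=1)

def select_features(W, k):
    """Keep the k input features with the largest strength, most important first."""
    strength = input_neuron_strength(W)
    return np.argsort(strength)[::-1][:k]

# Toy sparse layer: 3 input features, 3 hidden units.
W = np.array([[0.0, 2.0, 0.0],   # feature 0: strength 2.0
              [0.1, 0.0, 0.2],   # feature 1: strength 0.3
              [1.0, 1.0, 1.0]])  # feature 2: strength 3.0
top2 = select_features(W, 2)     # -> features 2 and 0
```

A densely weighted input neuron pulls more signal through the network than a mostly pruned one, which is the intuition behind ranking features this way.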
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
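One classic model-agnostic technique is permutation importance: shuffle a single feature column and measure how much a performance metric degrades, treating the model purely as a prediction function. A minimal sketch (the function names and the toy model are illustrative assumptions, not taken from any of the papers above):

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: shuffle one column at a time and
    record the average drop in the metric relative to the baseline."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/target association
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: a "model" that only looks at feature 0.
def accuracy(y, p):
    return float(np.mean(y == p))

X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y, accuracy)
# imp[0] is large; imp[1] and imp[2] are zero, since the model ignores them.
```

Because the procedure only calls `predict`, it works identically for trees, kernel methods, or deep networks, which is exactly what "model-agnostic" means here.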
Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables.