17 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Explainable Models.
Discovery of Nonlinear Dynamical Systems using a Runge-Kutta Inspired Dictionary-based Sparse Regression Approach
Discovering dynamical models that describe the underlying behavior of a system is essential for drawing decisive conclusions and for engineering studies, e.g., optimizing a process.
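For a rough illustration of the dictionary-based sparse regression idea, here is a minimal SINDy-style sketch in NumPy. The function name, the choice of dictionary terms, and the thresholding loop are illustrative assumptions, not the paper's Runge-Kutta-inspired scheme:

```python
# Minimal sketch of dictionary-based sparse regression for dynamics discovery
# (SINDy-style). The paper's Runge-Kutta-inspired variant differs in how
# derivatives/integration are handled; everything here is illustrative.
import numpy as np

def discover_dynamics(x, dx_dt, threshold=0.1, n_iters=10):
    """Fit dx/dt ~ Theta(x) @ xi with sequentially thresholded least squares."""
    # Candidate dictionary: constant, linear, quadratic, and cubic terms.
    theta = np.column_stack([np.ones(len(x)), x, x**2, x**3])
    xi = np.linalg.lstsq(theta, dx_dt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(xi) < threshold            # prune weak terms
        xi[small] = 0.0
        big = ~small
        if big.any():                             # refit on surviving terms
            xi[big] = np.linalg.lstsq(theta[:, big], dx_dt, rcond=None)[0]
    return xi                                     # sparse dictionary coefficients

# Example: recover dx/dt = -2x + 0.5x^3 from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
dx = -2 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(x.shape)
print(discover_dynamics(x, dx))                   # approx. [0, -2, 0, 0.5]
```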
We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths.
We find that the EQL-based architecture can extrapolate quite well outside of the training data set compared to a standard neural network-based architecture, paving the way for deep learning to be applied in scientific exploration and discovery.
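For intuition on why such a network extrapolates, here is a minimal sketch of an EQL-style layer in NumPy: generic activations are replaced by interpretable primitives (identity, sin, cos, products), so a trained, sparsified network can be read off as a formula. The layer structure and names below are assumptions, not the paper's exact architecture:

```python
# Minimal sketch of an EQL-style layer: each unit applies a symbolic primitive,
# so the network corresponds to an expression valid outside the training range.
import numpy as np

def eql_layer(z):
    """Apply EQL-style primitives to a linearly transformed input z of shape (n, 4)."""
    ident, s, c = z[:, 0], np.sin(z[:, 1]), np.cos(z[:, 2])
    prod = z[:, 3] * z[:, 0]                      # pairwise multiplication unit
    return np.column_stack([ident, s, c, prod])

# Forward pass: y = primitives(x @ W1) @ W2; with sparse weights, the learned
# expression (here 0.5*sin(x) + x^2) can be read directly from the network.
x = np.linspace(-3, 3, 5).reshape(-1, 1)
W1 = np.array([[1.0, 1.0, 0.0, 1.0]])            # routes x into the 4 unit inputs
W2 = np.array([[0.0], [0.5], [0.0], [1.0]])      # selects 0.5*sin(x) + x*x
print(eql_layer(x @ W1) @ W2)
```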
Defining a representative locality is a pressing challenge for perturbation-based explanation methods, since the choice of locality influences the fidelity and soundness of the resulting explanations.
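To make the role of locality concrete, here is a LIME-style sketch in NumPy: the kernel width defines the locality, and different widths yield different attributions. The function name, kernel form, and sampling scheme are illustrative assumptions, not a specific method from the paper:

```python
# Minimal sketch of a perturbation-based local explanation: perturb around x0,
# weight samples by a locality kernel, and fit a weighted linear surrogate.
import numpy as np

def local_explanation(f, x0, kernel_width=0.5, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))  # perturbations
    y = f(X)
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d / kernel_width) ** 2)          # locality kernel
    # Weighted least squares: linear surrogate of f around x0.
    Xb = np.column_stack([X - x0, np.ones(n_samples)])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Xb, sw * y, rcond=None)
    return coef[:-1]                              # local feature attributions

# Example: a nonlinear black box; attributions depend on the locality chosen.
f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
x0 = np.array([0.5, 1.0])
print(local_explanation(f, x0, kernel_width=0.2))  # near the local gradient
print(local_explanation(f, x0, kernel_width=2.0))  # a broader, smoothed view
```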
Prior work on explainable models in MIR has generally used image processing tools to produce explanations for DNN predictions, but these are not necessarily musically meaningful, nor can they be listened to (which, arguably, is important in music).
As a second contribution, our study reveals limitations of explaining black-box policies via imitation learning with tree-based explainable models, owing to the inherent instability of this approach.
First, it learns a dictionary from a large collection of shape datasets, so that any shape can be decomposed into a linear combination of dictionary atoms.
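A minimal sketch of that decomposition step, assuming a learned unit-norm dictionary and using a greedy matching-pursuit solver as an illustrative stand-in for the paper's method:

```python
# Minimal sketch: express a shape descriptor s as a sparse linear combination
# s ~ D @ a over a dictionary D whose columns are learned atoms.
import numpy as np

def sparse_decompose(D, s, n_atoms=3):
    """Greedy (matching-pursuit-style) sparse coding of s over dictionary D."""
    a = np.zeros(D.shape[1])
    r = s.copy()                                   # residual
    for _ in range(n_atoms):
        k = np.argmax(np.abs(D.T @ r))             # best-matching atom
        a[k] += D[:, k] @ r                        # atoms assumed unit-norm
        r = s - D @ a
    return a

# Example: a flat shape descriptor built from two atoms of a random dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
s = 2.0 * D[:, 3] - 0.7 * D[:, 9]                  # ground-truth combination
a = sparse_decompose(D, s, n_atoms=2)
print(np.nonzero(a)[0], a[np.nonzero(a)])          # approx. recovers atoms 3 and 9
```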