Interpretable Machine Learning
183 papers with code • 1 benchmark • 4 datasets
The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.
Source: Assessing the Local Interpretability of Machine Learning Models
Libraries
Use these libraries to find Interpretable Machine Learning models and implementations.

Latest papers
Interpretable Machine Learning for TabPFN
The recently developed Prior-Data Fitted Networks (PFNs) have shown very promising results for applications in low-data regimes.
Interpretable Machine Learning for Survival Analysis
With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.
Rethinking Interpretability in the Era of Large Language Models
We highlight two emerging research priorities for LLM interpretation: using LLMs to directly analyze new datasets and to generate interactive explanations.
PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression
Therefore, a greedy pruning algorithm is proposed to prune the network into a subnetwork while ensuring the accuracy of data fitting.
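The greedy idea can be illustrated with a toy sketch (this is not PruneSymNet's actual algorithm, just a hedged illustration of greedy pruning): repeatedly remove the weight whose removal increases the fitting error the least, stopping once the fit would degrade beyond a tolerance.

```python
# Hedged sketch of greedy pruning on a toy "network": a linear model over
# feature powers x^0..x^3. All names here are illustrative, not the paper's.

def predict(weights, x):
    # weights maps power i -> coefficient of x^i
    return sum(w * x ** i for i, w in weights.items())

def sse(weights, X, y):
    # Sum of squared errors of the current subnetwork on the data.
    return sum((predict(weights, xi) - yi) ** 2 for xi, yi in zip(X, y))

def greedy_prune(weights, X, y, tol=1e-6):
    """Greedily drop the weight whose removal hurts the fit least,
    while the remaining subnetwork still fits the data within tol."""
    weights = dict(weights)
    while len(weights) > 1:
        best = min(
            weights,
            key=lambda i: sse({j: w for j, w in weights.items() if j != i}, X, y),
        )
        trial = {j: w for j, w in weights.items() if j != best}
        if sse(trial, X, y) > tol:
            break  # any further pruning would break the data fit
        weights = trial
    return weights

X = [0.0, 1.0, 2.0, 3.0]
y = [2 * xi for xi in X]                   # true function: y = 2x
dense = {0: 0.0, 1: 2.0, 2: 0.0, 3: 0.0}  # dense model with spurious terms
print(greedy_prune(dense, X, y))           # -> {1: 2.0}
```

The pruned result keeps only the single term needed to fit the data exactly, which is the interpretability payoff of pruning a symbolic network down to a compact expression.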
Air Quality Forecasting Using Machine Learning: A Global perspective with Relevance to Low-Resource Settings
The evaluation of several machine learning models demonstrates the effectiveness of the Random Forest algorithm in generating reliable predictions, particularly when applied to classification rather than regression, an approach that improves the model's generalizability by 42%, achieving cross-validation scores of 0.38 for regression and 0.89 for classification.
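The regression-to-classification reframing amounts to binning the continuous target into categories before training. A minimal sketch, assuming AQI-style PM2.5 thresholds (the cut points below are illustrative, not the paper's):

```python
# Illustrative sketch: recast a continuous air-quality target as a
# classification label by binning PM2.5 concentrations into coarse
# categories. Thresholds are assumed for illustration only.

def to_category(pm25: float) -> str:
    """Map a PM2.5 concentration (µg/m³) to a coarse air-quality class."""
    if pm25 <= 12.0:
        return "good"
    elif pm25 <= 35.4:
        return "moderate"
    else:
        return "unhealthy"

readings = [8.3, 20.1, 55.7, 11.9]
labels = [to_category(r) for r in readings]
print(labels)  # -> ['good', 'moderate', 'unhealthy', 'good']
```

A classifier trained on such labels predicts the category directly, which is often more robust than regressing the raw concentration in low-resource settings where measurements are sparse and noisy.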
Q-SENN: Quantized Self-Explaining Neural Networks
Explanations in Computer Vision are often desired, but most Deep Neural Networks can only provide saliency maps with questionable faithfulness.
Perceptual Musical Features for Interpretable Audio Tagging
In the age of music streaming platforms, the task of automatically tagging music audio has garnered significant attention, driving researchers to devise methods aimed at enhancing performance metrics on standard datasets.
GFN-SR: Symbolic Regression with Generative Flow Networks
Symbolic regression (SR) is an area of interpretable machine learning that aims to identify mathematical expressions, often composed of simple functions, that best fit a given set of covariates $X$ and response $y$.
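The SR objective above can be sketched in miniature (this is a brute-force toy, not the GFN-SR method): search a small library of candidate expressions for the one minimizing mean squared error on $(X, y)$.

```python
import math

# Toy brute-force symbolic regression: the candidate library and data
# below are illustrative assumptions, not from the paper.
candidates = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "sin(x)": math.sin,
    "exp(x)": math.exp,
}

def mse(f, X, y):
    return sum((f(xi) - yi) ** 2 for xi, yi in zip(X, y)) / len(X)

def fit_symbolic(X, y):
    """Return the name of the candidate expression with lowest MSE."""
    return min(candidates, key=lambda name: mse(candidates[name], X, y))

X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [xi * xi for xi in X]      # ground truth: y = x^2
print(fit_symbolic(X, y))      # -> x^2
```

Real SR systems replace this fixed library with a combinatorial search over expression trees (genetic programming, neural-guided search, or, in GFN-SR, generative flow networks), but the fitness criterion is the same fit-to-data idea.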
Modelling wildland fire burn severity in California using a spatial Super Learner approach
We develop a machine learning model to predict post-fire burn severity using pre-fire remotely sensed data.
Neural Network Pruning by Gradient Descent
The rapid increase in the parameters of deep learning models has led to significant costs, challenging computational efficiency and model interpretability.