Interpretable Machine Learning

183 papers with code • 1 benchmark • 4 datasets

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Libraries

Use these libraries to find Interpretable Machine Learning models and implementations
See all 10 libraries.

Interpretable Machine Learning for TabPFN

david-rundel/tabpfn_iml 16 Mar 2024

The recently developed Prior-Data Fitted Networks (PFNs) have shown very promising results for applications in low-data regimes.

Interpretable Machine Learning for Survival Analysis

sophhan/imlsa_2024 15 Mar 2024

With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.

Rethinking Interpretability in the Era of Large Language Models

csinva/imodelsX 30 Jan 2024

We highlight two emerging research priorities for LLM interpretation: using LLMs to directly analyze new datasets and to generate interactive explanations.

PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for Symbolic Regression

wumin86/prunesymnet 25 Jan 2024

A greedy pruning algorithm is proposed to prune the network into a subnetwork while preserving the accuracy of the data fit.

Air Quality Forecasting Using Machine Learning: A Global Perspective with Relevance to Low-Resource Settings

dechrist2021/mulomba 9 Jan 2024

The evaluation of several machine learning models demonstrates the effectiveness of the Random Forest algorithm in generating reliable predictions, particularly when the task is framed as classification rather than regression; this reframing improves the model's generalizability by 42%, yielding cross-validation scores of 0.38 for regression and 0.89 for classification.

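The reframing described above, replacing a continuous air-quality target with discrete categories before training a classifier, can be sketched in a few lines. The thresholds and labels below are illustrative assumptions, not the paper's actual bins:

```python
# Illustrative AQI-style bins (hypothetical thresholds, not the paper's).
BINS = [
    (50.0, "good"),
    (100.0, "moderate"),
    (150.0, "unhealthy"),
]

def discretize(pm25):
    """Map a continuous PM2.5 reading to a class label for classification."""
    for upper, label in BINS:
        if pm25 <= upper:
            return label
    return "hazardous"

readings = [12.0, 80.5, 140.0, 200.0]
print([discretize(r) for r in readings])
# -> ['good', 'moderate', 'unhealthy', 'hazardous']
```

A classifier trained on such labels predicts a coarser but often more stable target than a regressor fitting raw concentrations.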

Q-SENN: Quantized Self-Explaining Neural Networks

thomasnorr/q-senn 21 Dec 2023

Explanations in Computer Vision are often desired, but most Deep Neural Networks can only provide saliency maps with questionable faithfulness.

Perceptual Musical Features for Interpretable Audio Tagging

vaslyb/perceptible-music-tagging 18 Dec 2023

In the age of music streaming platforms, the task of automatically tagging music audio has garnered significant attention, driving researchers to devise methods aimed at enhancing performance metrics on standard datasets.

GFN-SR: Symbolic Regression with Generative Flow Networks

listar2000/gfn-sr 1 Dec 2023

Symbolic regression (SR) is an area of interpretable machine learning that aims to identify mathematical expressions, often composed of simple functions, that best fit a given set of covariates $X$ and response $y$.

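The SR objective described above, searching a space of candidate expressions for the one that best fits $(X, y)$, can be illustrated with a minimal brute-force sketch. This is not GFN-SR itself (which samples expressions compositionally with a generative flow network); the hand-picked candidate set is an assumption for illustration:

```python
import math

# A tiny hand-picked search space of simple expressions.
# Real SR systems build expressions compositionally from operators instead.
CANDIDATES = {
    "x^2": lambda x: x * x,
    "2x + 1": lambda x: 2 * x + 1,
    "sin(x)": math.sin,
    "exp(x)": math.exp,
}

def mse(f, xs, ys):
    """Mean squared error of expression f on data (xs, ys)."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def best_expression(xs, ys):
    """Return the candidate expression with the lowest fitting error."""
    return min(CANDIDATES, key=lambda name: mse(CANDIDATES[name], xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]          # data generated by y = x^2
print(best_expression(xs, ys))    # -> x^2
```

The appeal of SR for interpretability is that the fitted model is itself a readable formula rather than a black-box function.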

Modelling wildland fire burn severity in California using a spatial Super Learner approach

Nicholas-Simafranca/Super_Learner_Wild_Fire 25 Nov 2023

We develop a machine learning model to predict post-fire burn severity using pre-fire remotely sensed data.

Neural Network Pruning by Gradient Descent

3riccc/neural_pruning 21 Nov 2023

The rapid increase in the parameters of deep learning models has led to significant costs, challenging computational efficiency and model interpretability.
