Explainable Models
36 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Explainable Models
Libraries
Use these libraries to find models and implementations for Explainable Models.
Most implemented papers
xFraud: Explainable Fraud Transaction Detection
At online retail platforms, it is crucial to actively detect the risks of transactions to improve customer experience and minimize financial loss.
Learning Universal Shape Dictionary for Realtime Instance Segmentation
First, it learns a dictionary from a large collection of shape datasets, so that any shape can be decomposed into a linear combination of dictionary entries.
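A minimal sketch of that decomposition step, assuming shapes are flattened contour vectors and using a random matrix in place of the paper's learned dictionary; the coefficients come from an ordinary least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a dictionary of K basis shape vectors, each a
# flattened contour of N 2-D points (the paper learns this from data;
# here it is random purely for illustration).
N, K = 32, 8
dictionary = rng.standard_normal((2 * N, K))

# A query shape to encode (also synthetic here).
shape = rng.standard_normal(2 * N)

# Decompose the shape into a linear combination of dictionary entries
# by solving the least-squares problem  min_c ||D c - shape||^2.
coeffs, *_ = np.linalg.lstsq(dictionary, shape, rcond=None)

# Reconstruct and measure the approximation error.
reconstruction = dictionary @ coeffs
print("reconstruction error:", np.linalg.norm(shape - reconstruction))
```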
Relational Boosted Bandits
Contextual bandit algorithms have become essential in real-world user interaction problems in recent years.
EXTRA: Explanation Ranking Datasets for Explainable Recommendation
To achieve a standard way of evaluating recommendation explanations, we provide three benchmark datasets for EXplanaTion RAnking (denoted as EXTRA), on which explainability can be measured by ranking-oriented metrics.
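As a rough illustration of ranking-oriented evaluation, the sketch below computes NDCG@k over a hypothetical explanation ranking; the IDs and relevance judgments are invented, and EXTRA's exact metric suite may differ.

```python
import math

def ndcg_at_k(ranked_ids, relevant_ids, k):
    """Standard NDCG@k with binary relevance: ranked_ids is the model's
    explanation ranking, relevant_ids the ground-truth explanations."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_ids[:k])
              if item in relevant_ids)
    ideal_hits = min(len(relevant_ids), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: explanation IDs ranked by a recommender.
ranking = ["e3", "e1", "e7", "e2", "e5"]
ground_truth = {"e1", "e2"}
print(ndcg_at_k(ranking, ground_truth, k=5))
```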
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning
On the MIMIC-III and Henan-Renmin EHR datasets, we report a detection accuracy of 77% against the Longitudinal Adversarial Attack.
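A toy sketch of attribution-based adversarial detection, assuming a linear victim model (where gradient-times-input attributions reduce to w * x), an FGSM-style perturbation, and fully synthetic data; none of this reproduces the paper's longitudinal attack or its EHR features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "victim" classifier on synthetic features.
X = rng.standard_normal((500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X, y)

# Crude adversarial perturbation: step against the class score
# (an FGSM-style sketch, not the paper's longitudinal attack).
eps = 0.5
X_adv = X - eps * np.sign(victim.coef_) * np.where(y[:, None] == 1, 1, -1)

# Attribution features: for a linear model, gradient x input = w * x.
attr_clean = victim.coef_ * X
attr_adv = victim.coef_ * X_adv

# Train a detector on attribution patterns of clean vs adversarial inputs.
A = np.vstack([attr_clean, attr_adv])
labels = np.array([0] * len(X) + [1] * len(X_adv))
detector = LogisticRegression().fit(A, labels)
# Training-set accuracy only: optimistic, shown just to close the loop.
print("detector accuracy:", detector.score(A, labels))
```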
Improved CNN-based Learning of Interpolation Filters for Low-Complexity Inter Prediction in Video Coding
The approach requires training only a single neural network, from which a full quarter-pixel interpolation filter set is derived; the network is easily interpretable thanks to its linear structure.
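Since a network without nonlinearities composes into a single linear map, its unit-impulse response is the equivalent FIR filter. The sketch below demonstrates this on a made-up 1-D two-layer example; the paper's filters are 2-D and trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's network: two 1-D convolution
# layers with no nonlinearity between them.
k1 = rng.standard_normal(3)
k2 = rng.standard_normal(3)

def linear_net(x):
    return np.convolve(np.convolve(x, k1, mode="full"), k2, mode="full")

# Because the network is purely linear, its unit-impulse response is the
# equivalent FIR filter: this is how a fixed interpolation filter set can
# be read off an interpretable linear CNN.
impulse = np.array([1.0])
derived_filter = linear_net(impulse)  # equals k1 convolved with k2

# Sanity check: the derived filter reproduces the network's output.
x = rng.standard_normal(16)
assert np.allclose(linear_net(x), np.convolve(x, derived_filter))
print("derived filter taps:", derived_filter)
```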
When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Reliable deployment of machine learning models such as neural networks continues to be challenging due to several limitations.
A Framework for Learning Ante-hoc Explainable Models via Concepts
To the best of our knowledge, we are the first ante-hoc explanation generation method to show results with a large-scale dataset such as ImageNet.
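A minimal concept-bottleneck-style sketch of ante-hoc explanation via concepts, with invented layer sizes and no training loop; it is not the paper's exact architecture, only the structural idea that the label is predicted solely from interpretable concept activations.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Illustrative only: the label is predicted ONLY from an
    intermediate vector of named concepts, so the explanation is
    built into the model (ante-hoc) rather than computed afterwards."""
    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        self.to_concepts = nn.Linear(in_dim, n_concepts)
        self.to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.to_concepts(x))  # supervised concepts
        logits = self.to_label(concepts)
        return logits, concepts

model = ConceptBottleneck(in_dim=64, n_concepts=8, n_classes=10)
logits, concepts = model(torch.randn(4, 64))
print(concepts.shape)  # per-example concept activations act as explanation
```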
Global and Local Interpretation of black-box Machine Learning models to determine prognostic factors from early COVID-19 data
We explore a recent technique called symbolic metamodeling to find the mathematical expressions of machine learning models trained on early COVID-19 data.
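The sketch below swaps the paper's symbolic metamodeling (which builds on Meijer G-functions) for a much simpler stand-in, a degree-2 polynomial surrogate fitted to a black box's predicted probabilities on synthetic data, to show how an explicit expression can be read off a fitted model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Black-box model on synthetic data (a stand-in for a prognosis model).
X = rng.standard_normal((400, 3))
y = (X[:, 0] ** 2 + X[:, 1] > 1).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global interpretation via a symbolic surrogate: fit a degree-2
# polynomial to the black box's predicted probabilities (a simpler
# stand-in for the paper's symbolic metamodeling).
probs = black_box.predict_proba(X)[:, 1]
poly = PolynomialFeatures(degree=2)
Z = poly.fit_transform(X)
surrogate = LinearRegression().fit(Z, probs)

# The fitted coefficients give an explicit mathematical expression.
for name, coef in zip(poly.get_feature_names_out(), surrogate.coef_):
    print(f"{coef:+.3f} * {name}")
```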
Consistent Explanations by Contrastive Learning
We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are more consistent with human annotations while still achieving comparable classification accuracy.
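A condensed sketch of the consistency idea, assuming a tiny CNN, plain Grad-CAM, and horizontal flipping as the augmentation; CGC's actual training objective and backbone are more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Tiny stand-in for the backbone used in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(8, 10)

    def forward(self, x):
        fmap = self.features(x)                    # kept for Grad-CAM
        logits = self.head(fmap.mean(dim=(2, 3)))  # global average pool
        return logits, fmap

def grad_cam(model, x, target_class):
    """Plain Grad-CAM: weight feature maps by the spatially pooled
    gradients of the target logit, then apply ReLU and normalize."""
    logits, fmap = model(x)
    fmap.retain_grad()
    logits[:, target_class].sum().backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * fmap).sum(dim=1))
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

model = TinyCNN()
x = torch.randn(2, 3, 32, 32)

# Consistency term: the CAM of a flipped image should match the flipped
# CAM of the original image (horizontal flip stands in for the paper's
# augmentations).
cam = grad_cam(model, x, target_class=0)
model.zero_grad()
cam_flipped_input = grad_cam(model, torch.flip(x, dims=[3]), target_class=0)
consistency_loss = F.mse_loss(cam_flipped_input, torch.flip(cam, dims=[2]))
print(consistency_loss.item())
```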