no code implementations • 16 May 2023 • Lars Henry Berge Olsen, Ingrid Kristine Glad, Martin Jullum, Kjersti Aas
The method classes use either Monte Carlo integration or regression to model the conditional expectations.
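As a hedged illustration of the Monte Carlo route (using a toy linear model and an assumed bivariate Gaussian dependence, not the paper's actual models), the conditional expectation for a single-feature coalition can be estimated by sampling the unobserved feature from its conditional distribution and averaging the model output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for illustration only: a linear model f and a standard
# bivariate Gaussian feature distribution with assumed correlation rho.
def f(x):
    return x[..., 0] + 2.0 * x[..., 1]

rho = 0.8

def v(x1_obs, n_samples=10_000):
    """Monte Carlo estimate of v({x1}) = E[f(x) | x1 = x1_obs].

    For standard bivariate Gaussian features, x2 | x1 ~ N(rho * x1, 1 - rho^2),
    so we draw x2 from that conditional and average f over the draws."""
    x2 = rng.normal(rho * x1_obs, np.sqrt(1 - rho**2), size=n_samples)
    x1 = np.full(n_samples, x1_obs)
    return f(np.stack([x1, x2], axis=-1)).mean()

est = v(1.0)                  # Monte Carlo estimate
exact = 1.0 + 2.0 * rho       # closed form here: E[f | x1 = 1] = 1 + 2*rho
```

In this toy setup the estimate can be checked against the closed-form conditional expectation, which is exactly what the Monte Carlo machinery replaces when the model and dependence structure are not analytically tractable.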
no code implementations • 26 Oct 2022 • Rogelio A. Mancisidor, Kjersti Aas
To address this limitation, this research introduces the Conditional Multimodal Discriminative (CMMD) model, which learns multimodal representations that embed information from accounting, market, and textual data modalities.
1 code implementation • 26 Nov 2021 • Lars Henry Berge Olsen, Ingrid Kristine Glad, Martin Jullum, Kjersti Aas
Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models.
1 code implementation • 18 Nov 2021 • Annabelle Redelmeier, Martin Jullum, Kjersti Aas, Anders Løland
We introduce MCCE: Monte Carlo sampling of valid and realistic Counterfactual Explanations for tabular data, a novel counterfactual explanation method that generates on-manifold, actionable and valid counterfactuals by modeling the joint distribution of the mutable features given the immutable features and the decision.
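The core idea can be sketched crudely in a few lines. The snippet below uses plain empirical resampling conditional on the immutable features, where MCCE models the conditional distribution properly; the data, the stand-in classifier, and the `counterfactuals` helper are all hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: column 0 is immutable (e.g. a group indicator),
# column 1 is mutable (e.g. income).
X = np.column_stack([rng.integers(0, 2, 500), rng.normal(0.0, 1.0, 500)])

def predict(X):
    """Stand-in classifier: class 1 when the mutable feature exceeds 0.5."""
    return (X[:, 1] > 0.5).astype(int)

def counterfactuals(x, n=100):
    """Crude sketch of the MCCE recipe: (1) sample candidates from the
    distribution of the mutable features conditional on the immutable
    ones (here approximated by resampling rows that share the immutable
    value), then (2) keep only candidates that achieve the desired
    prediction, i.e. the valid counterfactuals."""
    pool = X[X[:, 0] == x[0]]                   # condition on immutables
    cand = pool[rng.integers(0, len(pool), n)]  # Monte Carlo candidates
    return cand[predict(cand) == 1]             # validity filter

x0 = np.array([1.0, -0.3])      # factual instance, predicted class 0
cfs = counterfactuals(x0)       # on-manifold candidates that flip the prediction
```

Because candidates are drawn from (an approximation of) the data distribution rather than optimized freely, the returned counterfactuals stay on-manifold by construction; the validity filter then enforces the desired decision.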
1 code implementation • 9 Oct 2021 • Rogelio A. Mancisidor, Michael Kampffmeyer, Kjersti Aas, Robert Jenssen
Deep generative models with latent variables have been used lately to learn joint representations and generative processes from multi-modal data.
no code implementations • 23 Jun 2021 • Martin Jullum, Annabelle Redelmeier, Kjersti Aas
The main drawback of Shapley values, however, is that their computational complexity grows exponentially in the number of input features, making them infeasible in many real-world situations with hundreds or thousands of features.
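The exponential growth is easy to make concrete: the exact Shapley value for M features requires a model evaluation for every feature subset (coalition), of which there are 2^M. A minimal count:

```python
from math import comb

def n_coalitions(M):
    """Number of feature coalitions for M features: the sum over all
    subset sizes s of C(M, s), which equals 2**M."""
    return sum(comb(M, s) for s in range(M + 1))

# 10 features: 1024 coalitions is trivial.
# 50 features: 2**50 ~ 1.1e15 coalitions is out of reach exactly,
# which is why approximation strategies are needed.
```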
no code implementations • 12 Feb 2021 • Kjersti Aas, Thomas Nagler, Martin Jullum, Anders Løland
In this paper we propose two new approaches for modelling the dependence between the features.
no code implementations • 2 Jul 2020 • Annabelle Redelmeier, Martin Jullum, Kjersti Aas
It is becoming increasingly important to explain complex, black-box machine learning models.
1 code implementation • 12 Apr 2019 • Rogelio A. Mancisidor, Michael Kampffmeyer, Kjersti Aas, Robert Jenssen
Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications.
1 code implementation • 25 Mar 2019 • Kjersti Aas, Martin Jullum, Anders Løland
In this paper, we extend the Kernel SHAP method to handle dependent features.
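Kernel SHAP poses Shapley value estimation as a weighted least-squares problem over sampled coalitions, using the Shapley kernel weights. As a small sketch of that building block (the weight formula itself, not the paper's dependence-aware extension):

```python
from math import comb

def shapley_kernel_weight(M, s):
    """Shapley kernel weight for a coalition of size s out of M features,
    valid for 0 < s < M (the empty and full coalitions are handled as
    constraints / infinite weight in Kernel SHAP):

        w(s) = (M - 1) / (C(M, s) * s * (M - s))
    """
    return (M - 1) / (comb(M, s) * s * (M - s))
```

The weight is symmetric in s and M - s, so a coalition and its complement are weighted equally; small and large coalitions receive the most weight, reflecting that they are the most informative about individual feature contributions.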
no code implementations • 14 Mar 2019 • Rogelio A. Mancisidor, Michael Kampffmeyer, Kjersti Aas, Robert Jenssen
We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness.
no code implementations • 7 Jun 2018 • Rogelio Andrade Mancisidor, Michael Kampffmeyer, Kjersti Aas, Robert Jenssen
We use the VAE and show that, by transforming the input data into a meaningful representation, it is possible to steer configurations in its latent space.