Search Results for author: Arnaud Van Looveren

Found 9 papers, 6 papers with code

Towards Practicable Sequential Shift Detectors

no code implementations • 27 Jul 2023 • Oliver Cobb, Arnaud Van Looveren

There is a growing awareness of the harmful effects of distribution shift on the performance of deployed machine learning models.

Context-Aware Drift Detection

1 code implementation • 16 Mar 2022 • Oliver Cobb, Arnaud Van Looveren

Two-sample tests of homogeneity are used to test for evidence that the distribution underlying recent deployment data differs from that underlying the historical reference data.
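
For illustration, a minimal sketch of the underlying two-sample testing idea (not the paper's context-aware method): per-feature Kolmogorov-Smirnov tests with a Bonferroni correction, comparing a reference set against a deployment batch. Function and variable names here are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def ks_drift_test(x_ref, x, p_val=0.05):
    """Per-feature two-sample Kolmogorov-Smirnov test with a
    Bonferroni correction across features.

    x_ref: historical reference data, shape (n_ref, n_features)
    x:     recent deployment data,    shape (n, n_features)
    """
    n_features = x_ref.shape[1]
    p_vals = np.array([
        stats.ks_2samp(x_ref[:, f], x[:, f]).pvalue
        for f in range(n_features)
    ])
    # Flag drift if any feature's p-value beats the corrected threshold.
    return bool((p_vals < p_val / n_features).any()), p_vals

rng = np.random.default_rng(0)
x_ref = rng.normal(0.0, 1.0, size=(1000, 5))   # reference data
x = rng.normal(0.5, 1.0, size=(200, 5))        # mean-shifted deployment batch
drift, p_vals = ks_drift_test(x_ref, x)
print(f"drift detected: {drift}")
```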

Sequential Multivariate Change Detection with Calibrated and Memoryless False Detection Rates

1 code implementation • 2 Aug 2021 • Oliver Cobb, Arnaud Van Looveren, Janis Klaise

Responding appropriately to the detections of a sequential change detector requires knowledge of the rate at which false positives occur in the absence of change.

Change Detection
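
As a rough illustration of the calibration problem (not the paper's method, which calibrates false detection rates over entire change-free runs), one can pick a detection threshold by simulating change-free windows bootstrapped from the reference data. All names below are illustrative.

```python
import numpy as np

def calibrate_threshold(x_ref, window, n_sims=10_000, target_fpr=0.05, seed=0):
    """Choose a threshold for a windowed mean-shift statistic so that
    roughly `target_fpr` of change-free windows (bootstrapped from the
    univariate reference data) trigger a false detection. This calibrates
    a marginal per-window rate, a simplification of the paper's run-level,
    memoryless guarantee."""
    rng = np.random.default_rng(seed)
    null_stats = np.array([
        abs(rng.choice(x_ref, size=window, replace=True).mean() - x_ref.mean())
        for _ in range(n_sims)
    ])
    return float(np.quantile(null_stats, 1 - target_fpr))

def detect(stream, x_ref, window, threshold):
    """Return the first time step at which the windowed statistic
    exceeds the calibrated threshold, or None if none does."""
    for t in range(window, len(stream) + 1):
        if abs(stream[t - window:t].mean() - x_ref.mean()) > threshold:
            return t
    return None
```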

Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning

1 code implementation • 4 Jun 2021 • Robert-Florian Samoilescu, Arnaud Van Looveren, Janis Klaise

Counterfactual instances are a powerful tool for obtaining insight into automated decision processes, describing the minimal changes in the input space needed to alter the prediction towards a desired target.

counterfactual • reinforcement-learning • +1
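
The paper trains a reinforcement-learning policy that amortizes this search across instances and needs no model gradients. For contrast, a minimal per-instance, gradient-based baseline (in the style of Wachter et al., assuming a differentiable classifier; names are illustrative) looks like this:

```python
import torch
import torch.nn.functional as F

def gradient_counterfactual(model, x, target, steps=500, lr=0.05, lam=0.1):
    """Per-instance gradient-based counterfactual search: minimise a
    prediction loss towards the target class plus an L1 distance penalty
    keeping the counterfactual close (and sparse) relative to x.
    `model` is a differentiable classifier returning logits;
    x has shape (1, n_features)."""
    y_target = torch.tensor([target])
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_cf), y_target) \
             + lam * torch.norm(x_cf - x, p=1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()
```

Because a trained policy only queries the model's outputs, the paper's approach remains applicable to non-differentiable models and avoids re-running a costly optimisation like the one above for every instance.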

Conditional Generative Models for Counterfactual Explanations

no code implementations • 25 Jan 2021 • Arnaud Van Looveren, Janis Klaise, Giovanni Vacanti, Oliver Cobb

Counterfactual instances offer human-interpretable insight into the local behaviour of machine learning models.

counterfactual • Time Series • +1

Adversarial Detection and Correction by Matching Prediction Distributions

1 code implementation • 21 Feb 2020 • Giovanni Vacanti, Arnaud Van Looveren

We present a novel adversarial detection and correction method for machine learning classifiers. The detector consists of an autoencoder trained with a custom loss function based on the Kullback-Leibler divergence between the classifier predictions on the original and reconstructed instances. The method is unsupervised, easy to train, and requires no knowledge of the underlying attack.
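
A simplified sketch of that training objective, assuming a PyTorch classifier and autoencoder (only the autoencoder's parameters are passed to the optimiser; the classifier stays frozen):

```python
import torch
import torch.nn.functional as F

def detector_loss(classifier, autoencoder, x):
    """Train the autoencoder to match the classifier's prediction
    distribution on the reconstruction to its distribution on the
    original input (simplified sketch of the paper's loss)."""
    with torch.no_grad():
        p_orig = F.softmax(classifier(x), dim=-1)
    log_p_rec = F.log_softmax(classifier(autoencoder(x)), dim=-1)
    # KL(p_orig || p_rec); F.kl_div expects log-probs first, probs second.
    return F.kl_div(log_p_rec, p_orig, reduction="batchmean")

def adversarial_score(classifier, autoencoder, x):
    """At test time, a large divergence flags a likely adversarial input;
    the reconstruction itself can serve as the corrected instance."""
    with torch.no_grad():
        p_orig = F.softmax(classifier(x), dim=-1)
        log_p_rec = F.log_softmax(classifier(autoencoder(x)), dim=-1)
        return F.kl_div(log_p_rec, p_orig, reduction="none").sum(-1)
```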

Interpretable Counterfactual Explanations Guided by Prototypes

1 code implementation • 3 Jul 2019 • Arnaud Van Looveren, Janis Klaise

We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes.

counterfactual
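
A minimal sketch of the prototype idea: an extra loss term pulls the candidate counterfactual's encoding towards the mean latent encoding of the target class, keeping the result in-distribution and speeding up the search. The encoder and the fixed target class are simplifying assumptions; the function name is illustrative.

```python
import torch

def prototype_loss(encoder, x_cf, x_train, y_train, target):
    """Extra term for a counterfactual search: pull the candidate's
    encoding towards the target-class prototype, i.e. the mean encoding
    of training instances of that class (simplified from the paper)."""
    with torch.no_grad():
        proto = encoder(x_train[y_train == target]).mean(dim=0)
    return torch.sum((encoder(x_cf) - proto) ** 2)
```

Added to the prediction and distance terms of a search like the gradient-based one sketched earlier, this guides candidates towards plausible instances of the target class rather than arbitrary boundary-crossing perturbations.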
