no code implementations • 27 Jul 2023 • Oliver Cobb, Arnaud Van Looveren
There is a growing awareness of the harmful effects of distribution shift on the performance of deployed machine learning models.
no code implementations • 26 Oct 2022 • Sherif Akoush, Andrei Paleyes, Arnaud Van Looveren, Clive Cox
Inference is a significant part of ML software infrastructure.
1 code implementation • 16 Mar 2022 • Oliver Cobb, Arnaud Van Looveren
Drift detectors test for evidence that the distribution underlying recent deployment data differs from the distribution underlying the historical reference data.
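As a minimal illustration of the generic two-sample testing idea (not this paper's specific method), the sketch below runs a feature-wise Kolmogorov-Smirnov test with a Bonferroni correction; all names and data are illustrative.

```python
# Minimal sketch of drift detection via a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

def detect_drift(x_ref, x_test, alpha=0.05):
    """Flag drift if any feature's KS p-value falls below a
    Bonferroni-corrected threshold."""
    n_features = x_ref.shape[1]
    p_vals = np.array([
        stats.ks_2samp(x_ref[:, i], x_test[:, i]).pvalue
        for i in range(n_features)
    ])
    threshold = alpha / n_features  # Bonferroni correction
    return bool((p_vals < threshold).any()), p_vals

rng = np.random.default_rng(0)
x_ref = rng.normal(size=(1000, 3))           # historical reference data
x_test = rng.normal(loc=0.5, size=(200, 3))  # shifted deployment data
drift, p_vals = detect_drift(x_ref, x_test)
print(f"drift detected: {drift}, p-values: {p_vals}")
```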
1 code implementation • 2 Aug 2021 • Oliver Cobb, Arnaud Van Looveren, Janis Klaise
Responding appropriately to detections from a sequential change detector requires knowing the rate at which false positives occur in the absence of change.
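One simulation-based way to obtain such knowledge is to bootstrap the detector's statistic on reference data and set the threshold at the desired quantile. The sketch below does this for a generic one-sided CUSUM detector; it is an illustration under those assumptions, not the paper's calibrated, memoryless procedure.

```python
# Sketch: calibrate a sequential detector's threshold by simulating its
# statistic under "no change", so false positives occur at a known rate.
import numpy as np

def cusum_max(stream, drift=0.5):
    """Running one-sided CUSUM statistic for an upward mean shift;
    returns the max over the stream."""
    s, s_max = 0.0, 0.0
    for x in stream:
        s = max(0.0, s + x - drift)
        s_max = max(s_max, s)
    return s_max

def calibrate_threshold(x_ref, window, fpr=0.05, n_sims=2000, seed=0):
    """Pick a threshold so that, absent change, the detector fires within
    `window` steps with probability ~`fpr` (estimated by bootstrap)."""
    rng = np.random.default_rng(seed)
    maxima = [
        cusum_max(rng.choice(x_ref, size=window, replace=True))
        for _ in range(n_sims)
    ]
    return float(np.quantile(maxima, 1 - fpr))

rng = np.random.default_rng(1)
x_ref = rng.normal(size=5000)                 # reference (no-change) data
tau = calibrate_threshold(x_ref, window=100)  # threshold for ~5% FPR per window
print(f"calibrated threshold: {tau:.3f}")
```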
1 code implementation • 4 Jun 2021 • Robert-Florian Samoilescu, Arnaud Van Looveren, Janis Klaise
Counterfactual instances are a powerful tool for gaining insight into automated decision processes: they describe the minimal changes to the input needed to alter the prediction towards a desired target.
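As a minimal sketch of the underlying objective (shown here as gradient-based search on a differentiable model, rather than the model-agnostic approach this line of work targets), the example below minimises a prediction loss towards the target class plus an L1 penalty keeping the counterfactual close to the original input; the logistic-regression model and all names are illustrative.

```python
# Sketch of gradient-based counterfactual search for a differentiable model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, lam=0.1, lr=0.1, steps=500):
    """Find x_cf near x with the model probability pushed towards class 1."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of (1 - p)^2 plus subgradient of lam * ||x_cf - x||_1
        grad = -2.0 * (1.0 - p) * p * (1.0 - p) * w
        grad += lam * np.sign(x_cf - x)
        x_cf -= lr * grad
    return x_cf

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x = rng.normal(size=5)
x_cf = counterfactual(x, w, b)
print(f"p(x) = {sigmoid(w @ x + b):.3f} -> p(x_cf) = {sigmoid(w @ x_cf + b):.3f}")
```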
no code implementations • 25 Jan 2021 • Arnaud Van Looveren, Janis Klaise, Giovanni Vacanti, Oliver Cobb
Counterfactual instances offer human-interpretable insight into the local behaviour of machine learning models.
1 code implementation • 13 Jul 2020 • Janis Klaise, Arnaud Van Looveren, Clive Cox, Giovanni Vacanti, Alexandru Coca
The machine learning lifecycle extends beyond the deployment stage.
1 code implementation • 21 Feb 2020 • Giovanni Vacanti, Arnaud Van Looveren
We present a novel adversarial detection and correction method for machine learning classifiers. The detector consists of an autoencoder trained with a custom loss function based on the Kullback-Leibler divergence between the classifier predictions on the original and reconstructed instances. The method is unsupervised, easy to train and does not require any knowledge about the underlying attack.
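A simplified sketch of this setup, assuming a PyTorch implementation with toy model sizes (the full method adds further details, such as the correction mechanism):

```python
# Sketch: train an autoencoder so the classifier's prediction distribution
# on the reconstruction matches that on the input (KL divergence loss);
# at test time a large divergence flags a potential attack.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))  # autoencoder

def kl_score(x):
    """KL(classifier(x) || classifier(ae(x))) per instance."""
    p = F.softmax(classifier(x), dim=-1)              # preds on original
    log_q = F.log_softmax(classifier(ae(x)), dim=-1)  # preds on reconstruction
    return (p * (p.clamp_min(1e-12).log() - log_q)).sum(-1)

# Training: the classifier stays frozen, only the autoencoder is updated.
for param in classifier.parameters():
    param.requires_grad_(False)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(128, 20)  # stand-in for batches of normal training data
    loss = kl_score(x).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: flag instances whose score exceeds a threshold set on clean data.
x_test = torch.randn(32, 20)
scores = kl_score(x_test)
adversarial = scores > scores.quantile(0.95)  # placeholder threshold
```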
1 code implementation • 3 Jul 2019 • Arnaud Van Looveren, Janis Klaise
We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes.
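A minimal sketch of the prototype idea, assuming a differentiable logistic-regression model and an identity encoder for brevity (the paper defines prototypes via an encoder or k-d trees): an extra loss term pulls the counterfactual towards the mean of target-class instances, guiding the search towards in-distribution solutions.

```python
# Sketch: prototype-guided counterfactual search. On top of the prediction
# and distance losses, a prototype term pulls the counterfactual towards
# the target class prototype (here the mean of target-class instances).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def proto_counterfactual(x, w, b, proto, lam=0.1, theta=0.05, lr=0.1, steps=500):
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        grad = -2.0 * (1.0 - p) * p * (1.0 - p) * w  # prediction loss
        grad += lam * np.sign(x_cf - x)              # L1 distance loss
        grad += 2.0 * theta * (x_cf - proto)         # prototype loss
        x_cf -= lr * grad
    return x_cf

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x_class1 = rng.normal(loc=2.0, size=(500, 5))  # training data of target class
proto = x_class1.mean(axis=0)                  # target class prototype
x = rng.normal(size=5)
x_cf = proto_counterfactual(x, w, b, proto)
print(f"p(x_cf) = {sigmoid(w @ x_cf + b):.3f}")
```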