1 code implementation • 29 Oct 2019 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular data), are model-agnostic and are post-hoc (i.e., can be retrofitted).
no code implementations • 9 Aug 2019 • Alexander Hepburn, Valero Laparra, Ryan McConville, Raul Santos-Rodriguez
While an important part of the evaluation of the generated images usually involves visual inspection, the inclusion of human perception as a factor in the training process is often overlooked.
no code implementations • 28 Oct 2019 • Alexander Hepburn, Valero Laparra, Jesús Malo, Ryan McConville, Raul Santos-Rodriguez
Traditionally, the vision community has devised algorithms to estimate the distance between an original image and images that have been subject to perturbations.
no code implementations • 22 Feb 2021 • Alexander Hepburn, Raul Santos-Rodriguez
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
no code implementations • ICLR 2022 • Alexander Hepburn, Valero Laparra, Raul Santos-Rodriguez, Johannes Ballé, Jesús Malo
Since machine learning relies on the statistics of training data as well, the above connection has interesting implications when using perceptual distances (which mimic the behavior of the human visual system) as a loss function.
no code implementations • 8 Jun 2022 • Valero Laparra, Alexander Hepburn, J. Emmanuel Johnson, Jesús Malo
Here we present the \emph{Convolutional RBIG}: an extension that alleviates this issue by imposing that the rotation in RBIG is a convolution.
no code implementations • 8 Aug 2022 • Ricardo Kleinlein, Alexander Hepburn, Raúl Santos-Rodríguez, Fernando Fernández-Martínez
By training a simple, more interpretable model to locally approximate the decision boundary of a non-interpretable system, we can estimate the relative importance of the input features on the prediction.
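The local-approximation idea described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the names `black_box` and `local_importance`, the toy quadratic boundary, and the Gaussian kernel width are all hypothetical choices.

```python
import math
import random

random.seed(0)

def black_box(x):
    # Hypothetical stand-in for a non-interpretable system:
    # a quadratic decision boundary over two features.
    return 1.0 if x[0] ** 2 + 0.5 * x[1] > 1.0 else 0.0

def local_importance(x, n_samples=2000, width=0.5):
    """Estimate feature importances by fitting a simple weighted
    linear model to the black box around the instance x."""
    # 1. Sample perturbations in the neighbourhood of x.
    samples = [[xi + random.gauss(0, width) for xi in x]
               for _ in range(n_samples)]
    labels = [black_box(s) for s in samples]
    # 2. Weight each sample by proximity to x (Gaussian kernel).
    weights = [math.exp(-sum((si - xi) ** 2 for si, xi in zip(s, x))
                        / (2 * width ** 2))
               for s in samples]
    # 3. Per-feature weighted least-squares slope = local importance.
    wsum = sum(weights)
    importances = []
    for j in range(len(x)):
        xm = sum(w * s[j] for w, s in zip(weights, samples)) / wsum
        ym = sum(w * yl for w, yl in zip(weights, labels)) / wsum
        cov = sum(w * (s[j] - xm) * (yl - ym)
                  for w, s, yl in zip(weights, samples, labels))
        var = sum(w * (s[j] - xm) ** 2 for w, s in zip(weights, samples))
        importances.append(cov / var)
    return importances

imp = local_importance([1.0, 0.5])
```

Near the point `[1.0, 0.5]` the toy boundary is much more sensitive to the first feature, so the surrogate assigns it the larger importance.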
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, Peter Flach
Predictive systems, in particular machine learning algorithms, can take important, and sometimes legally binding, decisions about our everyday life.
no code implementations • 19 Jan 2023 • Enrico Werner, Jeffrey N. Clark, Ranjeet S. Bhamber, Michael Ambler, Christopher P. Bourdeaux, Alexander Hepburn, Christopher J. McWilliams, Raul Santos-Rodriguez
We present a pipeline in which unsupervised machine learning techniques are used to automatically identify subtypes of hospital patients admitted between 2017 and 2021 in a large UK teaching hospital.
no code implementations • 17 Mar 2023 • Alexander Hepburn, Valero Laparra, Raúl Santos-Rodriguez, Jesús Malo
Moreover, the direct evaluation of the hypothesis was limited by the inability of the classical image models to deliver accurate estimates of the probability.
no code implementations • 19 May 2023 • Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero Laparra, Jesus Malo
In this study, we investigate the feasibility of utilizing state-of-the-art image perceptual metrics for evaluating audio signals by representing them as spectrograms.
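The spectrogram-as-image idea can be sketched in a few lines. This is a hedged illustration, not the study's pipeline: plain MSE stands in for a state-of-the-art image perceptual metric, and the frame size, hop size and test signals are arbitrary.

```python
import cmath
import math

def spectrogram(signal, frame=32, hop=16):
    """Magnitude spectrogram via a windowed DFT, one row per frame."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        # Hann window to reduce spectral leakage.
        windowed = [s * (0.5 - 0.5 * math.cos(2 * math.pi * i / frame))
                    for i, s in enumerate(signal[start:start + frame])]
        spectrum = [abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / frame)
                            for n in range(frame)))
                    for k in range(frame // 2)]
        frames.append(spectrum)
    return frames

def image_mse(a, b):
    # Stand-in for an image quality metric applied to the spectrogram
    # "image"; a perceptual metric would be substituted here.
    return sum((x - y) ** 2 for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (len(a) * len(a[0]))

tone = [math.sin(2 * math.pi * 0.1 * t) for t in range(128)]
noisy = [s + 0.1 * math.sin(2 * math.pi * 0.37 * t)
         for t, s in enumerate(tone)]
score = image_mse(spectrogram(tone), spectrogram(noisy))
```

The distorted signal yields a strictly positive score, while comparing a spectrogram with itself yields zero, as any distance-like metric should.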
no code implementations • 6 Dec 2023 • Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero Laparra, Jesus Malo
Perceptual metrics are traditionally used to evaluate the quality of natural signals, such as images and audio.
no code implementations • 15 Mar 2024 • Alexander Hepburn, Raul Santos-Rodriguez, Javier Portilla
The two-alternative forced choice (2AFC) experimental setup is popular in the visual perception literature, where practitioners aim to understand how human observers perceive distances within triplets that consist of a reference image and two distorted versions of that image.
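The 2AFC setup above can be sketched as a scoring loop: a candidate distance metric "chooses" the distorted image it judges closer to the reference, and its accuracy is the fraction of triplets where it agrees with human observers. The triplets, labels and MSE metric below are synthetic and purely illustrative.

```python
def mse(x, y):
    # Toy distance metric; a perceptual distance would be used in practice.
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def two_afc_accuracy(triplets, metric):
    """Fraction of 2AFC triplets where the metric matches the human choice."""
    correct = 0
    for ref, dist0, dist1, human_choice in triplets:
        # The metric "chooses" whichever distorted version is closer.
        prediction = 0 if metric(ref, dist0) < metric(ref, dist1) else 1
        correct += int(prediction == human_choice)
    return correct / len(triplets)

ref = [0.0, 1.0, 2.0, 3.0]
triplets = [
    # (reference, distortion A, distortion B, human choice 0/1)
    (ref, [0.1, 1.1, 2.1, 3.1], [0.5, 1.5, 2.5, 3.5], 0),  # mild vs strong noise
    (ref, [1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 2.1, 3.0], 1),  # shift vs tiny change
]
acc = two_afc_accuracy(triplets, mse)  # → 1.0 on this synthetic data
```

On these hand-built triplets MSE happens to agree with both "human" labels; the interest in the literature lies in metrics that keep agreeing when MSE fails.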
1 code implementation • 28 Mar 2024 • Jonathan Erskine, Matt Clifford, Alexander Hepburn, Raúl Santos-Rodríguez
Human-Computer Interaction has been shown to lead to improvements in machine learning systems by boosting model performance, accelerating learning and building user confidence.