Search Results for author: Alexander Hepburn

Found 9 papers, 1 paper with code

What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components

no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.

Explanation Generation

Sampling Based On Natural Image Statistics Improves Local Surrogate Explainers

no code implementations • 8 Aug 2022 • Ricardo Kleinlein, Alexander Hepburn, Raúl Santos-Rodríguez, Fernando Fernández-Martínez

By training a simple, more interpretable model to locally approximate the decision boundary of a non-interpretable system, we can estimate the relative importance of the input features on the prediction.
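
The mechanism the abstract describes admits a compact illustration. Below is a minimal sketch of a local surrogate explainer in the LIME style, assuming a scikit-learn black box; the isotropic Gaussian neighbourhood sampler is the baseline that the paper proposes to replace with sampling based on natural image statistics. All names and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy black box trained on synthetic data, standing in for the
# non-interpretable system being explained.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 1.0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(instance, n_samples=1000, scale=0.5):
    """Fit an interpretable linear surrogate around one instance."""
    # Sample a neighbourhood around the instance. This isotropic Gaussian
    # is the baseline; the paper argues for sampling that matches
    # natural image statistics instead.
    neighbourhood = instance + rng.normal(scale=scale, size=(n_samples, instance.size))
    targets = black_box.predict_proba(neighbourhood)[:, 1]
    # Weight samples by proximity so the surrogate stays local.
    distances = np.linalg.norm(neighbourhood - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, targets, sample_weight=weights)
    return surrogate.coef_  # local feature importances

print(explain_locally(X[0]))
```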

Orthonormal Convolutions for the Rotation Based Iterative Gaussianization

no code implementations • 8 Jun 2022 • Valero Laparra, Alexander Hepburn, J. Emmanuel Johnson, Jesús Malo

Here we present Convolutional RBIG: an extension that alleviates this issue by imposing that the rotation in RBIG is a convolution.

Texture Synthesis
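
For readers unfamiliar with RBIG, a minimal sketch of one iteration follows: marginal Gaussianization of each dimension, then an orthonormal rotation that mixes dimensions. The rotation here is a plain PCA rotation; the paper's contribution is to constrain this step to be an orthonormal convolution, which this sketch does not implement.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
X = rng.exponential(size=(2000, 8))  # toy non-Gaussian data

def rbig_step(X):
    # 1) Marginal Gaussianization: push each dimension's empirical ranks
    #    through the inverse Gaussian CDF.
    n = X.shape[0]
    G = norm.ppf(rankdata(X, axis=0) / (n + 1))
    # 2) Orthonormal rotation (PCA here); Convolutional RBIG replaces this
    #    dense rotation with an orthonormal convolution.
    _, _, Vt = np.linalg.svd(G - G.mean(axis=0), full_matrices=False)
    return G @ Vt.T

Z = X
for _ in range(10):
    Z = rbig_step(Z)  # each iteration makes Z more jointly Gaussian
```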

On the relation between statistical learning and perceptual distances

no code implementations • ICLR 2022 • Alexander Hepburn, Valero Laparra, Raul Santos-Rodriguez, Johannes Ballé, Jesús Malo

Since machine learning relies on the statistics of training data as well, the above connection has interesting implications when using perceptual distances (which mimic the behavior of the human visual system) as a loss function.

BIG-bench Machine Learning • Perceptual Distance
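
The practical upshot is sketched below in PyTorch, with a toy multiscale metric standing in for the human-visual-system-inspired distances studied in the paper: the perceptual distance simply replaces pixel-wise MSE as the training loss.

```python
import torch.nn.functional as F

def multiscale_l2(x, y, n_scales=3):
    """Toy perceptual distance for NCHW image batches:
    MSE averaged over a low-pass pyramid."""
    loss = 0.0
    for _ in range(n_scales):
        loss = loss + F.mse_loss(x, y)
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return loss / n_scales

# Inside a training loop (model and batch are assumed to exist):
#   recon = model(batch)
#   loss = multiscale_l2(recon, batch)  # instead of F.mse_loss(recon, batch)
#   loss.backward()
```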

Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception

no code implementations • 22 Feb 2021 • Alexander Hepburn, Raul Santos-Rodriguez

We generate explanations for images in the ImageNet-C dataset and demonstrate how using a perceptual distance in the surrogate explainer creates more coherent explanations for the distorted and reference images.

Image Classification
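
In surrogate-explainer terms, the change is confined to the proximity kernel: sample weights are computed from a perceptual distance rather than the usual Euclidean one, so imperceptible distortions no longer skew the neighbourhood weighting. A hedged sketch, with dist_fn standing in for a perceptual metric:

```python
import numpy as np

def proximity_weights(instance, neighbourhood, dist_fn, bandwidth=1.0):
    """Exponential kernel over distances from the explained instance.
    Passing a perceptual metric as dist_fn (instead of Euclidean distance)
    keeps weights stable under distortions humans barely notice."""
    d = np.array([dist_fn(sample, instance) for sample in neighbourhood])
    return np.exp(-(d ** 2) / (2 * bandwidth ** 2))

# Example with Euclidean distance as the baseline dist_fn:
euclid = lambda a, b: np.linalg.norm(a - b)
print(proximity_weights(np.zeros(3), [np.ones(3), np.zeros(3)], euclid))
```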

bLIMEy: Surrogate Prediction Explanations Beyond LIME

1 code implementation • 29 Oct 2019 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted).

Explainable Artificial Intelligence
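
The paper's central point is that a surrogate explainer decomposes into interchangeable components: a data sampler, an interpretable representation, and an explanation generator. A minimal sketch of that composition follows; the names are illustrative, not the paper's code, which ships with the linked implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def surrogate_explain(instance, black_box_fn, sampler, to_interpretable, n=500):
    """Compose three bLIMEy-style modules into one explainer."""
    samples = sampler(instance, n)                         # data sampling
    Z = np.array([to_interpretable(s) for s in samples])   # representation
    preds = black_box_fn(samples)                          # query the black box
    return Ridge().fit(Z, preds).coef_                     # explanation generation

# Each module can be swapped independently, e.g. a Gaussian sampler:
sampler = lambda x, n: x + np.random.default_rng(0).normal(size=(n, x.size))
```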

PerceptNet: A Human Visual System Inspired Neural Network for Estimating Perceptual Distance

no code implementations • 28 Oct 2019 • Alexander Hepburn, Valero Laparra, Jesús Malo, Ryan McConville, Raul Santos-Rodriguez

Traditionally, the vision community has devised algorithms to estimate the distance between an original image and images that have been subject to perturbations.

Perceptual Distance
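
A hedged sketch of the general recipe: a small cascade of convolutions with divisive normalisation (the visual-system-inspired ingredient), with the distance taken between the two images' final representations. The layer shapes below are illustrative and untrained; they are not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivisiveNorm(nn.Module):
    """Normalise each activation by the pooled energy of its neighbours."""
    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        energy = F.avg_pool2d(x.abs(), kernel_size=3, stride=1, padding=1)
        return x / (self.eps + energy)

net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, padding=2), DivisiveNorm(),
    nn.Conv2d(8, 16, kernel_size=5, padding=2), DivisiveNorm(),
)

def perceptual_distance(img_a, img_b):
    """L2 distance in the network's response space (NCHW image tensors)."""
    with torch.no_grad():
        return torch.norm(net(img_a) - net(img_b)).item()

print(perceptual_distance(torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)))
```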

Enforcing Perceptual Consistency on Generative Adversarial Networks by Using the Normalised Laplacian Pyramid Distance

no code implementations • 9 Aug 2019 • Alexander Hepburn, Valero Laparra, Ryan McConville, Raul Santos-Rodriguez

While an important part of the evaluation of the generated images usually involves visual inspection, the inclusion of human perception as a factor in the training process is often overlooked.

Image Segmentation • Image-to-Image Translation • +2
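
A hedged sketch of how such a term enters training: a toy Laplacian-pyramid distance (a simplified stand-in for the Normalised Laplacian Pyramid Distance, which additionally normalises each band) is added to the generator's loss so that perceptually salient structure is preserved. The variable names in the usage note are illustrative.

```python
import torch.nn.functional as F

def laplacian_pyramid_distance(x, y, levels=3):
    """L1 distance between band-pass residuals of two NCHW image batches
    (assumes even spatial sizes); NLPD's divisive normalisation is omitted."""
    dist = 0.0
    for _ in range(levels):
        x_low, y_low = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
        # Band-pass residual: image minus its upsampled low-pass version.
        dist = dist + F.l1_loss(x - F.interpolate(x_low, scale_factor=2),
                                y - F.interpolate(y_low, scale_factor=2))
        x, y = x_low, y_low
    return dist

# In a generator update (names illustrative):
#   g_loss = adversarial_loss + lam * laplacian_pyramid_distance(fake, real)
```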
