Search Results for author: Peter Flach

Found 31 papers, 13 papers with code

Shapley Sets: Feature Attribution via Recursive Function Decomposition

no code implementations • 4 Jul 2023 • Torty Sivill, Peter Flach

Despite their ubiquitous use, Shapley value feature attributions can be misleading due to feature interaction in both model and data.

Fairness
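
For reference, the standard Shapley value splits a model's output among features by averaging each feature's marginal contribution over all subsets; this is the classical definition the paper builds on, not its recursive decomposition:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)$$

When features interact, their joint contribution gets smeared across the individual $\phi_i$, which is the distortion the paper targets by attributing value to sets of features instead.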

MIDI-Draw: Sketching to Control Melody Generation

no code implementations • 19 May 2023 • Tashi Namgyal, Peter Flach, Raul Santos-Rodriguez

We describe a proof-of-principle implementation of a system for drawing melodies that abstracts away from a note-level input representation via melodic contours.

What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components

no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.

Explanation Generation

Simply Logical -- Intelligent Reasoning by Example (Fully Interactive Online Edition)

1 code implementation • 14 Aug 2022 • Peter Flach, Kacper Sokol

"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first published by John Wiley in 1994.

The Weak Supervision Landscape

no code implementations • 30 Mar 2022 • Rafael Poyiadzi, Daniel Bacaicoa-Barber, Jesus Cid-Sueiro, Miquel Perello-Nieto, Peter Flach, Raul Santos-Rodriguez

In this paper we propose a framework for categorising weak supervision settings with two aims: (1) helping the dataset owner or annotator navigate the available weak supervision options when prescribing an annotation process, and (2) describing existing annotations of a dataset to machine learning practitioners so that they can understand the implications for the learning process.

BIG-bench Machine Learning, Navigate

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence

no code implementations • 29 Dec 2021 • Kacper Sokol, Peter Flach

This approach allows us to define explainability as (logical) reasoning applied to transparent insights (into, possibly black-box, predictive systems) interpreted under background knowledge and placed within a specific context -- a process that engenders understanding in a selected group of explainees.

BIG-bench Machine Learning, Explainable artificial intelligence +3

Risk Sensitive Model-Based Reinforcement Learning using Uncertainty Guided Planning

no code implementations • 9 Nov 2021 • Stefan Radic Webster, Peter Flach

Identifying uncertainty and taking mitigating actions is crucial for safe and trustworthy reinforcement learning agents, especially when deployed in high-risk environments.

Model-based Reinforcement Learning, reinforcement-learning +1
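
A minimal sketch of how uncertainty-guided planning can work, assuming a hypothetical ensemble of learned dynamics models; the `rollout_return` method and parameter names are illustrative, not the paper's implementation:

```python
import numpy as np

def plan_with_uncertainty(ensemble, state, candidate_plans, risk_weight=1.0):
    """Rank action sequences by mean predicted return minus an
    ensemble-disagreement penalty (illustrative, not the paper's code)."""
    scores = []
    for plan in candidate_plans:
        # `rollout_return` is a hypothetical method of each learned
        # dynamics model that simulates the plan and sums rewards.
        returns = np.array([m.rollout_return(state, plan) for m in ensemble])
        # Disagreement across the ensemble proxies epistemic uncertainty,
        # so risky plans are penalised in proportion to it.
        scores.append(returns.mean() - risk_weight * returns.std())
    return candidate_plans[int(np.argmax(scores))]
```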

You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source

1 code implementation • 2 Jul 2021 • Kacper Sokol, Peter Flach

We offer a proof-of-concept workflow that composes a Jupyter Book (an online document), a Jupyter Notebook (a computational narrative) and reveal.js slides from a single markdown source file.

Management
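
One concrete way to realise the single-source idea is jupytext, which round-trips between markdown and notebook formats; a minimal sketch with placeholder file names (the paper's exact toolchain may differ):

```python
import jupytext

# Read the markdown source as a notebook object...
notebook = jupytext.read("source.md")
# ...and write it back out as an executable Jupyter notebook.
jupytext.write(notebook, "source.ipynb")
```

Jupyter Book and reveal.js can then render the document and the slides from that same source.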

Continual Density Ratio Estimation in an Online Setting

no code implementations • 9 Mar 2021 • Yu Chen, Song Liu, Tom Diethe, Peter Flach

To the best of our knowledge, there is no existing method that can evaluate generative models in continual learning without storing samples from the original distribution.

Continual Learning, Decision Making +1

Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning

no code implementations • 28 Sep 2020 • Yu Chen, Tom Diethe, Peter Flach

The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting.

Continual Learning, Metric Learning
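
As background, the episodic-memory mechanism such methods refine keeps a small buffer of past examples and replays them alongside new data; a generic reservoir-sampling sketch, not the proposed DRL loss itself:

```python
import random

class EpisodicMemory:
    """Reservoir-style buffer of past-task examples (illustrative)."""
    def __init__(self, capacity=1000):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over everything seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def replay(self, k):
        # Mix these into each new batch to counter catastrophic forgetting.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```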

Interpretable Representations in Explainable AI: From Theory to Practice

1 code implementation • 16 Aug 2020 • Kacper Sokol, Peter Flach

Interpretable representations are the backbone of many explainers that target black-box predictive systems based on artificial intelligence and machine learning algorithms.

Semi-Discriminative Representation Loss for Online Continual Learning

1 code implementation • 19 Jun 2020 • Yu Chen, Tom Diethe, Peter Flach

The use of episodic memory in continual learning has demonstrated effectiveness for alleviating catastrophic forgetting.

Continual Learning, Metric Learning

LIMEtree: Consistent and Faithful Surrogate Explanations of Multiple Classes

1 code implementation • 4 May 2020 • Kacper Sokol, Peter Flach

Explainable machine learning provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class.

counterfactual, Image Classification +3
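
A minimal sketch of the multi-class surrogate idea: fit one multi-output regression tree to the probabilities of every class at once, so the per-class explanations stay mutually consistent. This assumes a scikit-learn-style `predict_proba` and is illustrative, not the paper's implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_surrogate(black_box, instance, n_samples=1000, scale=0.5, max_depth=4):
    """Fit one multi-output regression tree to all class probabilities
    around a single instance (illustrative sketch)."""
    rng = np.random.default_rng(0)
    # Perturb the explained instance to probe the black box locally.
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    Y = black_box.predict_proba(X)  # one column per class
    # A single tree explains every class at once, avoiding the
    # one-class-at-a-time limitation of linear surrogates.
    return DecisionTreeRegressor(max_depth=max_depth).fit(X, Y)
```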

One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

no code implementations • 27 Jan 2020 • Kacper Sokol, Peter Flach

We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning.

BIG-bench Machine Learning, counterfactual +1

Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches

no code implementations • 11 Dec 2019 • Kacper Sokol, Peter Flach

When used as a Work Sheet, our taxonomy can guide the development of new explainability approaches by aiding in their critical evaluation along the five proposed dimensions.

Explainable artificial intelligence

bLIMEy: Surrogate Prediction Explanations Beyond LIME

1 code implementation • 29 Oct 2019 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted).

Explainable artificial intelligence
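
Such surrogate explainers decompose into three interchangeable steps: sample around the explained instance, weight the samples by proximity, and fit an interpretable model. A minimal linear-surrogate sketch under scikit-learn-style assumptions, with illustrative parameter choices:

```python
import numpy as np
from sklearn.linear_model import Ridge

def linear_surrogate(black_box, instance, n_samples=1000, scale=0.5,
                     kernel_width=1.0):
    rng = np.random.default_rng(0)
    # Step 1: data sampling around the explained instance.
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    y = black_box.predict_proba(X)[:, 1]  # probability of the explained class
    # Step 2: proximity weighting with an exponential kernel.
    distances = np.linalg.norm(X - instance, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # Step 3: explanation generation with a weighted linear model.
    model = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
    return model.coef_  # per-feature attributions
```

Swapping any one step, say the sampler or the interpretable model family, yields a differently behaved explainer, which is the modularity the paper analyses.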

Continual Density Ratio Estimation (CDRE): A new method for evaluating generative models in continual learning

no code implementations • 25 Sep 2019 • Yu Chen, Song Liu, Tom Diethe, Peter Flach

We propose Continual Density Ratio Estimation (CDRE), a new method that estimates density ratios between a target distribution of real samples and the distribution of samples generated by a model, even as the model changes over time and data from the target distribution becomes unavailable after a certain time point.

Continual Learning, Density Ratio Estimation
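
Schematically, the quantity of interest is the ratio between the real-data distribution $p$ and the model distribution $q_t$ at time $t$; chaining ratios of consecutive model snapshots removes the need to keep old real samples, since the identity below only ever compares neighbouring distributions (a schematic rendering of the idea, not the paper's exact estimator):

$$r_t(x) = \frac{p(x)}{q_t(x)} = \frac{p(x)}{q_0(x)} \prod_{s=1}^{t} \frac{q_{s-1}(x)}{q_s(x)}$$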

FACE: Feasible and Actionable Counterfactual Explanations

1 code implementation • 20 Sep 2019 • Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, Peter Flach

First, a counterfactual example generated by state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with a severe disability may be advised to do more sports).

counterfactual
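
The feasibility requirement can be made concrete by searching for a shortest path to the desired outcome through a graph built over actual data points, so every intermediate step stays in a populated region. A minimal networkx sketch; the simple distance-threshold edge weighting is a simplification relative to the paper:

```python
import numpy as np
import networkx as nx

def face_counterfactual(X, start_idx, target_mask, epsilon=1.0):
    """Walk from the query point to the nearest point with the desired
    outcome along edges between nearby data points (illustrative sketch)."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d < epsilon:  # only step between nearby, realistic points
                graph.add_edge(i, j, weight=d)
    targets = np.flatnonzero(target_mask)
    costs = {t: nx.shortest_path_length(graph, start_idx, t, weight="weight")
             for t in targets if nx.has_path(graph, start_idx, t)}
    if not costs:
        return None  # no feasible counterfactual under this graph
    best = min(costs, key=costs.get)
    return nx.shortest_path(graph, start_idx, best, weight="weight")
```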

FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency

3 code implementations • 11 Sep 2019 • Kacper Sokol, Raul Santos-Rodriguez, Peter Flach

Today, artificial intelligence systems driven by machine learning algorithms can be in a position to take important, and sometimes legally binding, decisions about our everyday lives.

BIG-bench Machine Learning, Fairness +1

HyperStream: a Workflow Engine for Streaming Data

1 code implementation • 7 Aug 2019 • Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perello-Nieto, Emma Tonkin, Peter Flach

This paper describes HyperStream, a large-scale, flexible and robust software package, written in the Python language, for processing streaming data with workflow creation capabilities.

BIG-bench Machine Learning
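
The core pattern of a streaming workflow engine is composing small tools over time-stamped streams; a generic generator-based sketch of that pattern (deliberately not HyperStream's API):

```python
def moving_average(stream, window=3):
    """A toy streaming 'tool': consumes (timestamp, value) pairs and
    emits (timestamp, mean of the last `window` values)."""
    recent = []
    for t, v in stream:
        recent.append(v)
        if len(recent) > window:
            recent.pop(0)
        yield t, sum(recent) / len(recent)

# Tools compose as ordinary iterators over a time-stamped stream.
readings = ((t, float(t % 5)) for t in range(10))
for t, avg in moving_average(readings):
    print(t, avg)
```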

Distribution Calibration for Regression

no code implementations • 15 May 2019 • Hao Song, Tom Diethe, Meelis Kull, Peter Flach

We are concerned with obtaining well-calibrated output distributions from regression models.

Gaussian Processes, regression
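
Calibration of predictive distributions can be probed with the probability integral transform (PIT): under a well-calibrated model, the CDF values of the observed targets are uniform on [0, 1]. A minimal sketch assuming Gaussian predictive distributions (a standard diagnostic, not necessarily the paper's procedure):

```python
import numpy as np
from scipy import stats

def pit_values(y_true, pred_mean, pred_std):
    """CDF of each observed target under its predicted Gaussian; for a
    well-calibrated model these values are uniform on [0, 1]."""
    return stats.norm.cdf(y_true, loc=pred_mean, scale=pred_std)

# Uniformity can be probed with, e.g., a Kolmogorov-Smirnov test.
pit = pit_values(np.array([1.2, 0.3]), np.array([1.0, 0.0]),
                 np.array([0.5, 0.5]))
ks_stat, p_value = stats.kstest(pit, "uniform")
```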

$\beta^3$-IRT: A New Item Response Model and its Applications

1 code implementation • 10 Mar 2019 • Yu Chen, Telmo Silva Filho, Ricardo B. C. Prudêncio, Tom Diethe, Peter Flach

Item Response Theory (IRT) aims to assess latent abilities of respondents based on the correctness of their answers in aptitude test items with different difficulty levels.
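
For reference, the classical two-parameter logistic IRT model ties the probability of respondent $i$ answering item $j$ correctly to ability $\theta_i$, difficulty $b_j$ and discrimination $a_j$; the $\beta^3$ model instead works with Beta-distributed responses on $(0, 1)$:

$$P(x_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}$$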

Non-Parametric Calibration of Probabilistic Regression

no code implementations • 20 Jun 2018 • Hao Song, Meelis Kull, Peter Flach

The task of calibration is to retrospectively adjust the outputs from a machine learning model to provide better probability estimates on the target variable.

General Classification, regression

Probabilistic Sensor Fusion for Ambient Assisted Living

no code implementations • 4 Feb 2017 • Tom Diethe, Niall Twomey, Meelis Kull, Peter Flach, Ian Craddock

There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home.

Activity Recognition, Sensor Fusion

Precision-Recall-Gain Curves: PR Analysis Done Right

1 code implementation • NeurIPS 2015 • Peter Flach, Meelis Kull

Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier's performance.

Binary Classification, Model Selection
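
The remedy is to measure precision and recall as gains over the always-positive baseline, whose precision and recall both equal the class prevalence $\pi$; the paper defines

$$\text{precG} = \frac{\text{prec} - \pi}{(1 - \pi)\,\text{prec}}, \qquad \text{recG} = \frac{\text{rec} - \pi}{(1 - \pi)\,\text{rec}},$$

which map the baseline to 0 and a perfect classifier to 1, giving a gain space in which curves can be interpolated and averaged linearly.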
