no code implementations • 4 Jul 2023 • Torty Sivill, Peter Flach
Despite their ubiquitous use, Shapley value feature attributions can be misleading due to feature interaction in both model and data.
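As a toy illustration of the point (not taken from the paper), the sketch below computes exact Shapley values for a two-feature AND-style model by enumerating feature orderings; the function name and the baseline-substitution scheme are my own simplifications. Both features receive an attribution of 0.5 even though neither contributes anything on its own, so the interaction is invisible in the attributions:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging marginal contributions
    over all feature orderings. Absent features are set to `baseline`."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]          # reveal feature i
            cur = f(z)
            phi[i] += (cur - prev) / len(perms)
            prev = cur
    return phi

# An AND-style model: the output exists only through the *interaction*
# of both features, yet each gets attribution 0.5.
f = lambda z: z[0] * z[1]
print(shapley_values(f, x=[1, 1], baseline=[0, 0]))  # [0.5, 0.5]
```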
no code implementations • 19 May 2023 • Tashi Namgyal, Peter Flach, Raul Santos-Rodriguez
We describe a proof-of-principle implementation of a system for drawing melodies that abstracts away from a note-level input representation via melodic contours.
no code implementations • 6 Feb 2023 • Taku Yamagata, Emma L. Tonkin, Benjamin Arana Sanchez, Ian Craddock, Miquel Perello Nieto, Raul Santos-Rodriguez, Weisong Yang, Peter Flach
Here we propose a method to model human biases on temporal annotations and argue for the use of soft labels.
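A minimal sketch of what a soft label is (my own illustration; the paper's bias model is more involved): instead of forcing a single hard class per time segment, the empirical distribution over annotators' votes is kept as the training target.

```python
from collections import Counter

def soft_label(annotations, classes):
    """Empirical class distribution from multiple (possibly noisy)
    annotations, used as a soft target instead of a single hard label."""
    counts = Counter(annotations)
    total = len(annotations)
    return [counts[c] / total for c in classes]

# Three annotators disagree on an activity segment:
soft_label(["walk", "walk", "run"], ["walk", "run"])  # [2/3, 1/3]
```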
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, Peter Flach
Predictive systems, in particular machine learning algorithms, can make important, and sometimes legally binding, decisions about our everyday lives.
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.
1 code implementation • 14 Aug 2022 • Peter Flach, Kacper Sokol
"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first published by John Wiley in 1994.
no code implementations • 30 Mar 2022 • Rafael Poyiadzi, Daniel Bacaicoa-Barber, Jesus Cid-Sueiro, Miquel Perello-Nieto, Peter Flach, Raul Santos-Rodriguez
In this paper we propose a framework for categorising weak supervision settings, with two aims: (1) helping the dataset owner or annotator navigate the available weak supervision options when prescribing an annotation process, and (2) describing existing annotations for a dataset to machine learning practitioners so that they can understand the implications for the learning process.
no code implementations • 29 Dec 2021 • Kacper Sokol, Peter Flach
This approach allows us to define explainability as (logical) reasoning applied to transparent insights (into, possibly black-box, predictive systems) interpreted under background knowledge and placed within a specific context -- a process that engenders understanding in a selected group of explainees.
no code implementations • 20 Dec 2021 • Telmo Silva Filho, Hao Song, Miquel Perello-Nieto, Raul Santos-Rodriguez, Meelis Kull, Peter Flach
This paper provides both an introduction to and a detailed overview of the principles and practice of classifier calibration.
no code implementations • 9 Nov 2021 • Stefan Radic Webster, Peter Flach
Identifying uncertainty and taking mitigating actions is crucial for safe and trustworthy reinforcement learning agents, especially when deployed in high-risk environments.
1 code implementation • 2 Jul 2021 • Kacper Sokol, Peter Flach
We offer a proof-of-concept workflow that composes Jupyter Book (an online document), Jupyter Notebook (a computational narrative) and reveal.js slides from a single markdown source file.
no code implementations • 9 Mar 2021 • Yu Chen, Song Liu, Tom Diethe, Peter Flach
To the best of our knowledge, there is no existing method that can evaluate generative models in continual learning without storing samples from the original distribution.
no code implementations • 13 Oct 2020 • Taku Yamagata, Aisling O'Kane, Amid Ayobi, Dmitri Katz, Katarzyna Stawarz, Paul Marshall, Peter Flach, Raúl Santos-Rodríguez
In this paper we investigate the use of model-based reinforcement learning to assist people with Type 1 Diabetes with insulin dose decisions.
no code implementations • 28 Sep 2020 • Yu Chen, Tom Diethe, Peter Flach
The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting.
1 code implementation • 16 Aug 2020 • Kacper Sokol, Peter Flach
Interpretable representations are the backbone of many explainers that target black-box predictive systems based on artificial intelligence and machine learning algorithms.
1 code implementation • 19 Jun 2020 • Yu Chen, Tom Diethe, Peter Flach
The use of episodic memory in continual learning has demonstrated effectiveness for alleviating catastrophic forgetting.
1 code implementation • 4 May 2020 • Kacper Sokol, Peter Flach
Explainable machine learning provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class.
no code implementations • 27 Jan 2020 • Kacper Sokol, Peter Flach
We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning.
no code implementations • 11 Dec 2019 • Kacper Sokol, Peter Flach
When used as a Work Sheet, our taxonomy can guide the development of new explainability approaches by aiding in their critical evaluation along the five proposed dimensions.
1 code implementation • NeurIPS 2019 • Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, Peter Flach
Class probabilities predicted by most multiclass classifiers are uncalibrated, often tending towards over-confidence.
1 code implementation • 29 Oct 2019 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted).
3 code implementations • 28 Oct 2019 • Meelis Kull, Miquel Perello-Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, Peter Flach
Class probabilities predicted by most multiclass classifiers are uncalibrated, often tending towards over-confidence.
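To make the over-confidence point concrete, here is a sketch of temperature scaling, the standard baseline that this line of work goes beyond (the function and example values are my own illustration, not the paper's Dirichlet calibration method). Dividing logits by a temperature T > 1 softens an over-confident probability vector without changing the predicted class:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature parameter; T > 1 flattens the
    distribution, T < 1 sharpens it, T = 1 leaves it unchanged."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

raw = softmax([4.0, 1.0, 0.0])                      # top probability ~0.94
cooled = softmax([4.0, 1.0, 0.0], temperature=2.0)  # softened to ~0.74
```

The argmax is preserved, so accuracy is unaffected; only the confidence of the probability estimates changes.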
no code implementations • 25 Sep 2019 • Yu Chen, Song Liu, Tom Diethe, Peter Flach
We propose a new method, Continual Density Ratio Estimation (CDRE), which can estimate density ratios between a target distribution of real samples and the distribution of samples generated by a model, while the model changes over time and data from the target distribution are no longer available after a certain time point.
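The basic identity behind classifier-based density-ratio estimation (a standard trick, shown here as background; CDRE itself extends this to the continual setting): if a probabilistic classifier is trained to distinguish equally sized samples from the target p and the model q, its output c(x) = P(target | x) yields the ratio directly.

```python
def density_ratio(prob_target):
    """Density-ratio trick: with balanced samples from target p and
    model q, a classifier c(x) = P(target | x) gives
    p(x)/q(x) = c(x) / (1 - c(x))."""
    return prob_target / (1.0 - prob_target)

density_ratio(0.5)   # 1.0: the classifier cannot tell the two apart
density_ratio(0.75)  # 3.0: x is three times as likely under the target
```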
1 code implementation • 20 Sep 2019 • Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, Peter Flach
First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports).
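A deliberately simplified, one-step stand-in for the idea (my own sketch; the paper's method works with feasible paths through the data rather than a single nearest neighbour): restricting counterfactual suggestions to instances actually observed with the desired outcome keeps them representative of the data distribution.

```python
def data_supported_counterfactual(x, dataset, labels, target_class, dist):
    """Return the closest *observed* instance with the desired outcome,
    so the suggested change stays on the data distribution."""
    candidates = [p for p, y in zip(dataset, labels) if y == target_class]
    return min(candidates, key=lambda p: dist(x, p))

# Toy 1-D example: rejected applicant at 0.2; accepted cases were
# observed at 0.6 and 0.9.
cf = data_supported_counterfactual(
    0.2, [0.1, 0.6, 0.9], ["reject", "accept", "accept"],
    "accept", lambda a, b: abs(a - b))
# cf == 0.6: the nearest actually observed accepted case
```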
3 code implementations • 11 Sep 2019 • Kacper Sokol, Raul Santos-Rodriguez, Peter Flach
Today, artificial intelligence systems driven by machine learning algorithms are often in a position to make important, and sometimes legally binding, decisions about our everyday lives.
1 code implementation • 7 Aug 2019 • Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perello-Nieto, Emma Tonkin, Peter Flach
This paper describes HyperStream, a large-scale, flexible and robust software package, written in the Python language, for processing streaming data with workflow creation capabilities.
no code implementations • 15 May 2019 • Hao Song, Tom Diethe, Meelis Kull, Peter Flach
We are concerned with obtaining well-calibrated output distributions from regression models.
1 code implementation • 10 Mar 2019 • Yu Chen, Telmo Silva Filho, Ricardo B. C. Prudêncio, Tom Diethe, Peter Flach
Item Response Theory (IRT) aims to assess latent abilities of respondents based on the correctness of their answers in aptitude test items with different difficulty levels.
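The simplest instance of an IRT model, shown for orientation (the one-parameter Rasch model; the paper may use a richer parameterisation): the probability of a correct response is a logistic function of the gap between respondent ability and item difficulty.

```python
import math

def rasch_prob(ability, difficulty):
    """1-parameter (Rasch) IRT model: probability of answering an
    item correctly as a logistic function of ability - difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

rasch_prob(0.0, 0.0)  # 0.5: ability matches difficulty, even odds
rasch_prob(2.0, 0.0)  # higher ability on the same item -> higher probability
```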
no code implementations • 20 Jun 2018 • Hao Song, Meelis Kull, Peter Flach
The task of calibration is to retrospectively adjust the outputs from a machine learning model to provide better probability estimates on the target variable.
no code implementations • 4 Feb 2017 • Tom Diethe, Niall Twomey, Meelis Kull, Peter Flach, Ian Craddock
There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home.
1 code implementation • NeurIPS 2015 • Peter Flach, Meelis Kull
Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier's performance.
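The abstract's observation can be read off the definitions directly: true negatives appear in neither formula, so adding arbitrarily many of them leaves both metrics unchanged. A minimal illustration (my own, not from the paper):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall are computed from TP, FP and FN only --
    true negatives never enter either formula."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

precision_recall(8, 2, 4)  # (0.8, ~0.667), regardless of how many TNs exist
```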