no code implementations • 3 Oct 2024 • Yueqing Xuan, Kacper Sokol, Mark Sanderson, Jeffrey Chan
Algorithmic recourse provides actions to individuals who have been adversely affected by automated decision-making and helps them achieve a desired outcome.
no code implementations • 19 Sep 2024 • Aurora Spagnol, Kacper Sokol, Pietro Barbiero, Marc Langheinrich, Martin Gjoreski
While many explainable artificial intelligence techniques exist for supervised machine learning, unsupervised learning -- and clustering in particular -- has been largely neglected.
no code implementations • 19 Mar 2024 • Kacper Sokol, Julia E. Vogt
Despite significant progress, evaluation of explainable artificial intelligence remains elusive and challenging.
no code implementations • 8 Sep 2023 • Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raul Santos-Rodriguez
Counterfactuals operationalised through algorithmic recourse have become a powerful tool to make artificial intelligence systems explainable.
1 code implementation • 5 Jun 2023 • Kacper Sokol, Edward Small, Yueqing Xuan
Counterfactual explanations are the de facto standard when tasked with interpreting decisions of (opaque) predictive models.
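To illustrate the general idea of a counterfactual explanation (this is a generic sketch, not the method from this paper), one can greedily perturb an input until a black-box model flips its prediction; the `predict` function below is a hypothetical stand-in for any opaque classifier:

```python
import numpy as np

def predict(x):
    # Hypothetical opaque model: approves (1) when income + 0.5 * savings
    # exceeds a threshold; stands in for any black-box classifier.
    return int(x[0] + 0.5 * x[1] > 10)

def counterfactual(x, target, step=0.1, max_iter=1000):
    """Naive greedy search: nudge a feature until the prediction
    flips to the target class (a toy sketch of the concept)."""
    cf = np.array(x, dtype=float)
    for _ in range(max_iter):
        if predict(cf) == target:
            return cf
        # Increase the first feature by a small step each iteration
        # (a deliberately simplistic search heuristic).
        cf[0] += step
    return None

x = [6.0, 4.0]                      # rejected applicant
cf = counterfactual(x, target=1)    # minimal change that flips the outcome
```

The returned `cf` reads as advice of the form "had your income been about 8.1 instead of 6.0, the decision would have been positive" — the essence of a counterfactual explanation.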
no code implementations • 4 Jun 2023 • Kacper Sokol, Julia E. Vogt
Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely accepted definition and depends on the operational context.
1 code implementation • 19 Apr 2023 • Edward A. Small, Kacper Sokol, Daniel Manning, Flora D. Salim, Jeffrey Chan
Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike.
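The two notions contrasted above can be sketched in a few lines; the data and the Lipschitz-style similarity condition below are illustrative assumptions, not the paper's formalisation:

```python
import numpy as np

# Toy predictions for two protected sub-populations (hypothetical data).
preds_a = np.array([1, 1, 0, 1, 0, 1])   # group A: 4/6 positive
preds_b = np.array([1, 0, 0, 1, 0, 0])   # group B: 2/6 positive

# Group fairness (demographic parity): equalise positive-prediction rates,
# so the gap between the groups should be close to zero.
parity_gap = abs(preds_a.mean() - preds_b.mean())

def individually_fair(x1, x2, y1, y2, dist, L=1.0):
    """Individual fairness as a Lipschitz-style condition: the difference
    in predictions is bounded by L times the distance between individuals."""
    return abs(y1 - y2) <= L * dist(x1, x2)
```

Here `parity_gap` quantifies a group-fairness violation (about 0.33 for the toy data), while `individually_fair` checks whether two similar individuals received similar predictions.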
no code implementations • 15 Apr 2023 • Yueqing Xuan, Kacper Sokol, Mark Sanderson, Jeffrey Chan
Since positive data is disproportionately contributed by a minority of active users, negative samplers may be affected by this data imbalance and thus choose more informative negative items for active users.

no code implementations • 2 Mar 2023 • Edward Small, Yueqing Xuan, Danula Hettiachchi, Kacper Sokol
Explainable artificial intelligence techniques are developed at breakneck speed, but suitable evaluation approaches lag behind.
no code implementations • 7 Feb 2023 • Bernard Keenan, Kacper Sokol
Over the past decade explainable artificial intelligence has evolved from a predominantly technical discipline into a field that is deeply intertwined with social sciences.
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, Peter Flach
Predictive systems, in particular machine learning algorithms, can take important, and sometimes legally binding, decisions about our everyday life.
no code implementations • 8 Sep 2022 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.
1 code implementation • 14 Aug 2022 • Peter Flach, Kacper Sokol
"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first published by John Wiley in 1994.
1 code implementation • 11 Jul 2022 • Edward Small, Wei Shao, Zeliang Zhang, Peihan Liu, Jeffrey Chan, Kacper Sokol, Flora Salim
Recent studies have shown that robustness (the ability of a model to perform well on unseen data) plays a significant role in the choice of strategy for approaching a new problem; measuring the robustness of these strategies has therefore become a fundamental problem.
2 code implementations • 14 Mar 2022 • Kacper Sokol, Meelis Kull, Jeffrey Chan, Flora Salim
While data-driven predictive models are a strictly technological construct, they may operate within a social context in which benign engineering choices entail implicit, indirect and unexpected real-life consequences.
no code implementations • 29 Dec 2021 • Kacper Sokol, Peter Flach
This approach allows us to define explainability as (logical) reasoning applied to transparent insights (into, possibly black-box, predictive systems) interpreted under background knowledge and placed within a specific context -- a process that engenders understanding in a selected group of explainees.
1 code implementation • 2 Jul 2021 • Kacper Sokol, Peter Flach
We offer a proof-of-concept workflow that composes Jupyter Book (an online document), Jupyter Notebook (a computational narrative) and reveal.js slides from a single markdown source file.
1 code implementation • 16 Aug 2020 • Kacper Sokol, Peter Flach
Interpretable representations are the backbone of many explainers that target black-box predictive systems based on artificial intelligence and machine learning algorithms.
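A common interpretable representation for tabular data is a binary encoding built from quantile bins, with each bit answering a human-readable question such as "does the value fall in the top quartile?"; the sketch below illustrates this generic construction (it is not this paper's specific formulation):

```python
import numpy as np

def interpretable_representation(values, n_bins=4):
    """Map a numeric feature onto a binary interpretable representation:
    one indicator column per quantile bin (a common choice for tabular
    data in LIME-style explainers)."""
    # Bin edges at the inner quantiles of the observed values.
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    bin_ids = np.digitize(values, edges)
    # One-hot encode: column k answers "does the value fall in bin k?"
    return np.eye(n_bins, dtype=int)[bin_ids]

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
z = interpretable_representation(x)   # shape (5, 4), one hot bit per row
```

An explainer can then attribute importance to these binary components, which are far easier to communicate than raw feature values.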
1 code implementation • 4 May 2020 • Kacper Sokol, Peter Flach
Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class.
no code implementations • 27 Jan 2020 • Kacper Sokol, Peter Flach
We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning.
no code implementations • 11 Dec 2019 • Kacper Sokol, Peter Flach
When used as a Work Sheet, our taxonomy can guide the development of new explainability approaches by aiding in their critical evaluation along the five proposed dimensions.
1 code implementation • 29 Oct 2019 • Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach
Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted).
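The core surrogate recipe — perturb an instance, query the black box, fit a weighted linear model whose coefficients act as feature attributions — can be sketched as follows; the `black_box` function and all parameter values are illustrative assumptions, not this paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in opaque model: probability driven by a nonlinear score.
    return 1 / (1 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))

def surrogate_explain(x, n_samples=500, width=0.5):
    """LIME-style local surrogate: sample perturbations around x, weight
    them by proximity and fit a linear model via weighted least squares;
    its coefficients serve as local feature attributions."""
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = black_box(X)
    # Proximity weights: closer perturbations matter more.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])      # add intercept column
    sqrt_w = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sqrt_w, y * sqrt_w.ravel(), rcond=None)
    return coef[:-1]                                 # per-feature attributions

attributions = surrogate_explain(np.array([2.0, 1.0]))
```

Around the point `[2.0, 1.0]` the toy model's score grows with the first feature and shrinks with the second, so the surrogate's attributions carry the corresponding signs — the kind of local insight such explainers provide.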
1 code implementation • 20 Sep 2019 • Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, Peter Flach
First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports).
3 code implementations • 11 Sep 2019 • Kacper Sokol, Raul Santos-Rodriguez, Peter Flach
Today, artificial intelligence systems driven by machine learning algorithms can be in a position to take important, and sometimes legally binding, decisions about our everyday lives.
1 code implementation • 7 Aug 2019 • Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perello-Nieto, Emma Tonkin, Peter Flach
This paper describes HyperStream, a large-scale, flexible and robust software package, written in the Python language, for processing streaming data with workflow creation capabilities.