Search Results for author: Dan Harborne

Found 3 papers, 0 papers with code

Sanity Checks for Saliency Metrics

no code implementations · 29 Nov 2019 · Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece

Despite a proliferation of such methods, little effort has been made to quantify how well these saliency maps capture the true relevance of the pixels to the classifier output (i.e., their "fidelity").
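The "fidelity" notion above can be made concrete with a deletion-style check: progressively remove the pixels a saliency map ranks as most relevant and watch how quickly the classifier's confidence falls. The sketch below is only an illustration of that general idea, not the metric studied in the paper; `model_fn` (a hypothetical callable returning class probabilities) and the zero-out deletion scheme are assumptions for the example.

```python
import numpy as np

def deletion_fidelity(model_fn, image, saliency, target_class, steps=10):
    """Illustrative deletion-style fidelity check (an assumed sketch,
    not the paper's exact metric).

    Zeroes out pixels in order of decreasing saliency and records the
    model's confidence in the target class after each deletion step.
    A faithful saliency map should cause a steep confidence drop.
    """
    h, w = saliency.shape
    # Rank pixel indices from most to least salient.
    order = np.argsort(saliency.ravel())[::-1]
    scores = [float(model_fn(image)[target_class])]
    masked = image.copy()
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        ys, xs = np.unravel_index(order[i:i + chunk], (h, w))
        masked[ys, xs] = 0.0  # delete the next-most-salient pixels
        scores.append(float(model_fn(masked)[target_class]))
    # Mean confidence over the deletion curve: lower means confidence
    # fell faster, i.e. the map ranked truly relevant pixels first.
    return float(np.mean(scores))
```

As a usage sketch, `model_fn` could wrap any image classifier's forward pass; comparing this score across saliency methods on the same image is one rough way to compare their fidelity.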

Stakeholders in Explainable AI

no code implementations · 29 Sep 2018 · Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

no code implementations · 20 Jun 2018 · Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.

Tasks: BIG-bench Machine Learning · Interpretable Machine Learning · +1
