no code implementations • 29 Nov 2019 • Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece
Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e., their "fidelity").
no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.
no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom it is interpretable.