Search Results for author: Richard Tomsett

Found 6 papers, 0 papers with code

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making

no code implementations · 15 Oct 2020 · Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett

We then conduct a second user experiment, which shows that our time allocation strategy with explanation can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.

Decision Making

Explaining Motion Relevance for Activity Recognition in Video Deep Learning Models

no code implementations · 31 Mar 2020 · Liam Hiley, Alun Preece, Yulia Hicks, Supriyo Chakraborty, Prudhvi Gurram, Richard Tomsett

Our results show that the selective relevance method not only provides insight into the role played by motion in the model's decision, in effect revealing and quantifying the model's spatial bias, but also simplifies the resulting explanations for human consumption.

Activity Recognition

Sanity Checks for Saliency Metrics

no code implementations · 29 Nov 2019 · Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece

Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e., their "fidelity").
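For intuition, a common way to probe fidelity is a deletion-style test: occlude the pixels the saliency map ranks as most relevant and watch how quickly the classifier's score for the target class drops. The sketch below is a generic illustration of that idea, not the specific metrics evaluated in the paper; the `model` callable and the stand-in example are assumptions for demonstration.

```python
import numpy as np

def deletion_fidelity(model, image, saliency, target_class, steps=20, baseline=0.0):
    """Occlude the most-relevant pixels first and record the target-class score.

    model: callable mapping an (H, W, C) image to a vector of class probabilities.
    saliency: (H, W) relevance map. Returns the score after each occlusion step.
    """
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]   # most relevant pixels first
    per_step = max(1, (h * w) // steps)
    occluded = image.copy()
    scores = [model(occluded)[target_class]]
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        rows, cols = np.unravel_index(idx, (h, w))
        occluded[rows, cols, :] = baseline       # remove the "important" evidence
        scores.append(model(occluded)[target_class])
    return np.array(scores)

# Stand-in example: a dummy "model" that scores images by their mean red channel.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))
    sal = img[..., 0]                            # pretend the red channel is the saliency map
    dummy_model = lambda x: np.array([x[..., 0].mean(), 1 - x[..., 0].mean()])
    print(deletion_fidelity(dummy_model, img, sal, target_class=0))
```

A faithful saliency map should make the score fall steeply, so the area under this deletion curve (lower is better) is often used as a single fidelity summary.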

Illuminated Decision Trees with Lucid

no code implementations · 3 Sep 2019 · David Mott, Richard Tomsett

The Lucid methods described by Olah et al. (2018) provide a way to inspect the inner workings of neural networks trained on image classification tasks using feature visualization.

Image Classification
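For reference, Lucid's feature-visualization workflow as shown in the library's own quick-start looks roughly like the sketch below; it assumes the TF1-era `lucid` package and its InceptionV1 model-zoo entry, and is not the decision-tree illumination method of the paper itself.

```python
from lucid.modelzoo.vision_models import InceptionV1
import lucid.optvis.render as render

# Load a pre-trained InceptionV1 graph from Lucid's model zoo.
model = InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally activate one channel of a chosen layer,
# visualizing what that unit has learned to respond to.
images = render.render_vis(model, "mixed4a_pre_relu:476")
```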

Stakeholders in Explainable AI

no code implementations · 29 Sep 2018 · Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

no code implementations · 20 Jun 2018 · Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask whether the system is interpretable, but to whom it is interpretable.

BIG-bench Machine Learning · Interpretable Machine Learning +1
