Search Results for author: Tim Miller

Found 32 papers, 4 papers with code

A Closer Look at Generalisation in RAVEN

1 code implementation ECCV 2020 Steven Spratley, Krista Ehinger, Tim Miller

Humans have a remarkable capacity to draw parallels between concepts, generalising their experience to new domains.

Visual Reasoning

Towards the new XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence

no code implementations 2 Feb 2024 Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.

Decision Making

Diverse, Top-k, and Top-Quality Planning Over Simulators

no code implementations 25 Aug 2023 Lyndon Benke, Tim Miller, Michael Papasimeon, Nir Lipovetzky

Diverse, top-k, and top-quality planning are concerned with the generation of sets of solutions to sequential decision problems.
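As the abstract notes, top-k planning asks for a set of the k best solutions rather than a single plan. A toy illustration on an invented graph, using exhaustive best-first path enumeration (this is not the authors' simulator-based algorithm, just a minimal sketch of what "the k cheapest plans" means):

```python
import heapq

# Invented deterministic planning problem as a weighted graph.
graph = {
    "s": [("a", 1), ("b", 2)],
    "a": [("g", 4), ("b", 1)],
    "b": [("g", 2)],
    "g": [],
}

def top_k_plans(start, goal, k):
    """Return the k cheapest paths from start to goal, best first.
    Enumerates paths with a priority queue; fine for toy problems only."""
    heap = [(0, [start])]
    plans = []
    while heap and len(plans) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == goal:
            plans.append((cost, path))
            continue
        for nxt, w in graph[node]:
            if nxt not in path:  # keep paths simple (no revisits)
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return plans

plans = top_k_plans("s", "g", k=3)
# Three cheapest plans: two of cost 4, then one of cost 5.
```

Diverse and top-quality variants then impose extra constraints on this set (e.g. pairwise dissimilarity, or all plans within a cost bound), which the enumeration above does not attempt.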

Deceptive Reinforcement Learning in Model-Free Domains

no code implementations 20 Mar 2023 Alan Lewis, Tim Miller

We propose the deceptive exploration ambiguity model (DEAM), which learns using the deceptive policy during training, leading to targeted exploration of the state space.

reinforcement-learning Reinforcement Learning (RL)

Explaining Model Confidence Using Counterfactuals

no code implementations 10 Mar 2023 Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction.

counterfactual Counterfactual Explanation
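As a rough illustration of the idea (not the authors' algorithm, and with an invented model throughout): given a fixed model's confidence score, a counterfactual explanation names a small input change that would move the confidence past a threshold.

```python
import math

# Invented two-feature logistic model standing in for a trained classifier.
w = [0.8, -1.2]
b = 0.1

def confidence(x):
    """Model confidence: predicted probability of the positive class."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def confidence_counterfactual(x, target=0.9, step=0.05, max_iter=1000):
    """Greedily nudge the most influential feature until the model's
    confidence reaches `target`; the returned point is the counterfactual."""
    x = list(x)
    i = max(range(len(w)), key=lambda j: abs(w[j]))  # most influential feature
    for _ in range(max_iter):
        if confidence(x) >= target:
            break
        x[i] += step if w[i] > 0 else -step
    return x

original = [1.0, 1.5]
cf = confidence_counterfactual(original)
# confidence(original) is low; confidence(cf) has reached the 0.9 target.
```

A user-facing explanation would then be phrased as "if this feature had been `cf[i]` instead of `original[i]`, the model would have been at least 90% confident".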

Explainable Goal Recognition: A Framework Based on Weight of Evidence

no code implementations 9 Mar 2023 Abeer Alshehri, Tim Miller, Mor Vered

We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems.
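The Weight of Evidence the abstract refers to is a standard Bayesian quantity, WoE(g : e) = log(P(e | g) / P(e | not-g)). A minimal, illustrative sketch (the goal names and probabilities below are invented, not taken from the paper; a real goal recogniser would estimate the likelihoods from a planner or learned model):

```python
import math

# Invented observation likelihoods P(e | g) for two candidate goals,
# with uniform priors over the goals.
p_obs_given_goal = {"goal_A": 0.60, "goal_B": 0.15}
priors = {"goal_A": 0.5, "goal_B": 0.5}

def weight_of_evidence(goal, likelihoods, priors):
    """WoE(g : e) = log(P(e | g) / P(e | not-g)), in nats.
    Positive WoE means observation e is evidence FOR the goal."""
    others = [g for g in priors if g != goal]
    norm = sum(priors[g] for g in others)
    p_e_not_g = sum(likelihoods[g] * priors[g] for g in others) / norm
    return math.log(likelihoods[goal] / p_e_not_g)

woe_a = weight_of_evidence("goal_A", p_obs_given_goal, priors)  # log(0.60/0.15)
```

An explanation built on this would report, per observation, how much evidence it contributed for the recognised goal over the alternatives.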

Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support

no code implementations 24 Feb 2023 Tim Miller

In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making.

Decision Making Explainable artificial intelligence +1

Unicode Analogies: An Anti-Objectivist Visual Reasoning Challenge

1 code implementation CVPR 2023 Steven Spratley, Krista A. Ehinger, Tim Miller

While progressive-matrix problems (PMPs) are becoming popular for the development and evaluation of analogical reasoning in computer vision, we argue that the dominant methodology in this area struggles to expose the lack of meaningful generalisation in solvers, and reinforces an objectivist stance on perception -- that objects can only be seen one way -- which we believe to be counter-productive.

Navigate Visual Reasoning

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models

no code implementations 19 Nov 2022 Gayda Mutahar, Tim Miller

This work highlights how important it is to have more understandable explanations when interpretability is crucial.

Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence

no code implementations 6 Jun 2022 Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

In this paper, we show that counterfactual explanations of confidence scores help users better understand and better trust an AI model's prediction in human-subject studies.

counterfactual Counterfactual Explanation

Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

no code implementations 6 Oct 2021 Christian Muise, Vaishak Belle, Paolo Felli, Sheila Mcilraith, Tim Miller, Adrian R. Pearce, Liz Sonenberg

Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents.

Collaborative Human-Agent Planning for Resilience

no code implementations 29 Apr 2021 Ronal Singh, Tim Miller, Darryn Reid

Results show that participants' constraints improved the expected return of the plans by 10% ($p < 0.05$) relative to baseline plans, demonstrating that human insight can be used in collaborative planning for resilience.

Autonomous Vehicles

LEx: A Framework for Operationalising Layers of Machine Learning Explanations

no code implementations 15 Apr 2021 Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, Tim Miller

Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally.

BIG-bench Machine Learning Position

Deceptive Reinforcement Learning for Privacy-Preserving Planning

no code implementations 5 Feb 2021 Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters

However, in some situations, we may want to keep a reward function private; that is, to make it difficult for an observer to determine the reward function used.

Privacy Preserving reinforcement-learning +1

Directive Explanations for Actionable Explainability in Machine Learning Applications

no code implementations 3 Feb 2021 Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso, Frank Vetere

This paper investigates the prospects of using directive explanations to assist people in achieving recourse against machine learning decisions.

BIG-bench Machine Learning counterfactual

Good proctor or "Big Brother"? AI Ethics and Online Exam Supervision Technologies

no code implementations 15 Nov 2020 Simon Coghlan, Tim Miller, Jeannie Paterson

This article philosophically analyzes online exam supervision technologies, which have been thrust into the public spotlight due to campus lockdowns during the COVID-19 pandemic and the growing demand for online courses.

Ethics Fairness

Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors

1 code implementation 27 Jun 2020 Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein

Based on the requirements of fidelity (approximate models to target models) and interpretability (being meaningful to people), we design measurements and evaluate a range of matrix factorization methods with our framework.

Clustering Dimensionality Reduction +1
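The non-negative concept activation vectors in this line of work come from factorising non-negative (post-ReLU) CNN activations, for which non-negative matrix factorisation is the natural tool. A self-contained sketch using the classic Lee-Seung multiplicative updates on random stand-in data (the data, rank, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Stand-in for CNN feature-map activations: n spatial positions x c channels,
# all non-negative, as post-ReLU activations would be. Real NCAVs would come
# from a trained network; here we just factorise random non-negative data.
rng = np.random.default_rng(0)
A = rng.random((200, 32))          # activations: 200 positions, 32 channels

def nmf(A, k, n_iter=200, eps=1e-9):
    """Factorise A ~ W @ H with W, H >= 0 via multiplicative updates.
    Rows of H act as (non-negative) concept vectors in channel space;
    columns of W give per-position concept scores."""
    n, c = A.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, c)) + eps
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(A, k=5)
reconstruction_error = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

Because both factors stay non-negative, each concept only ever adds activation, which is what makes the resulting parts-based decomposition easier to interpret than, say, PCA components.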

Federated pretraining and fine tuning of BERT using clinical notes from multiple silos

no code implementations 20 Feb 2020 Dianbo Liu, Tim Miller

Large scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years.

Distal Explanations for Model-free Explainable Reinforcement Learning

no code implementations 28 Jan 2020 Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for `why' and `why not' questions.

reinforcement-learning Reinforcement Learning (RL)

Confederated Machine Learning on Horizontally and Vertically Separated Medical Data for Large-Scale Health System Intelligence

no code implementations ICLR 2020 Dianbo Liu, Kathe Fox, Griffin Weber, Tim Miller

We proposed and evaluated a confederated learning approach to train machine learning models that stratify the risk of several diseases when data are horizontally separated by individual, vertically separated by data type, and separated by identity without patient ID matching.

BIG-bench Machine Learning Federated Learning

Let's Make It Personal, A Challenge in Personalizing Medical Inter-Human Communication

no code implementations 29 Jul 2019 Mor Vered, Frank Dignum, Tim Miller

Current AI approaches have frequently been used to help personalize many aspects of medical experiences and tailor them to a specific individual's needs.

Explainable Reinforcement Learning Through a Causal Lens

2 code implementations 27 May 2019 Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

In this paper, we use causal models to derive causal explanations of behaviour of reinforcement learning agents.

counterfactual reinforcement-learning +3

What you get is what you see: Decomposing Epistemic Planning using Functional STRIPS

no code implementations 28 Mar 2019 Guang Hu, Tim Miller, Nir Lipovetzky

Epistemic planning -- planning with knowledge and belief -- is essential in many multi-agent and human-agent interaction domains.

A Grounded Interaction Protocol for Explainable Artificial Intelligence

no code implementations 5 Mar 2019 Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans.

Explainable artificial intelligence Explainable Artificial Intelligence (XAI)

Contrastive Explanation: A Structural-Model Approach

no code implementations 7 Nov 2018 Tim Miller

In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning.

Decision Making Philosophy

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

no code implementations 2 Dec 2017 Tim Miller, Piers Howe, Liz Sonenberg

As a result, programmers design software for themselves, rather than for their target audience, a phenomenon Alan Cooper refers to as the `inmates running the asylum'.

Philosophy

Explanation in Artificial Intelligence: Insights from the Social Sciences

no code implementations 22 Jun 2017 Tim Miller

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable.

Explainable artificial intelligence Philosophy

Social planning for social HRI

no code implementations 21 Feb 2016 Liz Sonenberg, Tim Miller, Adrian Pearce, Paolo Felli, Christian Muise, Frank Dignum

Making a computational agent 'social' has implications for how it perceives itself and the environment in which it is situated, including the ability to recognise the behaviours of others.
