Search Results for author: Richard Dazeley

Found 22 papers, 3 papers with code

Masked Autoencoders in 3D Point Cloud Representation Learning

1 code implementation · 4 Jul 2022 · Jincen Jiang, Xuequan Lu, Lizhi Zhao, Richard Dazeley, Meili Wang

We first split the input point cloud into patches and mask a portion of them, then use our Patch Embedding Module to extract the features of unmasked patches.

Point Cloud Completion · Point cloud reconstruction · +2
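The patch-splitting and masking step described above can be sketched as follows. This is a minimal illustration only: the actual method groups points with sampling and nearest-neighbour search before embedding, whereas here the cloud is simply chunked into equal slices; all function and parameter names are assumptions, not from the paper's code.

```python
import numpy as np

def split_and_mask(points, num_patches=8, mask_ratio=0.5, seed=0):
    """Split a point cloud into patches and mask a portion of them.

    `points` is an (N, 3) array. Returns the visible (unmasked) patches,
    which would be fed to a patch-embedding module, plus the indices of
    the masked patches. Chunking stands in for real patch grouping.
    """
    rng = np.random.default_rng(seed)
    patches = np.array_split(points, num_patches)           # crude patching
    num_masked = int(round(mask_ratio * num_patches))
    masked_idx = set(rng.choice(num_patches, num_masked, replace=False).tolist())
    visible = [p for i, p in enumerate(patches) if i not in masked_idx]
    return visible, sorted(masked_idx)

points = np.random.rand(1024, 3)                            # toy point cloud
visible, masked = split_and_mask(points, num_patches=8, mask_ratio=0.5)
```

With a 0.5 mask ratio over 8 patches, 4 patches remain visible for feature extraction.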

A Survey of Multi-Objective Sequential Decision-Making

no code implementations · 4 Feb 2014 · Diederik Marijn Roijers, Peter Vamplew, Shimon Whiteson, Richard Dazeley

Using this taxonomy, we survey the literature on multi-objective methods for planning and learning.

Decision Making

A Demonstration of Issues with Value-Based Multiobjective Reinforcement Learning Under Stochastic State Transitions

no code implementations · 14 Apr 2020 · Peter Vamplew, Cameron Foale, Richard Dazeley

We report a previously unidentified issue with model-free, value-based approaches to multiobjective reinforcement learning in the context of environments with stochastic state transitions.

reinforcement-learning · Reinforcement Learning (RL)

Discrete-to-Deep Supervised Policy Learning

1 code implementation · 5 May 2020 · Budi Kurniawan, Peter Vamplew, Michael Papasimeon, Richard Dazeley, Cameron Foale

From each discrete state, it then selects an input value and the action with the highest numerical preference as an input/target pair.

Reinforcement Learning (RL)
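The pairing step described above can be sketched as follows: for each discrete state, take one representative continuous input and the action with the highest numerical preference as the supervised target. This is a hedged sketch, assuming Q-values serve as the preferences; the function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def build_supervised_pairs(q_table, state_samples):
    """Convert a tabular policy into (input, target) pairs for a network.

    `q_table` maps discrete state index -> action-preference vector;
    `state_samples` maps the same index -> a representative continuous
    observation seen in that state. The target is the argmax action.
    """
    inputs, targets = [], []
    for s, preferences in q_table.items():
        inputs.append(state_samples[s])
        targets.append(int(np.argmax(preferences)))
    return np.array(inputs), np.array(targets)

# Toy example: two discrete states, two actions.
q_table = {0: [0.1, 0.9], 1: [0.8, 0.2]}
state_samples = {0: [0.05, 0.10], 1: [0.90, 0.70]}
X, y = build_supervised_pairs(q_table, state_samples)   # y -> [1, 0]
```

The resulting (X, y) pairs can then train a neural network policy with a standard supervised loss.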

Explainable robotic systems: Understanding goal-driven actions in a reinforcement learning scenario

no code implementations · 24 Jun 2020 · Francisco Cruz, Richard Dazeley, Peter Vamplew, Ithan Moreira

As a way to explain the goal-driven robot's actions, we use the probability of success computed by three different proposed approaches: memory-based, learning-based, and introspection-based.

Action Understanding · Decision Making · +2
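The memory-based variant of the probability-of-success estimate named above can be sketched by counting how often taking an action in a state eventually led to the goal. This is a minimal illustration under that assumption; the class and method names are hypothetical, and the paper's learning-based and introspection-based approaches are not shown.

```python
from collections import defaultdict

class MemoryBasedSuccess:
    """Estimate P(success | state, action) from recorded episode outcomes."""

    def __init__(self):
        self.successes = defaultdict(int)   # (state, action) -> success count
        self.visits = defaultdict(int)      # (state, action) -> visit count

    def record(self, state, action, succeeded):
        """Log one outcome: did the episode reach the goal after this pair?"""
        self.visits[(state, action)] += 1
        if succeeded:
            self.successes[(state, action)] += 1

    def probability(self, state, action):
        """Empirical success rate; 0.0 for unseen state-action pairs."""
        n = self.visits[(state, action)]
        return self.successes[(state, action)] / n if n else 0.0

estimator = MemoryBasedSuccess()
estimator.record("near_goal", "forward", succeeded=True)
estimator.record("near_goal", "forward", succeeded=True)
estimator.record("near_goal", "forward", succeeded=False)
p = estimator.probability("near_goal", "forward")   # 2 of 3 episodes succeeded
```

The resulting probability can then be phrased as an explanation such as "moving forward here succeeds about 67% of the time".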

A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review

no code implementations · 3 Jul 2020 · Adam Bignold, Francisco Cruz, Matthew E. Taylor, Tim Brys, Richard Dazeley, Peter Vamplew, Cameron Foale

In this work, while reviewing externally-influenced methods, we propose a conceptual framework and taxonomy for assisted reinforcement learning, aimed at fostering collaboration by classifying and comparing various methods that use external information in the learning process.

Decision Making · reinforcement-learning · +2

Deep Reinforcement Learning with Interactive Feedback in a Human-Robot Environment

no code implementations · 7 Jul 2020 · Ithan Moreira, Javier Rivas, Francisco Cruz, Richard Dazeley, Angel Ayala, Bruno Fernandes

We compare three different learning methods using a simulated robotic arm for the task of organizing different objects; the proposed methods are (i) deep reinforcement learning (DeepRL); (ii) interactive deep reinforcement learning using a previously trained artificial agent as an advisor (agent-IDeepRL); and (iii) interactive deep reinforcement learning using a human advisor (human-IDeepRL).

reinforcement-learning · Reinforcement Learning (RL)

Persistent Rule-based Interactive Reinforcement Learning

no code implementations · 4 Feb 2021 · Adam Bignold, Francisco Cruz, Richard Dazeley, Peter Vamplew, Cameron Foale

Interactive reinforcement learning speeds up the learning process in autonomous agents by including a human trainer who provides extra information to the agent in real time.

reinforcement-learning · Reinforcement Learning (RL)

Levels of explainable artificial intelligence for human-aligned conversational explanations

no code implementations · 7 Jul 2021 · Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, Francisco Cruz

Over the last few years, research into eXplainable Artificial Intelligence (XAI) and the closely aligned field of Interpretable Machine Learning (IML) has grown rapidly.

Decision Making · Explainable artificial intelligence · +2

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey

no code implementations · 20 Aug 2021 · Richard Dazeley, Peter Vamplew, Francisco Cruz

EXplainable RL (XRL) is a relatively recent field of research that aims to develop techniques for extracting concepts from the agent's perception of the environment; its intrinsic/extrinsic motivations and beliefs; and its Q-values, goals, and objectives.

Decision Making · Explainable artificial intelligence · +3

Explainable Deep Reinforcement Learning Using Introspection in a Non-episodic Task

no code implementations · 18 Aug 2021 · Angel Ayala, Francisco Cruz, Bruno Fernandes, Richard Dazeley

Explainable reinforcement learning allows artificial agents to explain their behavior in a human-like manner, aimed at non-expert end-users.

Decision Making · reinforcement-learning · +1

Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

no code implementations · 7 Jul 2022 · Francisco Cruz, Charlotte Young, Richard Dazeley, Peter Vamplew

In this work, we make use of human-like explanations built from the probability that an autonomous robot will succeed in completing its goal after performing an action.

counterfactual · Decision Making · +3

Broad-persistent Advice for Interactive Reinforcement Learning Scenarios

no code implementations · 11 Oct 2022 · Francisco Cruz, Adam Bignold, Hung Son Nguyen, Richard Dazeley, Peter Vamplew

The use of interactive advice in reinforcement learning scenarios allows for speeding up the learning process for autonomous agents.

reinforcement-learning · Reinforcement Learning (RL)

Weighted Point Cloud Normal Estimation

no code implementations · 6 May 2023 · Weijia Wang, Xuequan Lu, Di Shao, Xiao Liu, Richard Dazeley, Antonio Robles-Kelly, Wei Pan

Existing normal estimation methods for point clouds are often less robust to severe noise and complex geometric structures.

Contrastive Learning · regression

An Empirical Investigation of Value-Based Multi-objective Reinforcement Learning for Stochastic Environments

no code implementations · 6 Jan 2024 · Kewen Ding, Peter Vamplew, Cameron Foale, Richard Dazeley

One common approach to solve multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function.

Multi-Objective Reinforcement Learning · Q-Learning
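The extension described above, vector Q-values combined with a utility function, can be sketched as follows. This is a minimal illustration, assuming a linear utility over two objectives purely as an example; the function names and weights are not from the paper.

```python
import numpy as np

def greedy_action(vector_q, utility):
    """Pick the greedy action under a utility function over vector Q-values.

    `vector_q` has shape (num_actions, num_objectives): one value per
    objective for each action. `utility` maps a value vector to a scalar,
    and the action maximising that scalar is selected.
    """
    return int(np.argmax([utility(q) for q in vector_q]))

weights = np.array([0.7, 0.3])                  # example objective weights
linear_utility = lambda v: float(weights @ v)   # linear scalarisation

vector_q = np.array([[1.0, 0.0],                # strong on objective 1 only
                     [0.0, 1.0],                # strong on objective 2 only
                     [0.6, 0.6]])               # balanced
best = greedy_action(vector_q, linear_utility)  # -> 0 (utility 0.7 beats 0.3, 0.6)
```

Note that with a nonlinear utility, this greedy selection over expected value vectors can behave pathologically in stochastic environments, which is the issue the paper investigates.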

Utility-Based Reinforcement Learning: Unifying Single-objective and Multi-objective Reinforcement Learning

no code implementations · 5 Feb 2024 · Peter Vamplew, Cameron Foale, Conor F. Hayes, Patrick Mannion, Enda Howley, Richard Dazeley, Scott Johnson, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Willem Röpke, Diederik M. Roijers

Research in multi-objective reinforcement learning (MORL) has introduced the utility-based paradigm, which makes use of both environmental rewards and a function that defines the utility derived by the user from those rewards.

Multi-Objective Reinforcement Learning · reinforcement-learning

Value function interference and greedy action selection in value-based multi-objective reinforcement learning

no code implementations · 9 Feb 2024 · Peter Vamplew, Cameron Foale, Richard Dazeley

Multi-objective reinforcement learning (MORL) algorithms extend conventional reinforcement learning (RL) to the more general case of problems with multiple, conflicting objectives, represented by vector-valued rewards.

Multi-Objective Reinforcement Learning · Q-Learning · +1
