Search Results for author: Kai Arulkumaran

Found 21 papers, 12 papers with code

Preference-Learning Emitters for Mixed-Initiative Quality-Diversity Algorithms

1 code implementation · 25 Oct 2022 · Roberto Gallotta, Kai Arulkumaran, L. B. Soros

In mixed-initiative co-creation tasks, wherein a human and a machine jointly create items, it is important to provide multiple relevant suggestions to the designer.

Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation

1 code implementation · 12 May 2022 · Roberto Gallotta, Kai Arulkumaran, L. B. Soros

When generating content for video games using procedural content generation (PCG), the goal is to create functional assets of high quality.

On the link between conscious function and general intelligence in humans and machines

no code implementations · 24 Mar 2022 · Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, Ryota Kanai

In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence.

All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL

1 code implementation · 24 Feb 2022 · Kai Arulkumaran, Dylan R. Ashley, Jürgen Schmidhuber, Rupesh K. Srivastava

Upside down reinforcement learning (UDRL) flips the conventional use of the return in the objective function in RL upside down, by taking returns as input and predicting actions.

Tasks: Imitation Learning, Offline RL, +2
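
The core idea of UDRL, taking returns as input and predicting actions, can be framed as a purely supervised problem. A minimal tabular sketch of that idea (the toy data, function names, and counting-based "policy" here are invented for illustration, not taken from the paper's code):

```python
from collections import Counter, defaultdict

def train_udrl(episodes):
    """Fit a command-conditioned policy by counting: for each
    (state, desired_return) pair, remember which actions were taken."""
    table = defaultdict(Counter)
    for ep in episodes:
        # The return achieved from each step onwards becomes the "command".
        returns = 0.0
        commands = []
        for _, _, r in reversed(ep):
            returns += r
            commands.append(returns)
        commands.reverse()
        for (s, a, _), g in zip(ep, commands):
            table[(s, g)][a] += 1
    return table

def act(table, state, desired_return):
    """Predict the action most often associated with achieving
    the desired return from this state."""
    counts = table.get((state, desired_return))
    return counts.most_common(1)[0][0] if counts else None

# Toy episodes: lists of (state, action, reward).
episodes = [
    [("s0", "left", 0.0), ("s1", "left", 1.0)],
    [("s0", "right", 0.0), ("s1", "right", 2.0)],
]
policy = train_udrl(episodes)
print(act(policy, "s0", 2.0))  # asking for return 2.0 selects "right"
```

In a deep-learning setting the counting table would be replaced by a network trained with a standard supervised loss, which is what lets the same recipe cover imitation learning, offline RL, and meta-RL.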

Diversity-based Trajectory and Goal Selection with Hindsight Experience Replay

1 code implementation · 17 Aug 2021 · Tianhong Dai, Hengyan Liu, Kai Arulkumaran, Guangyu Ren, Anil Anthony Bharath

We evaluate DTGSH on five challenging robotic manipulation tasks in simulated robot environments, where we show that our method can learn more quickly and reach higher performance than other state-of-the-art approaches on all tasks.

Tasks: Point Processes

A Pragmatic Look at Deep Imitation Learning

1 code implementation · 4 Aug 2021 · Kai Arulkumaran, Dan Ogawa Lillrank

The introduction of the generative adversarial imitation learning (GAIL) algorithm has spurred the development of scalable imitation learning approaches using deep neural networks.

Tasks: Behavioural Cloning, D4RL, +1

Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation

no code implementations · 18 Dec 2019 · Tianhong Dai, Kai Arulkumaran, Tamara Gerbert, Samyakh Tukra, Feryal Behbahani, Anil Anthony Bharath

Furthermore, even with an improved saliency method introduced in this work, we show that qualitative studies may not always correspond with quantitative measures, necessitating the combination of inspection tools in order to provide sufficient insights into the behaviour of trained agents.

Tasks: Reinforcement Learning (RL)

Memory-Efficient Episodic Control Reinforcement Learning with Dynamic Online k-means

1 code implementation · 21 Nov 2019 · Andrea Agostinelli, Kai Arulkumaran, Marta Sarrico, Pierre Richemond, Anil Anthony Bharath

Recently, neuro-inspired episodic control (EC) methods have been developed to overcome the data-inefficiency of standard deep reinforcement learning approaches.

Tasks: Atari Games, Clustering, +3
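
The memory-efficient part of this line of work rests on clustering stored experiences instead of keeping every one. A plain online k-means update, where each new embedding pulls its nearest centroid a fraction 1/n of the way towards itself, illustrates the basic mechanism (this is the textbook update, not necessarily the paper's exact variant):

```python
import math

def nearest(centroids, x):
    """Index of the centroid closest to point x (Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(centroids[i], x))

def online_kmeans_update(centroids, counts, x):
    """Standard online k-means step: move the nearest centroid a
    fraction 1/n of the way towards the new point x."""
    i = nearest(centroids, x)
    counts[i] += 1
    lr = 1.0 / counts[i]
    centroids[i] = [c + lr * (xj - c) for c, xj in zip(centroids[i], x)]
    return i

# Two centroids standing in for episodic-memory entries.
centroids = [[0.0, 0.0], [10.0, 10.0]]
counts = [1, 1]
for point in [[1.0, 1.0], [9.0, 9.0], [0.5, 0.0]]:
    online_kmeans_update(centroids, counts, point)
print(centroids)
```

Because each centroid summarises many experiences, memory stays bounded while value estimates can still be looked up by nearest-centroid matching.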

Sample-Efficient Reinforcement Learning with Maximum Entropy Mellowmax Episodic Control

1 code implementation · 21 Nov 2019 · Marta Sarrico, Kai Arulkumaran, Andrea Agostinelli, Pierre Richemond, Anil Anthony Bharath

Deep networks have enabled reinforcement learning to scale to more complex and challenging domains, but these methods typically require large quantities of training data.

Tasks: Atari Games, Reinforcement Learning, +1

AlphaStar: An Evolutionary Computation Perspective

1 code implementation · 5 Feb 2019 · Kai Arulkumaran, Antoine Cully, Julian Togelius

In January 2019, DeepMind revealed AlphaStar, the first artificial intelligence (AI) system to beat a professional player at the game of StarCraft II, representing a milestone in the progress of AI.

Tasks: Reinforcement Learning (RL), StarCraft II

Adaptive Neural Trees

1 code implementation · ICLR 2019 · Ryutaro Tanno, Kai Arulkumaran, Daniel C. Alexander, Antonio Criminisi, Aditya Nori

Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures.

Tasks: General Classification, Representation Learning

Variational Inference for Data-Efficient Model Learning in POMDPs

no code implementations · 23 May 2018 · Sebastian Tschiatschek, Kai Arulkumaran, Jan Stühmer, Katja Hofmann

In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes amortized structured variational inference.

Tasks: Decision Making, Decision Making Under Uncertainty, +2

Generative Adversarial Networks: An Overview

2 code implementations · 19 Oct 2017 · Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, Anil A. Bharath

Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data.

Tasks: General Classification, Image Generation, +2

On denoising autoencoders trained to minimise binary cross-entropy

no code implementations · 28 Aug 2017 · Antonia Creswell, Kai Arulkumaran, Anil A. Bharath

When training autoencoders on image data, a natural choice of loss function is binary cross-entropy (BCE), since pixel values may be normalised to take values in [0, 1] and the decoder model may be designed to generate samples that take values in (0, 1).

Tasks: Decoder, Denoising
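
The loss choice described in the excerpt can be written out directly: with pixel targets t in [0, 1] and decoder outputs p in (0, 1), BCE is averaged over pixels. A minimal sketch (the example values are invented):

```python
import math

def bce(targets, predictions):
    """Binary cross-entropy between pixel targets in [0, 1] and
    decoder outputs in (0, 1), averaged over pixels."""
    total = 0.0
    for t, p in zip(targets, predictions):
        total -= t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    return total / len(targets)

# A close reconstruction yields a small loss; a poor one is larger.
good = bce([0.0, 1.0, 1.0], [0.05, 0.95, 0.9])
bad = bce([0.0, 1.0, 1.0], [0.9, 0.1, 0.2])
print(good, bad)
```

Note that BCE is asymmetric around 0.5, which is part of what the paper examines for denoising autoencoders trained to minimise it.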

A Brief Survey of Deep Reinforcement Learning

no code implementations · 19 Aug 2017 · Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath

Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world.

Tasks: Reinforcement Learning (RL)

Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders

3 code implementations · 8 Nov 2016 · Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, Murray Shanahan

We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models.

Tasks: Clustering, Human Pose Forecasting
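
The Gaussian-mixture prior described in the excerpt can be sketched by ancestral sampling: first draw a mixture component (which plays the role of the cluster assignment), then draw the latent from that component's Gaussian. A toy illustration with made-up component parameters:

```python
import random

def sample_gm_prior(weights, means, stds, rng):
    """Draw z from a Gaussian-mixture prior: pick a component by its
    mixing weight, then sample from that component's Gaussian."""
    k = rng.choices(range(len(weights)), weights=weights)[0]
    z = [rng.gauss(m, s) for m, s in zip(means[k], stds[k])]
    return k, z

rng = random.Random(0)
weights = [0.5, 0.5]
means = [[-5.0, -5.0], [5.0, 5.0]]   # two well-separated clusters
stds = [[1.0, 1.0], [1.0, 1.0]]
samples = [sample_gm_prior(weights, means, stds, rng) for _ in range(200)]
```

In the GMVAE the component index is inferred rather than observed, so clustering emerges from fitting this structured prior to the data.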

Improving Sampling from Generative Autoencoders with Markov Chains

1 code implementation · 28 Oct 2016 · Antonia Creswell, Kai Arulkumaran, Anil Anthony Bharath

Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model.

Towards Deep Symbolic Reinforcement Learning

no code implementations · 18 Sep 2016 · Marta Garnelo, Kai Arulkumaran, Murray Shanahan

Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go.

Tasks: Game of Go, Reinforcement Learning, +2

Classifying Options for Deep Reinforcement Learning

no code implementations · 27 Apr 2016 · Kai Arulkumaran, Nat Dilokthanakul, Murray Shanahan, Anil Anthony Bharath

In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options.

Tasks: Hierarchical Reinforcement Learning, Reinforcement Learning, +1
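
The architecture described in the excerpt, multiple option heads plus a supervisory network that selects among them, can be sketched without any deep-learning machinery. The state features, head policies, and switching rule below are all hypothetical stand-ins for the learned networks:

```python
def act_with_options(state, option_heads, supervisor):
    """Pick an option via the supervisory policy, then let that
    option's head choose the primitive action."""
    option = supervisor(state)
    return option, option_heads[option](state)

# Two hypothetical option heads sharing the same state input,
# and a supervisor that switches between them on a state feature.
option_heads = {
    "navigate": lambda s: "move_forward" if s["dist"] > 1 else "stop",
    "grasp": lambda s: "close_gripper",
}
supervisor = lambda s: "navigate" if s["dist"] > 1 else "grasp"

print(act_with_options({"dist": 3}, option_heads, supervisor))
print(act_with_options({"dist": 0}, option_heads, supervisor))
```

In the paper both the heads and the supervisor are learned DQN components sharing a policy network, rather than hand-written rules as here.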
