Search Results for author: Daniel Kudenko

Found 12 papers, 3 papers with code

MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning

no code implementations, 17 Dec 2020, Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman

Over recent years, deep reinforcement learning has shown strong successes in complex single-agent tasks, and more recently this approach has also been applied to multi-agent domains.

Multi-agent Reinforcement Learning

Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data

no code implementations, 16 Dec 2020, Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman

In this paper, we demonstrate how data from videos of human running (e.g. taken from YouTube) can be used to shape the reward of a humanoid learning agent, speeding up learning and producing a better result.

A comparative evaluation of machine learning methods for robot navigation through human crowds

no code implementations, 16 Dec 2020, Anastasia Gaydashenko, Daniel Kudenko, Aleksei Shpilman

Robot navigation through crowds poses a difficult challenge to AI systems, since the methods must produce fast and efficient movement while at the same time not compromising safety.

Robot Navigation

Curriculum Learning with a Progression Function

no code implementations, 2 Aug 2020, Andrea Bassich, Francesco Foglino, Matteo Leonetti, Daniel Kudenko

Curriculum learning for reinforcement learning is an increasingly popular technique that involves training an agent on a defined sequence of intermediate tasks, called a curriculum, to increase the agent's performance and learning speed.

Curriculum Learning
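The curriculum idea described above can be sketched in a few lines. This is a minimal, hypothetical sketch, not the paper's method: `train_on_task` is a stand-in for an actual RL training loop, and the paper's progression function (which decides when to advance) is approximated here by a simple performance threshold.

```python
# Hypothetical sketch of curriculum learning for RL: train on a defined
# sequence of intermediate tasks, advancing once the agent's skill clears
# a per-task threshold. Both train_on_task and the skill scalar are
# illustrative stubs, not the paper's implementation.

def train_on_task(task, agent_skill):
    # Stub training step: skill improves proportionally to task difficulty.
    return agent_skill + task["difficulty"] * 0.5

def run_curriculum(curriculum, threshold=1.0):
    skill = 0.0
    for task in curriculum:  # the defined sequence of intermediate tasks
        # Progression criterion (stand-in for a progression function):
        # keep training until skill reaches the task's difficulty.
        while skill < task["difficulty"] * threshold:
            skill = train_on_task(task, skill)
    return skill

curriculum = [{"difficulty": d} for d in (1.0, 2.0, 4.0)]
print(run_curriculum(curriculum))  # 4.0
```

The point of the sketch is the ordering: easier tasks build the skill needed to make progress on harder ones, so the agent never trains on a task far beyond its current ability.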

Graph-based State Representation for Deep Reinforcement Learning

1 code implementation, 29 Apr 2020, Vikram Waradpande, Daniel Kudenko, Megha Khosla

Motivated by the recent success of node representations for several graph analytical tasks, we specifically investigate the capability of node representation learning methods to effectively encode the topology of the underlying MDP in deep RL.

Representation Learning

Uniform State Abstraction For Reinforcement Learning

no code implementations, 6 Apr 2020, John Burden, Daniel Kudenko

Potential-based reward shaping, combined with a potential function based on appropriately defined abstract knowledge, has been shown to significantly improve learning speed in reinforcement learning.

Continuous Control
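Potential-based reward shaping, which the abstract above refers to, can be sketched with the standard formulation F(s, s') = γΦ(s') − Φ(s). The potential function below is a hypothetical stand-in (negative distance to a goal), not the abstraction-derived potential the paper proposes.

```python
# Sketch of potential-based reward shaping: the shaping term
# F(s, s') = gamma * phi(s') - phi(s) is added to the environment reward.
# Adding F in this form is known to preserve the optimal policy of the
# original MDP. The potential phi here is purely illustrative.

GAMMA = 0.99

def phi(state):
    # Hypothetical potential: negative distance to a goal state at 10.
    goal = 10
    return -abs(goal - state)

def shaped_reward(reward, state, next_state):
    # F(s, s') = gamma * phi(s') - phi(s)
    return reward + GAMMA * phi(next_state) - phi(state)

# Moving toward the goal (5 -> 6) yields a positive shaping bonus.
print(shaped_reward(0.0, 5, 6))  # 1.04
```

Because the shaping term telescopes along any trajectory, it accelerates learning without changing which policy is optimal.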

Generating Stereotypes Automatically For Complex Categorical Features

no code implementations, 13 Nov 2019, Nourah ALRossais, Daniel Kudenko

In the context of creating stereotypes for recommender systems, we found that certain types of categorical variables pose particular challenges when simple clustering procedures are employed to create stereotypes.

Recommendation Systems

Resource Abstraction for Reinforcement Learning in Multiagent Congestion Problems

no code implementations, 13 Mar 2019, Kleanthis Malialis, Sam Devlin, Daniel Kudenko

These are learning time, scalability, and decentralised coordination, i.e. no communication between the learning agents.

Deep Multi-Agent Reinforcement Learning with Relevance Graphs

1 code implementation, 30 Nov 2018, Aleksandra Malysheva, Tegg Taekyong Sung, Chae-Bong Sohn, Daniel Kudenko, Aleksei Shpilman

Over recent years, deep reinforcement learning has shown strong successes in complex single-agent tasks, and more recently this approach has also been applied to multi-agent domains.

Multi-agent Reinforcement Learning
