Deep learning control of artificial avatars in group coordination tasks

11 Jun 2019 · Maria Lombardi, Davide Liuzza, Mario di Bernardo

In many joint-action scenarios, humans and robots have to coordinate their movements to accomplish a given shared task. Lifting an object together, sawing a wooden log, or transferring objects from one point to another are all examples where motor coordination between humans and machines is a crucial requirement. While dyadic coordination between a human and a robot has been studied in previous investigations, the multi-agent scenario in which a robot has to be integrated into a human group remains a less explored field of research. In this paper we discuss how to synthesise an artificial agent able to coordinate its motion within human ensembles. Driven by a control architecture based on deep reinforcement learning, such an artificial agent is able to move autonomously so as to synchronise its motion with that of the group while exhibiting human-like kinematic features. As a paradigmatic coordination task we take a group version of the so-called mirror game, which has been highlighted as a good benchmark in the human movement literature.

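To make the setting concrete, the sketch below shows one possible way such an agent could be set up: a small policy network trained with a plain policy-gradient (REINFORCE) update to track a simulated group in a one-dimensional, mirror-game-like task. This is not the authors' architecture; the sinusoidal group model, the observation vector, the network size and the reward (synchronisation error plus a small velocity penalty to encourage smooth motion) are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): a policy-gradient
# agent learning to synchronise its 1-D position with a simulated human group.
import numpy as np
import torch
import torch.nn as nn

N_HUMANS, HORIZON, DT = 3, 100, 0.1

def group_positions(t, phases):
    """Toy stand-in for the human players: sinusoidal 1-D trajectories."""
    return np.sin(0.5 * t + phases)

class Policy(nn.Module):
    """Maps (own position, group positions) to the mean of a Gaussian velocity command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + N_HUMANS, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )
        self.log_std = nn.Parameter(torch.zeros(1))

    def forward(self, obs):
        return self.net(obs), self.log_std.exp()

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(200):
    phases = np.random.uniform(0, 2 * np.pi, N_HUMANS)
    x, log_probs, rewards = 0.0, [], []
    for k in range(HORIZON):
        g = group_positions(k * DT, phases)
        obs = torch.tensor(np.concatenate(([x], g)), dtype=torch.float32)
        mean, std = policy(obs)
        dist = torch.distributions.Normal(mean, std)
        v = dist.sample()
        log_probs.append(dist.log_prob(v).sum())
        x = x + float(v) * DT                       # integrate the velocity command
        # Reward: stay close to the group mean, penalise large velocities.
        rewards.append(-(x - g.mean()) ** 2 - 0.01 * float(v) ** 2)
    # Reward-to-go returns, normalised, then a REINFORCE update.
    returns = torch.tensor(np.cumsum(rewards[::-1])[::-1].copy(), dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy version the reward trades off synchronisation with the group mean against control effort; richer observations (e.g. group velocities) or an explicit kinematic-similarity term would be natural extensions in the same spirit.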