Search Results for author: Dominique Duhaut

Found 3 papers, 1 paper with code

Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning

no code implementations • 11 Feb 2022 • Sao Mai Nguyen, Nicolas Duminy, Alexandre Manoury, Dominique Duhaut, Cédric Buche

Multi-task learning by robots poses the challenge of domain knowledge: the complexity of the tasks, the complexity of the actions required, and the relationships between tasks for transfer learning.

Hierarchical Reinforcement Learning • Imitation Learning +1
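
The entry above centres on intrinsic motivation driving an automatic curriculum over tasks of increasing complexity. As a rough illustration of that idea (not the paper's exact algorithm), the sketch below picks the task whose competence has changed the most recently, a common learning-progress formulation of intrinsic motivation; all names and parameters (CurriculumSampler, window, epsilon) are assumptions for illustration.

import random
from collections import defaultdict, deque

class CurriculumSampler:
    def __init__(self, tasks, window=20, epsilon=0.2):
        self.tasks = list(tasks)
        self.epsilon = epsilon                      # random exploration rate
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, task, competence):
        """Store the competence (e.g. negative task error) reached on a trial."""
        self.history[task].append(competence)

    def learning_progress(self, task):
        """Difference between recent and older competence on this task."""
        h = list(self.history[task])
        if len(h) < 4:
            return float("inf")                     # unexplored tasks stay interesting
        half = len(h) // 2
        return abs(sum(h[half:]) / (len(h) - half) - sum(h[:half]) / half)

    def choose_task(self):
        """Sample the task whose competence is changing fastest (intrinsic reward)."""
        if random.random() < self.epsilon:
            return random.choice(self.tasks)
        return max(self.tasks, key=self.learning_progress)

Usage would be a loop of choose_task(), attempting the task, then record(task, measured_competence), so that easy-but-mastered tasks and currently unreachable ones both fade out of the curriculum.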

Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer Learning to Discover Task Hierarchy

1 code implementation • 19 Feb 2021 • Nicolas Duminy, Sao Mai Nguyen, Junshuai Zhu, Dominique Duhaut, Jerome Kerdreux

We hypothesise that the most complex tasks can be learned more easily by transferring knowledge from simpler tasks, and faster by adapting the complexity of the actions to the task.

Active Learning • Hierarchical Reinforcement Learning +2
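
The hypothesis in the entry above, that complex tasks are learned faster by reusing simpler ones, can be pictured as chaining already-learned subtask policies instead of searching long action sequences from scratch. The sketch below only illustrates that composition idea under assumed names (Skill, SkillLibrary, plan); it is not the paper's algorithm.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Sequence

@dataclass
class Skill:
    # a learned mapping from a (sub)goal to a short motor-action sequence
    name: str
    policy: Callable[[str], Sequence[str]]

@dataclass
class SkillLibrary:
    skills: Dict[str, Skill] = field(default_factory=dict)

    def add(self, skill: Skill) -> None:
        self.skills[skill.name] = skill

    def plan(self, subgoals: List[str]) -> List[str]:
        # Compose a complex task from simpler, already-learned subtasks by
        # concatenating their action sequences, rather than re-exploring the
        # full space of long actions for the complex task.
        actions: List[str] = []
        for g in subgoals:
            if g not in self.skills:
                raise KeyError(f"subtask '{g}' has not been learned yet")
            actions.extend(self.skills[g].policy(g))
        return actions

# example: reuse "reach" and "grasp" to sketch a more complex "pick" task
lib = SkillLibrary()
lib.add(Skill("reach", lambda g: ["move_arm_towards_object"]))
lib.add(Skill("grasp", lambda g: ["close_gripper"]))
print(lib.plan(["reach", "grasp"]))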

Learning a Set of Interrelated Tasks by Using Sequences of Motor Policies for a Strategic Intrinsically Motivated Learner

no code implementations • 11 Oct 2018 • Nicolas Duminy, Sao Mai Nguyen, Dominique Duhaut

We propose an active learning architecture for robots, capable of organizing its learning process to achieve a field of complex tasks by learning sequences of motor policies, called Intrinsically Motivated Procedure Babbling (IM-PB).

Active Learning
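
IM-PB, as summarised above, is a strategic learner: it decides how to explore, trying either a single motor policy or a sequence of policies (a "procedure"), guided by intrinsic motivation. The fragment below is a loose sketch of that strategy choice with placeholder competence measures; none of the function names come from the paper.

import random

def explore_policy(task):
    # placeholder: evaluate one parameterised motor policy on the task
    return random.random()

def explore_procedure(task, max_len=3):
    # placeholder: evaluate a sequence of up to max_len motor policies
    return random.random()

def strategic_trial(task, progress):
    # choose the exploration strategy whose recent progress estimate is higher,
    # then update that estimate with a simple moving average (an assumption)
    strategy = max(progress, key=progress.get)
    competence = {"policy": explore_policy,
                  "procedure": explore_procedure}[strategy](task)
    progress[strategy] = 0.9 * progress[strategy] + 0.1 * competence
    return strategy, competence

progress = {"policy": 0.5, "procedure": 0.5}
for _ in range(10):
    strategic_trial("stack_blocks", progress)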
