Search Results for author: Stephane Doncieux

Found 17 papers, 7 papers with code

Integrating LLMs and Decision Transformers for Language Grounded Generative Quality-Diversity

1 code implementation · 25 Aug 2023 · Achkan Salehi, Stephane Doncieux

Quality-Diversity is a branch of stochastic optimization that is often applied to problems from the Reinforcement Learning and control domains in order to construct repertoires of well-performing policies/skills that exhibit diversity with respect to a behavior space.
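For context, one widely used Quality-Diversity algorithm is MAP-Elites, which builds exactly such a repertoire by keeping the best-performing solution found in each cell of a discretized behavior space. A minimal sketch (illustrative only; the function names and interfaces are assumptions, not this paper's method):

```python
import random

def map_elites(evaluate, mutate, random_solution, n_iters=1000):
    """Minimal MAP-Elites-style loop: keep the best solution per behavior cell.

    `evaluate(sol)` returns (fitness, behavior_cell); `mutate` and
    `random_solution` generate candidates. All names are illustrative.
    """
    archive = {}  # behavior cell -> (fitness, solution)
    for _ in range(n_iters):
        if archive:
            # Mutate a randomly chosen elite from the archive.
            _, parent = random.choice(list(archive.values()))
            candidate = mutate(parent)
        else:
            candidate = random_solution()
        fitness, cell = evaluate(candidate)
        # Insert if the cell is empty or the candidate improves on its elite.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, candidate)
    return archive
```

The returned archive is the repertoire: one well-performing solution per region of behavior space, giving both quality and diversity.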

Language Modelling · Large Language Model · +2

Adaptive Asynchronous Control Using Meta-learned Neural Ordinary Differential Equations

no code implementations · 25 Jul 2022 · Achkan Salehi, Steffen Rühl, Stephane Doncieux

Model-based Reinforcement Learning and Control have demonstrated great potential in various sequential decision making problem domains, including in robotics settings.

Decision Making · Meta-Learning · +3

Geodesics, Non-linearities and the Archive of Novelty Search

no code implementations · 6 May 2022 · Achkan Salehi, Alexandre Coninx, Stephane Doncieux

An argument often encountered in the literature is that the archive prevents exploration from backtracking or cycling, i.e., from revisiting previously encountered areas of the behavior space.
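As background (an illustrative sketch, not this paper's formalism), archive-based novelty search typically scores a candidate behavior by its average distance to its k nearest neighbours among previously archived behaviors, so regions already visited score low:

```python
def novelty(b, others, k=3):
    """Average Euclidean distance from behavior b to its k nearest
    neighbours in `others` (the archive and/or current population).

    `b` and each element of `others` are tuples of floats; names are
    illustrative only.
    """
    dists = sorted(
        sum((x - y) ** 2 for x, y in zip(b, o)) ** 0.5 for o in others
    )
    return sum(dists[:k]) / min(k, len(dists))
```

A behavior close to archived ones gets a low novelty score, which is the mechanism usually credited with discouraging revisits.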

Towards QD-suite: developing a set of benchmarks for Quality-Diversity algorithms

no code implementations · 6 May 2022 · Achkan Salehi, Stephane Doncieux

Inspired by recent works on Reinforcement Learning benchmarks, we argue that the identification of challenges faced by QD methods and the development of targeted, challenging, scalable but affordable benchmarks is an important step.

Stochastic Optimization

Discovering and Exploiting Sparse Rewards in a Learned Behavior Space

1 code implementation · 2 Nov 2021 · Giuseppe Paolo, Miranda Coninx, Alban Laflaquière, Stephane Doncieux

Learning optimal policies in sparse rewards settings is difficult as the learning agent has little to no feedback on the quality of its actions.

Efficient Exploration

Few-shot Quality-Diversity Optimization

1 code implementation · 14 Sep 2021 · Achkan Salehi, Alexandre Coninx, Stephane Doncieux

Experiments carried out in both sparse and dense reward settings, using robotic manipulation and navigation benchmarks, show that it considerably reduces the number of generations required for QD optimization in these environments.

Meta-Learning · reinforcement-learning · +1

BR-NS: an Archive-less Approach to Novelty Search

1 code implementation · 8 Apr 2021 · Achkan Salehi, Alexandre Coninx, Stephane Doncieux

In this paper, we discuss an alternative approach to novelty estimation, dubbed Behavior Recognition based Novelty Search (BR-NS), which does not require an archive, makes no assumption about the metrics that can be defined in the behavior space, and does not rely on nearest-neighbour search.

Sparse Reward Exploration via Novelty Search and Emitters

1 code implementation · 5 Feb 2021 · Giuseppe Paolo, Alexandre Coninx, Stephane Doncieux, Alban Laflaquière

Contrary to existing emitters-based approaches, SERENE separates the search space exploration and reward exploitation into two alternating processes.
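A heavily hedged sketch of that alternation idea (illustrative only; the scheduling scheme, function names, and interfaces are assumptions, not SERENE's actual design):

```python
def alternate(explore_step, exploit_step, budget, chunk=10):
    """Alternate between exploration and reward exploitation in chunks.

    `explore_step()` returns any newly discovered rewarding solutions
    (possibly an empty list); `exploit_step(sol)` refines one of them.
    Each call to either counts against the evaluation budget.
    """
    rewarding, spent, i = [], 0, 0
    while spent < budget:
        # Exploration phase: search the space for rewarding areas.
        for _ in range(chunk):
            rewarding.extend(explore_step())
            spent += 1
        # Exploitation phase: refine rewarding solutions found so far.
        while i < len(rewarding) and spent < budget:
            exploit_step(rewarding[i])
            i += 1
            spent += 1
    return rewarding
```

The point of the separation is that exploitation never consumes the whole budget: exploration keeps running in its own phase even after rewards are found.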

Efficient Exploration

Novelty Search makes Evolvability Inevitable

2 code implementations · 13 May 2020 · Stephane Doncieux, Giuseppe Paolo, Alban Laflaquière, Alexandre Coninx

Evolvability is thus a natural byproduct of the search in this context.

Unsupervised Learning and Exploration of Reachable Outcome Space

1 code implementation · 12 Sep 2019 · Giuseppe Paolo, Alban Laflaquière, Alexandre Coninx, Stephane Doncieux

Results show that TAXONS can find a diverse set of controllers covering a good part of the ground-truth outcome space, despite having no information about that space.

From exploration to control: learning object manipulation skills through novelty search and local adaptation

no code implementations · 3 Jan 2019 · Seungsu Kim, Alexandre Coninx, Stephane Doncieux

The approach has been validated in two different experiments on the Baxter robot: a ball-launching task and a joystick-manipulation task.
