Search Results for author: Alban Laflaquière

Found 19 papers, 6 papers with code

Discovering and Exploiting Sparse Rewards in a Learned Behavior Space

1 code implementation · 2 Nov 2021 · Giuseppe Paolo, Miranda Coninx, Alban Laflaquière, Stephane Doncieux

Learning optimal policies in sparse-reward settings is difficult, as the learning agent has little to no feedback on the quality of its actions.

Efficient Exploration

Sparse Reward Exploration via Novelty Search and Emitters

1 code implementation · 5 Feb 2021 · Giuseppe Paolo, Alexandre Coninx, Stephane Doncieux, Alban Laflaquière

Contrary to existing emitters-based approaches, SERENE separates the search space exploration and reward exploitation into two alternating processes.

Efficient Exploration
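
The SERENE abstract above only states that exploration of the search space and exploitation of discovered rewards alternate. A minimal toy sketch of such an alternation is given below; it is not the authors' implementation, and the evaluation function, novelty measure, and emitter-style local search are illustrative assumptions.

```python
import random

def evaluate(policy):
    """Toy evaluation: returns (behavior descriptor, sparse reward)."""
    descriptor = (sum(policy) % 10, len(policy) % 10)
    reward = 1.0 if descriptor == (3, 4) else 0.0  # reward only in one region
    return descriptor, reward

def novelty(descriptor, archive, k=5):
    """Novelty = mean distance to the k nearest descriptors in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(abs(descriptor[0] - a[0]) + abs(descriptor[1] - a[1]) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

archive, reward_areas = [], []
population = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]

for generation in range(50):
    # Exploration phase: novelty search over the behavior space.
    scored = []
    for policy in population:
        desc, rew = evaluate(policy)
        scored.append((novelty(desc, archive), policy))
        archive.append(desc)
        if rew > 0:
            reward_areas.append(policy)  # remember where reward was found
    scored.sort(reverse=True, key=lambda x: x[0])
    parents = [p for _, p in scored[:10]]
    population = [[g if random.random() > 0.2 else random.randint(0, 9) for g in p]
                  for p in parents for _ in range(2)]

    # Exploitation phase: emitter-style local search around rewarding policies.
    for base in reward_areas[-3:]:
        candidates = [[g if random.random() > 0.1 else random.randint(0, 9) for g in base]
                      for _ in range(5)]
        population.append(max(candidates, key=lambda c: evaluate(c)[1]))
```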

Emergence of Spatial Coordinates via Exploration

no code implementations · 29 Oct 2020 · Alban Laflaquière

Spatial knowledge is a fundamental building block for the development of advanced perceptive and cognitive abilities.

Novelty Search makes Evolvability Inevitable

2 code implementations · 13 May 2020 · Stephane Doncieux, Giuseppe Paolo, Alban Laflaquière, Alexandre Coninx

Evolvability is thus a natural byproduct of the search in this context.

Unsupervised Learning and Exploration of Reachable Outcome Space

1 code implementation · 12 Sep 2019 · Giuseppe Paolo, Alban Laflaquière, Alexandre Coninx, Stephane Doncieux

Results show that TAXONS can find a diverse set of controllers, covering a good part of the ground-truth outcome space, while having no information about that space.

Unsupervised Emergence of Egocentric Spatial Structure from Sensorimotor Prediction

1 code implementation · NeurIPS 2019 · Alban Laflaquière, Michael Garcia Ortiz

Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood.

Position

Self-supervised Body Image Acquisition Using a Deep Neural Network for Sensorimotor Prediction

1 code implementation · 3 Jun 2019 · Alban Laflaquière, Verena V. Hafner

This work investigates how a naive agent can acquire its own body image in a self-supervised way, based on the predictability of its sensorimotor experience.

Identification of Invariant Sensorimotor Structures as a Prerequisite for the Discovery of Objects

no code implementations · 11 Oct 2018 · Nicolas Le Hir, Olivier Sigaud, Alban Laflaquière

Our model is based on processing the unsupervised interaction of an artificial agent with its environment.

Clustering

Grounding Perception: A Developmental Approach to Sensorimotor Contingencies

no code implementations · 3 Oct 2018 · Alban Laflaquière, Nikolas Hemion, Michaël Garcia Ortiz, Jean-Christophe Baillie

Sensorimotor contingency theory offers a promising account of the nature of perception, a topic rarely addressed in the robotics community.

A Non-linear Approach to Space Dimension Perception by a Naive Agent

no code implementations · 3 Oct 2018 · Alban Laflaquière, Sylvain Argentieri, Olivia Breysse, Stéphane Genet, Bruno Gas

A new approach is to consider perception as an experimentally acquired ability that is learned exclusively through the analysis of the agent's sensorimotor flow.

Learning an internal representation of the end-effector configuration space

no code implementations · 3 Oct 2018 · Alban Laflaquière, Alexander V. Terekhov, Bruno Gas, J. Kevin O'Regan

Current machine learning techniques proposed to automatically discover a robot's kinematics usually rely on a priori information about the robot's structure, sensor properties, or end-effector position.

BIG-bench Machine Learning · Position

Learning agent's spatial configuration from sensorimotor invariants

no code implementations · 3 Oct 2018 · Alban Laflaquière, J. Kevin O'Regan, Sylvain Argentieri, Bruno Gas, Alexander V. Terekhov

We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment.

Grounding the Experience of a Visual Field through Sensorimotor Contingencies

no code implementations · 3 Oct 2018 · Alban Laflaquière

The sensorimotor contingencies theory proposes to ground the development of those perceptive abilities in the way the agent can actively transform its sensory inputs.

Unsupervised Emergence of Spatial Structure from Sensorimotor Prediction

no code implementations · 2 Oct 2018 · Alban Laflaquière, Michael Garcia Ortiz

Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood.

Discovering space - Grounding spatial topology and metric regularity in a naive agent's sensorimotor experience

no code implementations · 7 Jun 2018 · Alban Laflaquière, J. Kevin O'Regan, Bruno Gas, Alexander Terekhov

We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities that correspond to so-called compensable sensory experiences: experiences that can be generated either by the agent or by its environment.

Learning Representations of Spatial Displacement through Sensorimotor Prediction

no code implementations · 16 May 2018 · Michael Garcia Ortiz, Alban Laflaquière

Robots act in their environment through sequences of continuous motor commands.

A Sensorimotor Perspective on Grounding the Semantic of Simple Visual Features

no code implementations · 11 May 2018 · Alban Laflaquière

Without any a priori knowledge about the way its sensorimotor information is encoded, we show how an agent can characterize the uniformity and edge-ness of the visual features it interacts with.

Grounding object perception in a naive agent's sensorimotor experience

no code implementations · 26 Sep 2016 · Alban Laflaquière, Nikolas Hemion

Artificial object perception usually relies on a priori defined models and feature extraction algorithms.

Object

Autonomous Grounding of Visual Field Experience through Sensorimotor Prediction

no code implementations · 3 Aug 2016 · Alban Laflaquière

In a developmental framework, autonomous robots need to explore the world and learn how to interact with it.
