Search Results for author: Emmanuel Dellandréa

Found 11 papers, 4 papers with code

Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent

no code implementations • 15 Feb 2024 • Quentin Gallouédec, Edward Beeching, Clément Romac, Emmanuel Dellandréa

The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research.

Decision Making • Reinforcement Learning (RL)

Look Beyond Bias with Entropic Adversarial Data Augmentation

1 code implementation • 10 Jan 2023 • Thomas Duboudin, Emmanuel Dellandréa, Corentin Abgrall, Gilles Hénaff, Liming Chen

Deep neural networks do not discriminate between spurious and causal patterns, and will only learn the most predictive ones while ignoring the others.

Data Augmentation
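
The one-line summary hints at the mechanism: perturb training images so the network's predictions become maximally uncertain, forcing it to look past shortcut features. A minimal PyTorch sketch of that general idea follows; the single-step update and the `epsilon` step size are assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def entropic_augment(model, x, epsilon=0.03):
    """Perturb inputs toward higher prediction entropy (illustrative sketch)."""
    x_adv = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x_adv), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    # Step along the gradient sign to *increase* entropy, i.e. toward
    # inputs where the shortcut features are no longer decisive.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```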

Learning Less Generalizable Patterns with an Asymmetrically Trained Double Classifier for Better Test-Time Adaptation

no code implementations • 17 Oct 2022 • Thomas Duboudin, Emmanuel Dellandréa, Corentin Abgrall, Gilles Hénaff, Liming Chen

Indeed, test-time adaptation methods usually have to rely on a limited representation because of the shortcut learning phenomenon: only a subset of the available predictive patterns is learned with standard training.

Test-time Adaptation
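
For context on the setting: a common test-time adaptation baseline minimizes prediction entropy on each incoming test batch, updating only the BatchNorm affine parameters (TENT-style). The sketch below illustrates that baseline setting, not the paper's asymmetric double-classifier method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tent_step(model, x, lr=1e-3):
    """One entropy-minimization update on the BatchNorm affine parameters."""
    model.train()  # use current-batch statistics in BatchNorm
    params = [p for m in model.modules() if isinstance(m, nn.BatchNorm2d)
              for p in (m.weight, m.bias) if p is not None]
    opt = torch.optim.SGD(params, lr=lr)
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return model
```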

Cell-Free Latent Go-Explore

1 code implementation • 31 Aug 2022 • Quentin Gallouédec, Emmanuel Dellandréa

In this paper, we introduce Latent Go-Explore (LGE), a simple and general approach based on the Go-Explore paradigm for exploration in reinforcement learning (RL).

Montezuma's Revenge • Reinforcement Learning (RL)
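
For orientation, the Go-Explore paradigm that LGE builds on keeps an archive of visited states, returns to a promising one, and explores from there. The schematic below uses hand-crafted cells and a hypothetical `env.restore` snapshot helper; LGE's contribution is precisely to remove the cells in favor of a learned latent representation, and to return via a goal-conditioned policy rather than a simulator restore.

```python
def go_explore(env, encode, n_iters=1000, rollout_len=50):
    """Schematic Go-Explore loop; encode(state) must return a hashable cell id."""
    archive = {}  # cell -> (state snapshot, visit count)
    state = env.reset()
    archive[encode(state)] = (state, 1)
    for _ in range(n_iters):
        # Return to a rarely visited cell (a stand-in for sampling goals
        # from sparse regions of a learned latent space, as LGE does).
        cell = min(archive, key=lambda c: archive[c][1])
        snapshot, _ = archive[cell]
        state = env.restore(snapshot)  # hypothetical snapshot-restore helper
        for _ in range(rollout_len):
            state, _, done, _ = env.step(env.action_space.sample())
            c = encode(state)
            prev_state, visits = archive.get(c, (state, 0))
            archive[c] = (prev_state, visits + 1)
            if done:
                state = env.reset()
                break
    return archive
```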

Multi-Goal Reinforcement Learning environments for simulated Franka Emika Panda robot

1 code implementation • 25 Jun 2021 • Quentin Gallouédec, Nicolas Cazin, Emmanuel Dellandréa, Liming Chen

This technical report presents panda-gym, a set of Reinforcement Learning (RL) environments for the Franka Emika Panda robot, integrated with OpenAI Gym.

Multi-Goal Reinforcement Learning • OpenAI Gym • +2
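
A minimal usage sketch, assuming the environment ids of the original Gym-based release (later panda-gym releases target Gymnasium and use `-v3` ids):

```python
import gym
import panda_gym  # noqa: F401  (registers the Panda environments with Gym)

env = gym.make("PandaReach-v1")
obs = env.reset()
# Multi-goal observations follow the goal-conditioned Gym convention:
assert {"observation", "achieved_goal", "desired_goal"} <= set(obs)
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```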

Scoring Graspability based on Grasp Regression for Better Grasp Prediction

no code implementations • 3 Feb 2020 • Amaury Depierre, Emmanuel Dellandréa, Liming Chen

Therefore, in this paper, we extend a state-of-the-art neural network with a scorer that evaluates the graspability of a given position, and introduce a novel loss function which correlates regression of grasp parameters with graspability score.

Regression
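
One plausible reading of "correlates regression with graspability" is to gate the parameter-regression term by the graspability target, alongside a score-classification term. The loss below is an illustrative assumption, not the paper's exact definition.

```python
import torch.nn.functional as F

def grasp_loss(pred_params, pred_score, target_params, target_graspable, beta=1.0):
    """pred_params: (B, P) grasp parameters; pred_score: (B,) graspability
    logits; target_graspable: (B,) float labels in {0, 1}."""
    score_loss = F.binary_cross_entropy_with_logits(pred_score, target_graspable)
    per_sample = F.smooth_l1_loss(pred_params, target_params,
                                  reduction="none").mean(dim=1)
    # Only positions labeled graspable contribute to parameter regression,
    # tying the regression term to the graspability target.
    reg_loss = (per_sample * target_graspable).sum() / target_graspable.sum().clamp_min(1.0)
    return score_loss + beta * reg_loss
```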

Deep Multicameral Decoding for Localizing Unoccluded Object Instances from a Single RGB Image

no code implementations • 18 Jun 2019 • Matthieu Grard, Emmanuel Dellandréa, Liming Chen

We thus also introduce Mikado, an extensible synthetic dataset of dense homogeneous object layouts that contains more instances and inter-instance occlusions per image than these public datasets.

Boundary Detection • Instance Segmentation • +1

Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning

no code implementations • 26 Sep 2018 • Maxime Petit, Amaury Depierre, Xiaofang Wang, Emmanuel Dellandréa, Liming Chen

In simulation, we demonstrate the benefit of transfer learning based on visual similarity, as opposed to amnesic learning (i.e., learning from scratch every time).

Transfer Learning
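
A sketch of the transfer idea: instead of starting the black-box optimizer amnesically, seed it with the trial history of the most visually similar previously seen object. The archive format and the cosine-similarity choice are illustrative assumptions.

```python
import numpy as np

def warm_start_trials(new_feat, archives, k=5):
    """archives: list of (visual_feature, [(params, score), ...]) per past object."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(new_feat, feat) for feat, _ in archives]
    _, trials = archives[int(np.argmax(sims))]
    # Seed the Bayesian optimizer with the k best trials of the most
    # visually similar object instead of starting from scratch.
    return sorted(trials, key=lambda t: t[1], reverse=True)[:k]
```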

Jacquard: A Large Scale Dataset for Robotic Grasp Detection

1 code implementation • 30 Mar 2018 • Amaury Depierre, Emmanuel Dellandréa, Liming Chen

Jacquard is built on a subset of ShapeNet, a large dataset of CAD models, and contains both RGB-D images and annotations of successful grasping positions based on grasp attempts performed in a simulated environment.

Robotic Grasping
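
A small loader sketch, assuming the semicolon-separated `x;y;theta;opening;jaw_size` grasp-annotation lines of the released dataset files (one successful grasp per line, angle in degrees):

```python
def load_grasps(path):
    """Parse one Jacquard annotation file into a list of grasp dicts."""
    grasps = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            x, y, theta, opening, size = (float(v) for v in line.strip().split(";"))
            grasps.append({"center": (x, y), "angle_deg": theta,
                           "opening": opening, "jaw_size": size})
    return grasps
```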

Object segmentation in depth maps with one user click and a synthetically trained fully convolutional network

no code implementations • 4 Jan 2018 • Matthieu Grard, Romain Brégier, Florian Sella, Emmanuel Dellandréa, Liming Chen

We thus propose a step towards a practical interactive application for generating an object-oriented robotic grasp, requiring as inputs only one depth map of the scene and one user click on the next object to extract.

Instance Segmentation • Object • +4
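
One simple way to realize "one depth map plus one user click" is to rasterize the click as an extra input channel for a fully convolutional network that predicts the clicked object's mask. The channel encoding and the `fcn` module below are illustrative assumptions, not necessarily the authors' architecture.

```python
import torch
import torch.nn as nn

def segment_clicked_object(fcn: nn.Module, depth: torch.Tensor, click_xy):
    """depth: (H, W) tensor; click_xy: (col, row) pixel of the user click."""
    h, w = depth.shape[-2:]
    click_map = torch.zeros(1, 1, h, w)
    click_map[0, 0, click_xy[1], click_xy[0]] = 1.0  # one-hot click channel
    x = torch.cat([depth.reshape(1, 1, h, w), click_map], dim=1)
    with torch.no_grad():
        logits = fcn(x)  # assumed to output (1, 1, H, W) mask logits
    return logits.sigmoid() > 0.5
```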
