Search Results for author: Oier Mees

Found 15 papers, 12 papers with code

Vision-Language Models Provide Promptable Representations for Reinforcement Learning

no code implementations • 5 Feb 2024 • William Chen, Oier Mees, Aviral Kumar, Sergey Levine

We find that our policies trained on embeddings extracted from general-purpose VLMs outperform equivalent policies trained on generic, non-promptable image embeddings.

Instruction Following, reinforcement-learning +3

Audio Visual Language Maps for Robot Navigation

no code implementations • 13 Mar 2023 • Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments.

Navigate, Robot Navigation

Visual Language Maps for Robot Navigation

1 code implementation • 11 Oct 2022 • Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions).

3D Reconstruction, Image Captioning +1
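The snippet above describes matching a navigation query to visual observations with a pretrained vision-language model. The matching step can be sketched as cosine similarity between embeddings; the hand-made 3-d vectors and the `ground_query` helper below are illustrative placeholders, not the paper's pipeline (real embeddings would come from a CLIP-style encoder):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ground_query(text_embedding: np.ndarray,
                 image_embeddings: dict[str, np.ndarray]) -> str:
    """Return the landmark whose image embedding best matches the query.

    `image_embeddings` maps landmark names to embeddings produced by the
    same (hypothetical) vision-language encoder as `text_embedding`.
    """
    return max(image_embeddings,
               key=lambda name: cosine_similarity(text_embedding,
                                                  image_embeddings[name]))

# Toy example with hand-made 3-d "embeddings" (real ones come from a VLM).
query = np.array([1.0, 0.1, 0.0])          # stands in for "the red chair"
landmarks = {
    "chair": np.array([0.9, 0.2, 0.1]),
    "table": np.array([0.0, 1.0, 0.3]),
    "sofa":  np.array([0.2, 0.1, 1.0]),
}
print(ground_query(query, landmarks))  # prints "chair"
```

Because both modalities live in a shared embedding space, grounding reduces to a nearest-neighbor lookup, which is why off-the-shelf pretrained encoders suffice.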

Grounding Language with Visual Affordances over Unstructured Data

1 code implementation • 4 Oct 2022 • Oier Mees, Jessica Borja-Diaz, Wolfram Burgard

Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills.

Benchmark metrics: Avg. sequence length, Success Rate (5-task horizon)

Latent Plans for Task-Agnostic Offline Reinforcement Learning

1 code implementation • 19 Sep 2022 • Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

Concretely, we combine a low-level policy that learns latent skills via imitation learning and a high-level policy learned from offline reinforcement learning for skill-chaining the latent behavior priors.

Imitation Learning, reinforcement-learning +1
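The two-level decomposition summarized above, a high-level policy selecting latent skills and a low-level policy decoding them into actions, can be sketched with toy linear policies. The shapes, the `tanh` squashing, and the re-planning schedule below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class LowLevelPolicy:
    """Decodes a latent skill z plus the current state into an action (toy linear map)."""
    def __init__(self, state_dim: int, latent_dim: int, action_dim: int):
        self.W = rng.normal(size=(action_dim, state_dim + latent_dim))

    def act(self, state: np.ndarray, z: np.ndarray) -> np.ndarray:
        return self.W @ np.concatenate([state, z])

class HighLevelPolicy:
    """Maps (state, goal) to a latent skill z (toy linear map)."""
    def __init__(self, state_dim: int, goal_dim: int, latent_dim: int):
        self.V = rng.normal(size=(latent_dim, state_dim + goal_dim))

    def plan(self, state: np.ndarray, goal: np.ndarray) -> np.ndarray:
        return np.tanh(self.V @ np.concatenate([state, goal]))

# Rollout: the high-level policy repeatedly picks a latent behavior prior,
# which the low-level policy executes for a short horizon -- skill chaining.
state_dim, goal_dim, latent_dim, action_dim = 4, 4, 2, 4
high = HighLevelPolicy(state_dim, goal_dim, latent_dim)
low = LowLevelPolicy(state_dim, latent_dim, action_dim)

state, goal = np.zeros(state_dim), np.ones(goal_dim)
for _ in range(3):              # three chained skill segments
    z = high.plan(state, goal)  # select a latent skill
    for _ in range(5):          # execute it for a few steps
        action = low.act(state, z)
        state = state + 0.01 * action  # toy dynamics
```

The point of the split is that only the compact high-level policy has to be trained with offline RL; the low-level decoder can be learned by plain imitation.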

What Matters in Language Conditioned Robotic Imitation Learning over Unstructured Data

2 code implementations • 13 Apr 2022 • Oier Mees, Lukas Hermann, Wolfram Burgard

We have open-sourced our implementation to facilitate future research on learning to perform long sequences of complex manipulation skills specified in natural language.

Imitation Learning, Robot Manipulation

Affordance Learning from Play for Sample-Efficient Policy Learning

1 code implementation • 1 Mar 2022 • Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

Robots operating in human-centered environments should have the ability to understand how objects function: what can be done with each object, where this interaction may occur, and how the object is used to achieve a goal.

Motion Planning, Object +1

CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

1 code implementation • 6 Dec 2021 • Oier Mees, Lukas Hermann, Erick Rosete-Beas, Wolfram Burgard

We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark.

Continuous Control, Imitation Learning +3

Composing Pick-and-Place Tasks By Grounding Language

2 code implementations • 16 Feb 2021 • Oier Mees, Wolfram Burgard

Controlling robots to perform tasks via natural language is one of the most challenging topics in human-robot interaction.

Natural Language Visual Grounding, Robotic Grasping +1

Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction

no code implementations • 2 Aug 2020 • Iman Nematollahi, Oier Mees, Lukas Hermann, Wolfram Burgard

A key challenge for an agent learning to interact with the world is to reason about physical properties of objects and to foresee their dynamics under the effect of applied forces.

Object, Optical Flow Estimation +1

Learning Object Placements For Relational Instructions by Hallucinating Scene Representations

2 code implementations • 23 Jan 2020 • Oier Mees, Alp Emek, Johan Vertens, Wolfram Burgard

One particular requirement for such robots is that they are able to understand spatial relations and can place objects in accordance with the spatial relations expressed by their user.

Auxiliary Learning, Robotic Grasping +2

Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video

1 code implementation • 21 Oct 2019 • Oier Mees, Markus Merklinger, Gabriel Kalweit, Wolfram Burgard

Our method learns a general skill embedding that is independent of the task context by using an adversarial loss.

Continuous Control, Metric Learning +4
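The snippet mentions learning a skill embedding with an adversarial loss on top of metric learning. The metric-learning half of such an objective can be sketched as a standard triplet margin loss; the adversarial task discriminator is omitted here, and the 2-d "embeddings" are hand-made placeholders:

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                 negative: np.ndarray, margin: float = 1.0) -> float:
    """Standard triplet margin loss: pull embeddings of the same skill
    together, push different skills apart by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy skill embeddings: anchor and positive come from videos of the same
# skill, negative from a different skill.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([-1.0, 0.0])

print(triplet_loss(anchor, positive, negative))  # prints 0.0: margin satisfied
```

In the full adversarial setup, a discriminator would additionally try to predict the task from the embedding, and the encoder would be trained to defeat it, which is what makes the embedding task-independent.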

Choosing Smartly: Adaptive Multimodal Fusion for Object Detection in Changing Environments

1 code implementation • 18 Jul 2017 • Oier Mees, Andreas Eitel, Wolfram Burgard

Object detection is an essential task for autonomous robots operating in dynamic and changing environments.

Object Detection
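The adaptive-fusion idea behind this paper, weighting each modality's detector according to how informative it currently is, can be illustrated with a toy softmax gate over per-modality confidences. The gate and the score vectors below are hand-rolled stand-ins, not the paper's learned mixture-of-experts:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_detections(class_scores: list[np.ndarray],
                    modality_confidence: np.ndarray) -> np.ndarray:
    """Weight each modality's per-class scores by a softmax gate over its
    (hypothetical) scene-dependent confidence, then sum the weighted scores."""
    weights = softmax(modality_confidence)
    return sum(w * s for w, s in zip(weights, class_scores))

# Toy example: in a dark scene the RGB detector is unreliable, so the gate
# shifts weight toward the depth detector.
rgb_scores   = np.array([0.4, 0.3, 0.3])      # per-class scores from RGB
depth_scores = np.array([0.1, 0.8, 0.1])      # per-class scores from depth
dark_scene_confidence = np.array([-1.0, 1.0])  # gate favors depth

fused = fuse_detections([rgb_scores, depth_scores], dark_scene_confidence)
print(fused.argmax())  # prints 1: the class the depth detector favored
```

The fused prediction follows the modality that the environment currently supports, which is the "choosing smartly" behavior the title refers to.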

Metric Learning for Generalizing Spatial Relations to New Objects

1 code implementation • 6 Mar 2017 • Oier Mees, Nichola Abdo, Mladen Mazuran, Wolfram Burgard

Human-centered environments are rich with a wide variety of spatial relations between everyday objects.

Metric Learning
