Search Results for author: Matthias Kerzel

Found 26 papers, 7 papers with code

Diffusing in Someone Else's Shoes: Robotic Perspective Taking with Diffusion

no code implementations • 11 Apr 2024 • Josua Spisak, Matthias Kerzel, Stefan Wermter

Being able to mentally translate a demonstration seen from a third-person perspective into how it would look from a first-person perspective is fundamental to perspective taking in humans.

CycleIK: Neuro-inspired Inverse Kinematics

1 code implementation • 21 Jul 2023 • Jan-Gerrit Habekost, Erik Strahl, Philipp Allgeuer, Matthias Kerzel, Stefan Wermter

The paper introduces CycleIK, a neuro-robotic approach that wraps two novel neuro-inspired methods for the inverse kinematics (IK) task: a Generative Adversarial Network (GAN) and a Multi-Layer Perceptron (MLP) architecture.

Generative Adversarial Network
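
The CycleIK entry above describes wrapping an MLP (alongside a GAN) for the IK task. As a loose, hypothetical illustration of training such an IK regressor without joint-space labels, the following PyTorch sketch uses a toy 2-link planar arm: predicted joint angles are pushed back through forward kinematics and compared to the target pose in Cartesian space. The arm, network sizes, and hyperparameters are assumptions for illustration, and the GAN branch is omitted; this is not the paper's implementation.

```python
# Hypothetical sketch of an FK-supervised MLP for inverse kinematics,
# loosely in the spirit of the CycleIK description above (GAN branch omitted).
# The 2-link planar arm and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

L1, L2 = 0.5, 0.3  # assumed link lengths of a toy 2-link planar arm


def forward_kinematics(q: torch.Tensor) -> torch.Tensor:
    """Map joint angles (N, 2) to end-effector positions (N, 2)."""
    x = L1 * torch.cos(q[:, 0]) + L2 * torch.cos(q[:, 0] + q[:, 1])
    y = L1 * torch.sin(q[:, 0]) + L2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)


class IKRegressor(nn.Module):
    """MLP that predicts joint angles for a target end-effector position."""

    def __init__(self, pose_dim=2, joint_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, joint_dim), nn.Tanh(),  # scaled to [-pi, pi]
        )

    def forward(self, pose):
        return self.net(pose) * torch.pi


model = IKRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Sample reachable targets by running FK on random joint configurations.
    q_rand = (torch.rand(256, 2) * 2 - 1) * torch.pi
    target = forward_kinematics(q_rand)
    # Cycle-style loss: predicted joints are pushed back through FK and
    # compared in Cartesian space, so no ground-truth joint labels are needed.
    q_pred = model(target)
    loss = nn.functional.mse_loss(forward_kinematics(q_pred), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```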

Clarifying the Half Full or Half Empty Question: Multimodal Container Classification

no code implementations • 17 Jul 2023 • Josua Spisak, Matthias Kerzel, Stefan Wermter

Multimodal integration is a key component of enabling robots to perceive the world.

Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks

no code implementations • 14 Oct 2022 • Nima Rahrakhshan, Matthias Kerzel, Philipp Allgeuer, Nicolas Duczek, Stefan Wermter

The act of reaching for an object is a fundamental yet complex skill for a robotic agent, requiring a high degree of visuomotor control and coordination.

Object

Intelligent problem-solving as integrated hierarchical reinforcement learning

no code implementations • 18 Aug 2022 • Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter

According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.

Hierarchical Reinforcement Learning, Reinforcement Learning +1

Learning Flexible Translation between Robot Actions and Language Descriptions

no code implementations • 15 Jul 2022 • Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

In this work, we propose the paired gated autoencoders (PGAE) for flexible translation between robot actions and language descriptions in a tabletop object manipulation scenario.

Language Modelling, Multi-Task Learning +1
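
To make the paired-autoencoder idea in the entry above more concrete, here is a minimal, hypothetical sketch of two modality-specific autoencoders whose latent codes are mixed by a learned gate, so that one shared representation can be decoded into either language or action features. The dimensions, gating scheme, and random stand-in data are assumptions for illustration only; the actual PGAE model (with its language-model encoder and training signals) is considerably richer.

```python
# Minimal sketch of a paired, gated autoencoder idea for action<->language
# translation, loosely inspired by the PGAE description above. All sizes and
# the toy data are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

LANG_DIM, ACT_DIM, LATENT = 32, 16, 64


class Coder(nn.Module):
    """One encoder/decoder pair for a single modality."""

    def __init__(self, dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, LATENT), nn.Tanh())
        self.dec = nn.Linear(LATENT, dim)


class PairedGatedAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.lang = Coder(LANG_DIM)
        self.act = Coder(ACT_DIM)
        # Gate network decides, per sample, how much each modality contributes
        # to the shared latent code (e.g. "describe" vs "execute" requests).
        self.gate = nn.Sequential(nn.Linear(2, LATENT), nn.Sigmoid())

    def forward(self, lang_x, act_x, signal):
        z_l, z_a = self.lang.enc(lang_x), self.act.enc(act_x)
        g = self.gate(signal)            # (N, LATENT) values in [0, 1]
        z = g * z_l + (1 - g) * z_a      # gated shared representation
        return self.lang.dec(z), self.act.dec(z)


model = PairedGatedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

lang_x = torch.randn(8, LANG_DIM)                   # stand-in sentence features
act_x = torch.randn(8, ACT_DIM)                     # stand-in joint trajectories
signal = torch.eye(2)[torch.randint(0, 2, (8,))]    # requested translation direction

lang_out, act_out = model(lang_x, act_x, signal)
loss = nn.functional.mse_loss(lang_out, lang_x) + \
       nn.functional.mse_loss(act_out, act_x)
loss.backward()
opt.step()
```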

Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

1 code implementation • 6 Jul 2022 • Kyra Ahrens, Matthias Kerzel, Jae Hee Lee, Cornelius Weber, Stefan Wermter

Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world.

Multi-Task Learning, Question Answering +1

What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning

1 code implementation • 5 May 2022 • Jae Hee Lee, Matthias Kerzel, Kyra Ahrens, Cornelius Weber, Stefan Wermter

Grounding relative directions is more difficult than grounding absolute directions because it requires a model not only to detect objects in the image and identify spatial relations between them, but also to recognize the orientation of those objects and to integrate this information into the reasoning process.

Multi-Task Learning, Question Answering +1

Language Model-Based Paired Variational Autoencoders for Robotic Language Learning

no code implementations • 17 Jan 2022 • Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

Human infants learn language while interacting with their environment in which their caregivers may describe the objects and actions they perform.

Language Modelling

A trained humanoid robot can perform human-like crossmodal social attention and conflict resolution

no code implementations • 2 Nov 2021 • Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, Stefan Wermter

Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study.

Saliency Prediction

Continual Learning from Synthetic Data for a Humanoid Exercise Robot

no code implementations • 19 Feb 2021 • Nicolas Duczek, Matthias Kerzel, Stefan Wermter

In a practical scenario, a physical exercise is performed by an expert like a physiotherapist and then used as a reference for a humanoid robot like Pepper to give feedback on a patient's execution of the same exercise.

Continual Learning

Hierarchical principles of embodied reinforcement learning: A review

no code implementations • 18 Dec 2020 • Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter

We then relate these insights to contemporary hierarchical reinforcement learning methods and identify the key machine intelligence approaches that realise these mechanisms.

Hierarchical Reinforcement Learning, Reinforcement Learning +1

Enhancing a Neurocognitive Shared Visuomotor Model for Object Identification, Localization, and Grasping With Learning From Auxiliary Tasks

1 code implementation • 26 Sep 2020 • Matthias Kerzel, Fares Abawi, Manfred Eppe, Stefan Wermter

In this follow-up study, we expand the task and the model to reaching for objects in a three-dimensional space with a novel dataset based on augmented reality and a simulation environment.

Crossmodal Language Grounding in an Embodied Neurocognitive Model

1 code implementation • 24 Jun 2020 • Stefan Heinrich, Yuan YAO, Tobias Hinz, Zhiyuan Liu, Thomas Hummel, Matthias Kerzel, Cornelius Weber, Stefan Wermter

From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration.

Explainable Goal-Driven Agents and Robots -- A Comprehensive Review

no code implementations • 21 Apr 2020 • Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter

The recent emphasis on the explainability of AI systems has yielded several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems applied in computational sciences.

Continual Learning, Explainable Artificial Intelligence +2

Improving Robot Dual-System Motor Learning with Intrinsically Motivated Meta-Control and Latent-Space Experience Imagination

1 code implementation • 19 Apr 2020 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

In this paper, we present a novel dual-system motor learning approach where a meta-controller arbitrates online between model-based and model-free decisions based on an estimate of the local reliability of the learned model.

Robotic Grasping
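
The entry above hinges on one mechanism: arbitrating online between model-based and model-free control according to how reliable the learned model is locally. The following NumPy sketch shows one hypothetical way to express such an arbitration rule, using a window of recent one-step prediction errors as the reliability estimate; the error measure, threshold, and policy stubs are illustrative assumptions rather than the paper's formulation.

```python
# Schematic arbitration between model-based and model-free policies based on
# the local reliability of a learned dynamics model. All quantities here are
# illustrative assumptions, not the paper's exact method.
import numpy as np
from collections import deque


class MetaController:
    """Toy arbiter between a model-based and a model-free policy."""

    def __init__(self, error_threshold=0.05, window=50):
        self.errors = deque(maxlen=window)  # recent one-step prediction errors
        self.error_threshold = error_threshold

    def record(self, predicted_next_state, true_next_state):
        """Track how well the learned dynamics model predicted the last step."""
        self.errors.append(float(np.linalg.norm(predicted_next_state - true_next_state)))

    def model_is_reliable(self) -> bool:
        # The model counts as locally reliable if its recent mean error is small.
        return bool(self.errors) and np.mean(self.errors) < self.error_threshold

    def act(self, state, model_based_policy, model_free_policy):
        # Arbitrate online: trust the learned model only where it has been accurate.
        if self.model_is_reliable():
            return model_based_policy(state)
        return model_free_policy(state)


# Toy usage with stand-in policies.
meta = MetaController()
meta.record(np.array([0.10, 0.00]), np.array([0.11, 0.01]))      # small error
action = meta.act(np.ones(2),
                  model_based_policy=lambda s: -0.1 * s,          # e.g. from planning
                  model_free_policy=lambda s: np.zeros_like(s))   # e.g. from an actor net
```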

Solving Visual Object Ambiguities when Pointing: An Unsupervised Learning Approach

1 code implementation • 13 Dec 2019 • Doreen Jirak, David Biertimpel, Matthias Kerzel, Stefan Wermter

The implementation of an intuitive gesture scenario is still challenging because both the pointing intention and the corresponding object have to be correctly recognized in real time.

Object, Object Detection +1

Curious Meta-Controller: Adaptive Alternation between Model-Based and Model-Free Control in Deep Reinforcement Learning

no code implementations • 5 May 2019 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from the representational limitations of making assumptions about the world dynamics, nor from the model errors that are inevitable in complex domains.

Continuous Control

Deep Intrinsically Motivated Continuous Actor-Critic for Efficient Robotic Visuomotor Skill Learning

no code implementations • 26 Oct 2018 • Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input.

Continuous Control
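
The entry above centres on intrinsic motivation for visuomotor learning from raw visual input. As a generic, hypothetical illustration (not the paper's exact formulation), the PyTorch sketch below computes a curiosity-style bonus from the prediction error of a learned forward model in a visual feature space and adds it to the extrinsic reward before an actor-critic update; all names, dimensions, and the weighting factor are assumptions.

```python
# Illustrative intrinsic-reward computation of the kind used by intrinsically
# motivated actor-critic agents: the forward model's prediction error on
# encoded observations serves as a curiosity bonus. Everything here is an
# assumption for illustration, not the paper's implementation.
import torch
import torch.nn as nn

FEAT, ACT = 64, 4
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, FEAT), nn.ReLU())
forward_model = nn.Sequential(nn.Linear(FEAT + ACT, 128), nn.ReLU(),
                              nn.Linear(128, FEAT))


def intrinsic_reward(obs, action, next_obs, beta=0.1):
    """Curiosity bonus: forward-model prediction error in feature space."""
    with torch.no_grad():
        phi, phi_next = encoder(obs), encoder(next_obs)
        phi_pred = forward_model(torch.cat([phi, action], dim=1))
        return beta * ((phi_pred - phi_next) ** 2).mean(dim=1)


# Toy usage: combine with the extrinsic reward before the critic update.
obs = torch.rand(8, 3, 32, 32)        # raw visual input (batch of images)
next_obs = torch.rand(8, 3, 32, 32)
action = torch.rand(8, ACT)
extrinsic = torch.zeros(8)
total_reward = extrinsic + intrinsic_reward(obs, action, next_obs)
```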
