Search Results for author: Michael Gienger

Found 11 papers, 0 papers with code

To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions

no code implementations19 Mar 2024 Daniel Tanneberg, Felix Ocker, Stephan Hasler, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Heiko Wersing, Bernhard Sendhoff, Michael Gienger

In addition to following user instructions, Attentive Support can decide when and how to support the humans, and when to remain silent so as not to disturb the group.

Common Sense Reasoning
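
The core mechanism described above is a gating decision: before acting, the system decides whether intervening would help the group or disturb it. A minimal, hypothetical sketch of such a gate is shown below; `call_llm` is a placeholder for whatever LLM backend is used, and the prompt and decision labels are illustrative, not the paper's actual prompting scheme.

```python
# Hypothetical sketch of an LLM-based "attentive support" gate.
# call_llm is a placeholder for an arbitrary LLM backend; the prompt and
# the three decision labels are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder LLM client; replace with a real backend."""
    return "SILENT"

DECISION_PROMPT = """You observe a group interaction between humans and a robot.
Dialogue so far:
{dialogue}

Decide what the robot should do next. Answer with exactly one word:
HELP   - the robot can usefully support the group now
ASK    - the robot should ask a clarifying question first
SILENT - the robot should stay silent and not disturb the group
"""

def attentive_support_step(dialogue: str) -> str:
    decision = call_llm(DECISION_PROMPT.format(dialogue=dialogue)).strip().upper()
    if decision not in {"HELP", "ASK", "SILENT"}:
        decision = "SILENT"  # fail safe: when unsure, do not disturb the group
    return decision

print(attentive_support_step("Human A: Could someone pass me the red cup?"))
```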

CoPAL: Corrective Planning of Robot Actions with Large Language Models

no code implementations11 Oct 2023 Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, Michael Gienger

In the pursuit of fully autonomous robotic systems capable of taking over tasks traditionally performed by humans, the complexity of open-world environments poses a considerable challenge.

Motion Planning · Task and Motion Planning
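
The corrective-planning idea behind CoPAL can be pictured as a replanning loop in which execution failures are fed back into the language model. The sketch below is only an illustration of that loop with placeholder hooks (`call_llm`, `execute_action`); it does not reproduce the paper's multi-level cognitive architecture.

```python
# Illustrative corrective-planning loop: an LLM proposes an action plan,
# the robot executes it, and any execution error is fed back for replanning.
# call_llm and execute_action are placeholders; the prompt is illustrative.

from typing import List, Optional

def call_llm(prompt: str) -> str:
    return "pick cup\nplace cup on table"   # placeholder plan

def execute_action(action: str) -> Optional[str]:
    """Try an action; return None on success or an error description."""
    return None  # placeholder: always succeeds

def plan(task: str, feedback: str = "") -> List[str]:
    prompt = f"Task: {task}\nPrevious failure: {feedback or 'none'}\nList one action per line."
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

def run(task: str, max_replans: int = 3) -> bool:
    feedback = ""
    for _ in range(max_replans):
        for action in plan(task, feedback):
            error = execute_action(action)
            if error is not None:
                feedback = f"'{action}' failed: {error}"
                break                       # replan with the error as feedback
        else:
            return True                     # all actions succeeded
    return False

print(run("clear the table"))
```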

Learning Type-Generalized Actions for Symbolic Planning

no code implementations9 Aug 2023 Daniel Tanneberg, Michael Gienger

Symbolic planning is a powerful technique to solve complex tasks that require long sequences of actions and can equip an intelligent agent with complex behavior.
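
To make the notion of type-generalized actions concrete, the toy planner below uses a single action schema whose applicability is determined by object types, so one definition covers every object of the relevant type. The formalism and domain are invented for illustration and are not the paper's.

```python
# Minimal illustration of symbolic planning with typed objects: a single
# "move" schema applies to every object of type Movable, so one action
# definition generalizes over many concrete objects. Toy formalism only.

from itertools import product

objects = {"cup": "Movable", "plate": "Movable", "table": "Location", "shelf": "Location"}

def move(obj, src, dst):
    """Ground action: returns (preconditions, add effects, delete effects)."""
    pre = {("at", obj, src)}
    add = {("at", obj, dst)}
    dele = {("at", obj, src)}
    return pre, add, dele

def applicable_actions(state):
    movables = [o for o, t in objects.items() if t == "Movable"]
    locations = [o for o, t in objects.items() if t == "Location"]
    for obj, src, dst in product(movables, locations, locations):
        if src != dst:
            pre, add, dele = move(obj, src, dst)
            if pre <= state:
                yield ("move", obj, src, dst), (state - dele) | add

def plan(state, goal, depth=4):
    """Depth-limited forward search for a sequence of ground actions."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for action, nxt in applicable_actions(state):
        rest = plan(nxt, goal, depth - 1)
        if rest is not None:
            return [action] + rest
    return None

init = {("at", "cup", "table"), ("at", "plate", "table")}
goal = {("at", "cup", "shelf")}
print(plan(init, goal))
```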

A Glimpse in ChatGPT Capabilities and its impact for AI research

no code implementations10 May 2023 Frank Joublin, Antonello Ceravola, Joerg Deigmoeller, Michael Gienger, Mathias Franzius, Julian Eggert

Large language models (LLMs) have recently become a popular topic in the field of Artificial Intelligence (AI) research, with companies such as Google, Amazon, Facebook, Tesla, and Apple (GAFA) investing heavily in their development.

Question Answering · Text Generation

Robotic Fabric Flattening with Wrinkle Direction Detection

no code implementations8 Mar 2023 Yulei Qiu, Jihong Zhu, Cosimo Della Santina, Michael Gienger, Jens Kober

Deformable Object Manipulation (DOM) is an important field of research as it contributes to practical tasks such as automatic cloth handling, cable routing, and surgical operations.

Deformable Object Manipulation
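
One simple way to obtain a wrinkle direction from a grayscale fabric image is to aggregate intensity gradients and take the orientation perpendicular to the dominant gradient. The sketch below illustrates that idea; it is not necessarily the detector used in the paper, and the input filename is a placeholder.

```python
# One simple way to estimate a dominant wrinkle direction from a grayscale
# fabric image: accumulate intensity gradients and take the orientation
# perpendicular to the strongest average gradient. Illustrative only; the
# paper's detector may differ.

import cv2
import numpy as np

def dominant_wrinkle_direction(gray: np.ndarray) -> float:
    """Return the dominant wrinkle orientation in degrees, in [0, 180)."""
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # Double-angle representation makes opposite gradients vote together.
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)
    mean_sin = np.sum(mag * np.sin(2 * angle))
    mean_cos = np.sum(mag * np.cos(2 * angle))
    gradient_dir = 0.5 * np.degrees(np.arctan2(mean_sin, mean_cos))
    # Wrinkles run roughly perpendicular to the dominant intensity gradient.
    return (gradient_dir + 90.0) % 180.0

if __name__ == "__main__":
    img = cv2.imread("fabric.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
    if img is not None:
        print(f"estimated wrinkle direction: {dominant_wrinkle_direction(img):.1f} deg")
```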

ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and Human-Robot Interaction

no code implementations13 Oct 2022 Christopher E. Mower, Theodoros Stouraitis, João Moura, Christian Rauch, Lei Yan, Nazanin Zamani Behabadi, Michael Gienger, Tom Vercauteren, Christos Bergeles, Sethu Vijayakumar

However, there is a lack of software connecting reliable contact simulation with the larger robotics ecosystem (i.e., ROS, Orocos) for a more seamless application of novel approaches found in the literature to existing robotic hardware.
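
The kind of glue such a framework provides can be illustrated by a minimal bridge that steps a PyBullet simulation and republishes joint states on a ROS 1 topic. This sketch is not the ROS-PyBullet Interface's actual API; the URDF, topic name, and rate are assumptions.

```python
# Minimal sketch of bridging PyBullet and ROS 1: step the simulation and
# republish the robot's joint states as sensor_msgs/JointState. This is an
# illustration of the idea, not the API of the ROS-PyBullet Interface;
# the URDF path, topic name, and rate are placeholders.

import pybullet as p
import pybullet_data
import rospy
from sensor_msgs.msg import JointState

def main():
    rospy.init_node("pybullet_bridge")
    pub = rospy.Publisher("/joint_states", JointState, queue_size=1)

    p.connect(p.DIRECT)                       # headless physics server
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)
    joints = list(range(p.getNumJoints(robot)))

    rate = rospy.Rate(240)                    # match PyBullet's default time step
    while not rospy.is_shutdown():
        p.stepSimulation()
        states = p.getJointStates(robot, joints)
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = [p.getJointInfo(robot, j)[1].decode() for j in joints]
        msg.position = [s[0] for s in states]
        msg.velocity = [s[1] for s in states]
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```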

Distilled Domain Randomization

no code implementations6 Dec 2021 Julien Brosseit, Benedikt Hahner, Fabio Muratore, Michael Gienger, Jan Peters

However, these methods are notorious for the enormous amount of required training data, which is prohibitively expensive to collect on real robots.

Reinforcement Learning (RL)
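
Both this paper and the review below build on plain domain randomization, in which the simulator's physics parameters are resampled for every training episode so the policy never overfits one fixed model. The sketch below shows only that basic ingredient, with invented parameter names, ranges, and training hooks; the distillation step itself is not shown.

```python
# Plain domain randomization for illustration: physics parameters are
# resampled at the start of every episode. Parameter names, ranges, and the
# simulator/training hooks are placeholders, not the paper's setup.

import numpy as np

RANGES = {
    "mass_kg":   (0.8, 1.2),    # link mass
    "friction":  (0.5, 1.5),    # contact friction coefficient
    "latency_s": (0.00, 0.03),  # actuation delay
}

def sample_domain(rng: np.random.Generator) -> dict:
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def train(policy_update, run_episode, episodes: int = 1000, seed: int = 0):
    """run_episode(params) -> trajectory; policy_update(trajectory) -> None."""
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        params = sample_domain(rng)   # a new simulator instance each episode
        policy_update(run_episode(params))

if __name__ == "__main__":
    print(sample_domain(np.random.default_rng(0)))
```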

Robot Learning from Randomized Simulations: A Review

no code implementations1 Nov 2021 Fabio Muratore, Fabio Ramos, Greg Turk, Wenhao Yu, Michael Gienger, Jan Peters

The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.

Set-based State Estimation with Probabilistic Consistency Guarantee under Epistemic Uncertainty

no code implementations18 Oct 2021 Shen Li, Theodoros Stouraitis, Michael Gienger, Sethu Vijayakumar, Julie A. Shah

Consistent state estimation is challenging, especially under the epistemic uncertainties arising from learned (nonlinear) dynamic and observation models.
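
The set-based flavor can be illustrated on a toy scalar system: instead of a point estimate, an interval of possible states is propagated under bounded model error and intersected with bounded-error measurements. This sketch only conveys the idea; it is not the paper's method or its probabilistic consistency guarantee.

```python
# Toy illustration of set-based estimation on a scalar system: an interval
# [lo, hi] of possible states is propagated under bounded model error and
# intersected with bounded-error measurements. Not the paper's method.

def predict(lo, hi, a=1.0, u=0.0, model_err=0.05):
    """x_{k+1} = a*x_k + u with |model error| <= model_err."""
    lo, hi = a * lo + u, a * hi + u
    if lo > hi:
        lo, hi = hi, lo
    return lo - model_err, hi + model_err

def update(lo, hi, y, meas_err=0.1):
    """Intersect the predicted set with the measurement set [y-e, y+e]."""
    new_lo, new_hi = max(lo, y - meas_err), min(hi, y + meas_err)
    if new_lo > new_hi:          # empty intersection: fall back to measurement set
        return y - meas_err, y + meas_err
    return new_lo, new_hi

lo, hi = -1.0, 1.0               # initial state set
for y in [0.2, 0.25, 0.22]:      # measurements
    lo, hi = predict(lo, hi)
    lo, hi = update(lo, hi, y)
    print(f"state in [{lo:.3f}, {hi:.3f}]")
```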

Data-efficient Domain Randomization with Bayesian Optimization

no code implementations5 Mar 2020 Fabio Muratore, Christian Eilers, Michael Gienger, Jan Peters

Domain randomization methods tackle this problem by randomizing the physics simulator (source domain) during training according to a distribution over domain parameters in order to obtain more robust policies that are able to overcome the reality gap.

Bayesian Optimization
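
The core loop can be sketched as black-box optimization: each candidate value of the domain-parameter distribution is used to train a policy in simulation, the resulting real-world return is recorded, and a Gaussian process over those returns proposes the next candidate. In the sketch below, `evaluate_on_real_robot`, the 1-D search space, and the UCB acquisition are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch of Bayesian optimization over a simulator's domain parameter:
# candidates parameterize the randomization distribution, and the real-world
# return of the resulting policy is modeled with a Gaussian process that
# proposes the next candidate via an upper-confidence-bound rule.
# evaluate_on_real_robot and the 1-D bounds are placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_on_real_robot(friction_mean: float) -> float:
    """Placeholder: train in sim with this randomization mean, return real reward."""
    return -(friction_mean - 0.9) ** 2 + 0.01 * np.random.randn()

bounds = (0.2, 2.0)                                  # search space for the parameter
candidates = np.linspace(*bounds, 200).reshape(-1, 1)
X = list(np.random.uniform(*bounds, size=3))         # a few initial evaluations
y = [evaluate_on_real_robot(x) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = float(candidates[np.argmax(mu + 2.0 * sigma)])   # UCB acquisition
    X.append(x_next)
    y.append(evaluate_on_real_robot(x_next))

print("best domain parameter found:", X[int(np.argmax(y))])
```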

Assessing Transferability from Simulation to Reality for Reinforcement Learning

no code implementations10 Jul 2019 Fabio Muratore, Michael Gienger, Jan Peters

Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the "Simulation Optimization Bias" (SOB).

Reinforcement Learning (RL)
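
The optimism that the SOB captures can be illustrated numerically: tuning even a trivial one-parameter policy on a finite sample of simulator instances makes its estimated return on those instances systematically higher than its return on fresh instances, and the gap shrinks as the sample grows. The toy objective below is invented for illustration and is not the paper's estimator.

```python
# Toy numerical illustration of simulation optimization bias: tune a scalar
# "policy" parameter on n sampled simulator instances, then evaluate it on
# fresh instances. The return estimated on the training instances is
# optimistically biased; the gap shrinks as n grows. Invented toy objective.

import numpy as np

rng = np.random.default_rng(0)
thetas = np.linspace(-2.0, 2.0, 201)                 # candidate policies

def returns(theta, domain_params):
    """Per-domain return of policy theta; the peak location varies per domain."""
    return -(theta - domain_params[:, None]) ** 2    # shape: (n_domains, n_thetas)

for n in [2, 10, 100]:
    gaps = []
    for _ in range(500):
        train_domains = rng.normal(0.0, 1.0, size=n)
        test_domains = rng.normal(0.0, 1.0, size=1000)
        train_ret = returns(thetas, train_domains).mean(axis=0)
        best = np.argmax(train_ret)                  # policy optimized on the samples
        test_ret = returns(thetas, test_domains).mean(axis=0)[best]
        gaps.append(train_ret[best] - test_ret)      # optimistic gap
    print(f"n={n:4d}  mean optimistic gap: {np.mean(gaps):.3f}")
```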
