no code implementations • 8 Oct 2024 • Rajmund Nagy, Hendric Voss, Youngwoo Yoon, Taras Kucherenko, Teodor Nikolov, Thanh Hoang-Minh, Rachel McDonnell, Stefan Kopp, Michael Neff, Gustav Eje Henter
Current evaluation practices in speech-driven gesture generation lack standardisation and prioritise aspects that are easy to measure over those that actually matter.
no code implementations • 6 Aug 2024 • Amelie Robrecht, Judith Sieker, Clara Lachenmaier, Sina Zarrieß, Stefan Kopp
In this work, we give an overview of which pragmatic abilities have been tested in LLMs so far and how these tests have been carried out.
no code implementations • 18 Jun 2024 • Amelie Sophie Robrecht, Hendric Voss, Lisa Gottschalk, Stefan Kopp
This study investigates the effect of gestures in explanations by developing an embodied virtual explainer that integrates both beat gestures and iconic gestures to enhance its automatically generated verbal explanations.
1 code implementation • 2 May 2023 • Hendric Voß, Stefan Kopp
By learning a mapping into a latent space representation, rather than mapping directly to a pose vector representation, this framework generates highly realistic and expressive gestures that closely replicate human movement and behavior while avoiding artifacts in the generation process (a minimal sketch of this idea follows below).
Ranked #1 on Gesture Generation on TED Gesture Dataset
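To make the latent-space idea concrete, here is a minimal PyTorch sketch of mapping input features into a quantised latent codebook and decoding poses from it, rather than regressing pose vectors directly. The layer choices, dimensions, and codebook size are illustrative assumptions, not the paper's actual AQ-GT architecture.

```python
import torch
import torch.nn as nn

class QuantizedGestureModel(nn.Module):
    """Toy model: encode input features into a discrete latent space,
    then decode poses from the quantised latents, instead of regressing
    pose vectors directly from the input."""

    def __init__(self, feat_dim=128, latent_dim=64, codebook_size=512, pose_dim=57):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, latent_dim, batch_first=True)
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.to_pose = nn.Linear(latent_dim, pose_dim)

    def forward(self, feats):
        z, _ = self.encoder(feats)                              # (B, T, latent)
        # Nearest-codebook-entry quantisation.
        dists = ((z.unsqueeze(2) - self.codebook.weight) ** 2).sum(-1)
        zq = self.codebook(dists.argmin(-1))                    # (B, T, latent)
        # Straight-through estimator: gradients bypass the argmin.
        zq = z + (zq - z).detach()
        h, _ = self.decoder(zq)
        return self.to_pose(h)                                  # (B, T, pose)

model = QuantizedGestureModel()
poses = model(torch.randn(2, 100, 128))  # 2 clips, 100 frames of features
print(poses.shape)                       # torch.Size([2, 100, 57])
```

The straight-through trick is what makes the non-differentiable nearest-neighbour lookup trainable end to end; it is the standard device for learning quantised latent spaces of this kind.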
no code implementations • 8 Feb 2022 • Hendric Voß, Heiko Wersing, Stefan Kopp
Detecting mental states of human users is crucial for the development of cooperative and intelligent robots, as it enables the robot to understand the user's intentions and desires.
no code implementations • 10 Dec 2021 • Sebastian Kahl, Sebastian Wiese, Nele Russwinkel, Stefan Kopp
In particular, we focus on how an agent can be equipped with a sense of control, how this sense arises in autonomous situated action, and how it in turn influences action control.
no code implementations • 2 Dec 2021 • Jan Pöppel, Sebastian Kahl, Stefan Kopp
The results indicate that belief resonance and active inference allow for quick and efficient agent coordination, and thus can serve as a building block for collaborative cognitive agents.
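As a toy illustration of belief resonance (not the paper's model), the following sketch blends an agent's own belief with the belief it attributes to its partner; the distributions and the mixing weight are invented for illustration.

```python
import numpy as np

# Toy illustration of belief resonance: an agent's own belief over a shared
# state is pulled toward the belief it attributes to its partner. The
# distributions and mixing weight below are invented for illustration.

own_belief     = np.array([0.7, 0.2, 0.1])  # P(state) from own observations
partner_belief = np.array([0.1, 0.8, 0.1])  # belief attributed to the partner
resonance = 0.4                              # strength of the partner's pull

blended = (1 - resonance) * own_belief + resonance * partner_belief
blended /= blended.sum()                     # renormalise
print(blended.round(3))                      # own belief shifted toward partner
```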
no code implementations • 23 Sep 2019 • Jan Pöppel, Stefan Kopp
The ability to interpret the mental state of another agent based on its behavior, also called Theory of Mind (ToM), is crucial for humans in any kind of social interaction.
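A common way to formalise this kind of ToM inference is Bayesian inverse planning; the sketch below is a hedged illustration under assumed softmax-rational behaviour on a toy grid, not the model from the paper.

```python
import numpy as np

# Hedged sketch of ToM as Bayesian inverse planning: infer a distribution
# over the other agent's goal from its observed moves, assuming it acts
# noisily-rationally (softmax over action values). The grid, goals, and
# rationality parameter are illustrative assumptions.

goals   = {"A": np.array([0, 4]), "B": np.array([4, 0])}
actions = {"up": np.array([0, 1]), "right": np.array([1, 0])}
beta    = 2.0  # rationality: higher means more deterministic behaviour

def action_probs(pos, goal):
    # An action's value is the (negative) distance to the goal it leaves.
    vals = np.array([-np.linalg.norm(pos + a - goal) for a in actions.values()])
    expv = np.exp(beta * vals)
    return expv / expv.sum()

belief = {"A": 0.5, "B": 0.5}  # uniform prior over the other's goal
pos = np.array([0, 0])
for move in ["up", "up", "right"]:
    i = list(actions).index(move)
    belief = {g: belief[g] * action_probs(pos, goals[g])[i] for g in goals}
    norm = sum(belief.values())
    belief = {g: p / norm for g, p in belief.items()}
    pos = pos + actions[move]
    print(move, {g: round(p, 3) for g, p in belief.items()})
```

Each observed move reweights the goal hypotheses by how likely a noisily-rational agent pursuing that goal would have been to choose it.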
no code implementations • 23 Oct 2018 • Sebastian Kahl, Stefan Kopp
During interaction with others, we perceive and produce social actions in close temporal proximity or even simultaneously.
no code implementations • 26 Sep 2017 • Felix Hülsmann, Stefan Kopp, Mario Botsch
The selected features are used as input to Support Vector Machines, which then classify the movement errors.
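A minimal sketch of such a feature-selection-plus-SVM pipeline with scikit-learn is shown below; the feature counts, kernel choice, and synthetic data are assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# Hedged sketch of the described pipeline: select discriminative motion
# features, then classify movement errors with an SVM. The feature counts,
# kernel, and synthetic data are assumptions, not the paper's setup.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))    # 200 motion segments, 40 candidate features
y = rng.integers(0, 3, size=200)  # 3 hypothetical movement-error classes

clf = make_pipeline(
    StandardScaler(),                 # normalise feature scales
    SelectKBest(f_classif, k=10),     # keep the 10 most informative features
    SVC(kernel="rbf", C=1.0),         # the actual error classifier
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```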
no code implementations • WS 2017 • Ramin Yaghoubzadeh, Stefan Kopp
We present the flexdiam dialogue management architecture, developed in a series of projects dedicated to tailoring spoken interaction, via a multimodal front-end, to the needs of users with cognitive impairments in an everyday assistive domain.
no code implementations • LREC 2014 • Hendrik Buschmeier, Zofia Malisz, Joanna Skubisz, Marcin Wlodarczak, Ipke Wachsmuth, Stefan Kopp, Petra Wagner
The Active Listening Corpus (ALICO) is a multimodal database of spontaneous dyadic conversations with diverse speech and gestural annotations of both dialogue partners.