no code implementations • 26 Sep 2017 • Felix Hülsmann, Stefan Kopp, Mario Botsch
The selected features are used as input for Support Vector Machines, which then classify the movement errors.
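The classification step can be illustrated with a minimal sketch. Note that the feature values, class labels, and kernel choice below are hypothetical stand-ins, not the paper's actual data or configuration:

```python
# Minimal sketch: selected movement features fed to an SVM that
# classifies error types. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic classes in a 4-dim feature space
# (stand-in for movement features such as joint angles or velocities)
X_correct = rng.normal(loc=0.0, scale=0.5, size=(50, 4))
X_error = rng.normal(loc=2.0, scale=0.5, size=(50, 4))
X = np.vstack([X_correct, X_error])
y = np.array([0] * 50 + [1] * 50)  # 0 = correct movement, 1 = movement error

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.1, 0.0, -0.2, 0.1]])[0])  # a point near the "correct" cluster
```

In practice the features would come from a prior selection stage and the classes from annotated movement-error categories.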
no code implementations • 23 Oct 2018 • Sebastian Kahl, Stefan Kopp
During interaction with others, we perceive and produce social actions in close temporal distance or even simultaneously.
no code implementations • WS 2017 • Ramin Yaghoubzadeh, Stefan Kopp
We present the flexdiam dialogue management architecture, which was developed in a series of projects dedicated to tailoring spoken interaction to the needs of users with cognitive impairments in an everyday assistive domain, using a multimodal front-end.
no code implementations • LREC 2014 • Hendrik Buschmeier, Zofia Malisz, Joanna Skubisz, Marcin Wlodarczak, Ipke Wachsmuth, Stefan Kopp, Petra Wagner
The Active Listening Corpus (ALICO) is a multimodal database of spontaneous dyadic conversations with diverse speech and gestural annotations of both dialogue partners.
no code implementations • 23 Sep 2019 • Jan Pöppel, Stefan Kopp
The ability to interpret the mental state of another agent based on its behavior, also called Theory of Mind (ToM), is crucial for humans in any kind of social interaction.
no code implementations • 2 Dec 2021 • Jan Pöppel, Sebastian Kahl, Stefan Kopp
The results indicate that belief resonance and active inference allow for quick and efficient agent coordination, and thus can serve as a building block for collaborative cognitive agents.
no code implementations • 10 Dec 2021 • Sebastian Kahl, Sebastian Wiese, Nele Russwinkel, Stefan Kopp
In particular we focus on how an agent can be equipped with a sense of control and how it arises in autonomous situated action and, in turn, influences action control.
no code implementations • 8 Feb 2022 • Hendric Voß, Heiko Wersing, Stefan Kopp
Detecting mental states of human users is crucial for the development of cooperative and intelligent robots, as it enables the robot to understand the user's intentions and desires.
1 code implementation • 2 May 2023 • Hendric Voß, Stefan Kopp
By learning a mapping into a latent-space representation, rather than mapping directly to a vector representation, this framework generates highly realistic and expressive gestures that closely replicate human movement and behavior, while avoiding artifacts in the generation process.
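The core idea of predicting in a learned latent space rather than in raw pose space can be sketched as follows. This is a deliberately simplified stand-in: it uses a linear PCA basis over synthetic pose data in place of the paper's learned latent representation, purely to show the encode/decode structure:

```python
# Sketch: generate in a low-dim latent space, then decode to pose vectors.
# PCA here is a linear stand-in for a learned latent representation;
# the pose data is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "pose" data: 200 frames of a 10-dim pose vector
poses = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

# Fit a 3-dim latent basis via SVD on centered data
mean = poses.mean(axis=0)
U, S, Vt = np.linalg.svd(poses - mean, full_matrices=False)
W = Vt[:3]  # rows span the latent space

def encode(p):
    """Project a pose into the latent space."""
    return (p - mean) @ W.T

def decode(z):
    """Map a latent code back to a full pose vector."""
    return z @ W + mean

# A generator would predict latent codes z; decoding yields pose vectors
z = encode(poses[0])
recon = decode(z)
print(z.shape, recon.shape)
```

The point of the indirection is that a generator predicting compact latent codes stays on the manifold of plausible poses when decoded, instead of producing per-dimension artifacts in raw pose space.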
Ranked #1 on Gesture Generation on TED Gesture Dataset