no code implementations • SLTAT (LREC) 2022 • Katerina Papadimitriou, Gerasimos Potamianos, Galini Sapountzaki, Theodore Goulas, Eleni Efthimiou, Stavroula-Evita Fotinea, Petros Maragos
There has been increasing interest lately in developing education tools for sign language (SL) learning that enable self-assessment and objective evaluation of learners’ SL productions, assisting both students and their instructors.
no code implementations • ICASSP 2022 • Alexandros Koumparoulis, Gerasimos Potamianos
We present a novel resource-efficient end-to-end architecture for lipreading that achieves state-of-the-art results on a popular and challenging benchmark.
Ranked #2 on the Lipreading task on the Lip Reading in the Wild benchmark
1 code implementation • 30 Oct 2020 • Panagiotis Paraskevas Filntisis, Niki Efthymiou, Gerasimos Potamianos, Petros Maragos
We present our winning submission to the First International Workshop on Bodily Expressed Emotion Understanding (BEEU) challenge.
no code implementations • 28 Aug 2020 • Niki Efthymiou, Panagiotis P. Filntisis, Petros Koutras, Antigoni Tsiami, Jack Hadfield, Gerasimos Potamianos, Petros Maragos
In this paper, we present an integrated robotic system capable of participating in and performing a wide range of educational and entertainment tasks, in collaboration with one or more children.
no code implementations • 18 Apr 2020 • Spyridon Thermos, Petros Daras, Gerasimos Potamianos
In particular, we design an autoencoder that is trained using ground-truth labels of only the last frame of the sequence, and is able to infer pixel-wise affordance labels in both videos and static images.
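The last-frame supervision idea can be illustrated with a toy sketch. Here a plain per-pixel logistic classifier stands in for the paper's autoencoder: it is trained on ground-truth labels of only the final frame of a sequence, then applied pixel-wise to every frame. The video, feature dimensions, and labeling rule below are all synthetic assumptions for illustration, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "video": T frames of H x W pixels, each pixel carrying a D-dim
# feature vector (a stand-in for learned appearance features). The
# synthetic ground truth marks a pixel as an affordance pixel whenever
# its first feature is positive -- an assumed rule for this sketch.
T, H, W, D = 5, 8, 8, 3
video = rng.standard_normal((T, H, W, D))
labels_last = (video[-1, ..., 0] > 0).astype(float)  # labels: last frame only

# Train a per-pixel logistic classifier on the last frame alone.
X = video[-1].reshape(-1, D)
y = labels_last.reshape(-1)
w = np.zeros(D)
b = 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

# Inference: pixel-wise predictions for every frame, not just the last.
probs = 1.0 / (1.0 + np.exp(-(video.reshape(-1, D) @ w + b)))
pred = (probs > 0.5).reshape(T, H, W)
acc = np.mean(pred == (video[..., 0] > 0))
print(f"pixel-wise accuracy over all frames: {acc:.2f}")
```

Because the synthetic labeling rule depends only on per-pixel features, supervision at a single frame suffices to label all frames, which mirrors the motivation for training with last-frame labels only.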
1 code implementation • 7 Jan 2019 • Panagiotis P. Filntisis, Niki Efthymiou, Petros Koutras, Gerasimos Potamianos, Petros Maragos
In this paper, we address the problem of multi-cue affect recognition in challenging scenarios such as child-robot interaction.
no code implementations • CVPR 2017 • Spyridon Thermos, Georgios Th. Papadopoulos, Petros Daras, Gerasimos Potamianos
It is well-established by cognitive neuroscience that human perception of objects constitutes a complex process, where object appearance information is combined with evidence about the so-called object "affordances", namely the types of actions that humans typically perform when interacting with them.
no code implementations • LREC 2012 • Panagiotis Giannoulis, Gerasimos Potamianos
We examine speaker-independent emotion classification from speech, reporting experiments on the Berlin database across six basic emotions.
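A minimal sketch of what "speaker independent" means in evaluation, assuming synthetic utterances, two emotion classes, and simple energy/zero-crossing features in place of the paper's actual data and features: the classifier is always tested on a speaker it never saw during training (leave-one-speaker-out).

```python
import numpy as np

def extract_features(signal, frame_len=256):
    # Frame-level log-energy and zero-crossing rate, averaged over the
    # utterance -- a crude stand-in for richer spectral features.
    frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
    log_energy = np.log(np.mean(frames ** 2, axis=1) + 1e-8)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([log_energy.mean(), zcr.mean()])

def synth_utterance(rng, speaker_pitch, emotion):
    # Hypothetical toy data: "angry" utterances are louder and higher-
    # pitched than "neutral" ones; speaker_pitch models speaker variation.
    f0 = speaker_pitch * (1.5 if emotion == "angry" else 1.0)
    amp = 1.0 if emotion == "angry" else 0.3
    t = np.arange(8000) / 16000.0  # 0.5 s at 16 kHz
    return amp * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)

rng = np.random.default_rng(0)
speakers = [100.0, 120.0, 140.0, 160.0]  # assumed per-speaker pitches (Hz)
emotions = ["neutral", "angry"]

correct = total = 0
for test_spk in speakers:  # leave-one-speaker-out protocol
    train_X, train_y = [], []
    for spk in speakers:
        if spk == test_spk:
            continue
        for emo in emotions:
            train_X.append(extract_features(synth_utterance(rng, spk, emo)))
            train_y.append(emo)
    train_X = np.array(train_X)
    # Nearest-centroid classifier in the 2-D feature space
    centroids = {e: train_X[[lbl == e for lbl in train_y]].mean(axis=0)
                 for e in emotions}
    for emo in emotions:
        x = extract_features(synth_utterance(rng, test_spk, emo))
        pred = min(centroids, key=lambda e: np.linalg.norm(x - centroids[e]))
        correct += pred == emo
        total += 1

print(f"LOSO accuracy: {correct}/{total}")
```

Leaving each speaker out in turn ensures the reported accuracy reflects generalization to unseen speakers rather than speaker-specific cues.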