Search Results for author: Ioannis Papaioannou

Found 7 papers, 2 papers with code

AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video Understanding

1 code implementation · 19 Jun 2024 · Alessandro Suglia, Claudio Greco, Katie Baker, Jose L. Part, Ioannis Papaioannou, Arash Eshghi, Ioannis Konstas, Oliver Lemon

First, we introduce the Egocentric Video Understanding Dataset (EVUD) for training VLMs on video captioning and question answering tasks specific to egocentric videos.

Tasks: Question Answering · Video Captioning · +2

Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction

no code implementations · WS 2017 · Jekaterina Novikova, Christian Dondrup, Ioannis Papaioannou, Oliver Lemon

We find that happiness in the user's recognised facial expression correlates strongly with the likeability of a robot, while dialogue-related features (such as the number of human turns or the number of sentences per robot utterance) correlate with perceiving a robot as intelligent.
