no code implementations • 17 Dec 2022 • Kas Kniesmeijer, Murat Kirtay
This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction.
no code implementations • 14 Sep 2020 • Murat Kirtay, Guido Schillaci, Verena V. Hafner
This study presents a multisensory machine learning architecture for object recognition, employing a novel dataset constructed with the iCub robot, which is equipped with three cameras and a depth sensor.
no code implementations • 4 Mar 2020 • Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia Laschi, Egidio Falotico
This document presents novel datasets constructed by employing the iCub robot equipped with an additional depth sensor and color camera.