Search Results for author: Wiktor Mucha

Found 4 papers, 0 papers with code

In My Perspective, In My Hands: Accurate Egocentric 2D Hand Pose and Action Recognition

no code implementations · 14 Apr 2024 · Wiktor Mucha, Martin Kampel

Our study aims to fill this research gap by exploring the field of 2D hand pose estimation for egocentric action recognition, making two contributions.

Action Recognition · Action Understanding · +3

TEXT2TASTE: A Versatile Egocentric Vision System for Intelligent Reading Assistance Using Large Language Model

no code implementations · 14 Apr 2024 · Wiktor Mucha, Florin Cuconasu, Naome A. Etori, Valia Kalokyri, Giovanni Trappolini

The LLM processes the data, allows the user to interact with the text, and responds to a given query, extending the functionality of corrective lenses with the ability to find and summarize knowledge from the text.

Language Modelling · Large Language Model · +4

Human Action Recognition in Egocentric Perspective Using 2D Object and Hands Pose

no code implementations · 8 Jun 2023 · Wiktor Mucha, Martin Kampel

Egocentric action recognition is essential for healthcare and assistive technology that relies on egocentric cameras because it allows for the automatic and continuous monitoring of activities of daily living (ADLs) without requiring any conscious effort from the user.

Action Classification · Action Recognition · +1

State of the Art of Audio- and Video-Based Solutions for AAL

no code implementations · 26 Jun 2022 · Slavisa Aleksic, Michael Atanasov, Jean Calleja Agius, Kenneth Camilleri, Anto Cartolovni, Pau Climent-Pérez, Sara Colantonio, Stefania Cristina, Vladimir Despotovic, Hazim Kemal Ekenel, Ekrem Erakin, Francisco Florez-Revuelta, Danila Germanese, Nicole Grech, Steinunn Gróa Sigurðardóttir, Murat Emirzeoglu, Ivo Iliev, Mladjan Jovanovic, Martin Kampel, William Kearns, Andrzej Klimczuk, Lambros Lambrinos, Jennifer Lumetzberger, Wiktor Mucha, Sophie Noiret, Zada Pajalic, Rodrigo Rodriguez Pérez, Galidiya Petrova, Sintija Petrovica, Peter Pocta, Angelica Poli, Mara Pudane, Susanna Spinsante, Albert Ali Salah, Maria Jose Santofimia, Anna Sigridur Islind, Lacramioara Stoicu-Tivadar, Hilda Tellioglu, Andrej Zgank

The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation.

Gesture Recognition
