1 code implementation • 1 Feb 2024 • Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, Thomas Plötz
With the emergence of generative AI models such as large language models (LLMs) and text-driven motion synthesis models, language has become a promising source data modality as well, as shown in proofs of concept such as IMUGPT.
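A minimal sketch of the idea (an assumed pipeline, not the actual IMUGPT code): a text-driven motion synthesis model produces joint trajectories, from which a virtual accelerometer signal can be derived by twice differentiating position over time. The `synthesize_motion` call in the usage comment is a hypothetical stand-in for any text-to-motion model.

```python
import numpy as np

def virtual_accelerometer(positions: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """positions: (T, 3) world coordinates of one joint, in meters.
    Returns (T-2, 3) linear acceleration in m/s^2 (gravity not modeled)."""
    dt = 1.0 / fps
    velocity = np.diff(positions, axis=0) / dt      # (T-1, 3) finite-difference velocity
    return np.diff(velocity, axis=0) / dt           # (T-2, 3) finite-difference acceleration

# Hypothetical usage:
# positions = synthesize_motion("a person waves their right hand")["right_wrist"]
# acc = virtual_accelerometer(positions, fps=30.0)
```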
no code implementations • 16 Nov 2023 • Srivatsa P, Thomas Plötz
To overcome this limitation, we propose a novel graph-guided neural network approach that performs activity recognition by learning explicit co-firing relationships between sensors.
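As an illustration of what such a sensor graph might look like (an assumed architecture, not the paper's exact model), the sketch below treats each sensor as a node and learns an adjacency matrix whose entries act as co-firing weights that guide how sensor features are mixed.

```python
import torch
import torch.nn as nn

class CoFiringGraphLayer(nn.Module):
    """Toy graph layer: a learnable adjacency over sensors guides feature mixing."""
    def __init__(self, num_sensors: int, dim: int):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(num_sensors, num_sensors))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_sensors, dim)
        adj = torch.softmax(self.adj_logits, dim=-1)   # learned co-firing strengths
        x = torch.einsum("ij,bjd->bid", adj, x)        # propagate features along the graph
        return torch.relu(self.proj(x))
```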
no code implementations • 18 Oct 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz
In human activity recognition (HAR), the limited availability of annotated data presents a significant challenge.
1 code implementation • 4 May 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz
We benchmark our approach on three HAR datasets (RealWorld, PAMAP2, and USC-HAD) and demonstrate that virtual IMU training data generated with our new approach leads to significantly better HAR model performance than using real IMU data alone.
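A toy illustration (not from the paper) of how virtual IMU windows could be pooled with real ones before training a HAR classifier; the file names are hypothetical placeholders.

```python
import numpy as np

# Hypothetical pre-extracted sliding windows and labels.
X_real, y_real = np.load("real_imu_X.npy"), np.load("real_imu_y.npy")
X_virt, y_virt = np.load("virtual_imu_X.npy"), np.load("virtual_imu_y.npy")

# Train on the union of real and virtual data; evaluate on held-out real data only.
X_train = np.concatenate([X_real, X_virt], axis=0)
y_train = np.concatenate([y_real, y_virt], axis=0)
```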
no code implementations • 2 Nov 2022 • Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz
In this work we first introduce a measure to quantitatively assess the subtlety of the human movements underlying activities of interest--the motion subtlety index (MSI)--which captures local pixel movements and pose changes in the vicinity of target virtual sensor locations, and correlate it with the eventual activity recognition accuracy.
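A rough sketch of the underlying idea (the exact MSI definition is not reproduced here): measure the average optical-flow magnitude inside a window centred on the target virtual sensor location, with small values indicating subtle motion. The window size and flow parameters below are illustrative assumptions.

```python
import cv2
import numpy as np

def local_flow_magnitude(prev_gray: np.ndarray, next_gray: np.ndarray,
                         center: tuple, half_win: int = 20) -> float:
    """Mean optical-flow magnitude in a square patch around `center` (x, y)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y = center
    patch = flow[max(y - half_win, 0): y + half_win,
                 max(x - half_win, 0): x + half_win]
    return float(np.linalg.norm(patch, axis=-1).mean())
```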
no code implementations • 22 Feb 2022 • Harish Haresamudram, Irfan Essa, Thomas Plötz
As such, self-supervision, i.e., the 'pretrain-then-finetune' paradigm, has the potential to become a strong alternative to the predominant end-to-end training approaches, let alone hand-crafted features for the classic activity recognition chain.
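A generic 'pretrain-then-finetune' skeleton (not tied to any particular pretext task from this work): an encoder pretrained on unlabeled sensor windows is reused, and a small classification head is then trained, or fine-tuned jointly with the encoder, on the labeled activity data.

```python
import torch.nn as nn

class HARModel(nn.Module):
    """Wraps a pretrained encoder with a linear classification head."""
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder               # pretrained with a self-supervised objective
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

# Linear-evaluation variant: freeze the encoder and train only the head.
# for p in model.encoder.parameters():
#     p.requires_grad = False
```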
no code implementations • 25 Dec 2013 • Sourav Bhattacharya, Petteri Nurmi, Nils Hammerla, Thomas Plötz
We propose a sparse-coding framework for activity recognition in ubiquitous and mobile computing that alleviates two fundamental problems of current supervised learning approaches.
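A conceptual sketch of such a pipeline (parameters and data are illustrative, not the paper's setup): learn a dictionary from unlabeled sensor frames, then use the sparse codes of labeled frames as features for an off-the-shelf classifier.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
unlabeled = rng.standard_normal((1000, 64))   # stand-in for unlabeled accelerometer frames
labeled = rng.standard_normal((200, 64))      # stand-in for labeled frames
labels = rng.integers(0, 5, 200)

dico = DictionaryLearning(n_components=100, alpha=1.0, max_iter=20)
dico.fit(unlabeled)                 # unsupervised dictionary learning on unlabeled data
codes = dico.transform(labeled)     # sparse activations serve as features
clf = LogisticRegression(max_iter=1000).fit(codes, labels)
```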