1 code implementation • 4 May 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz
We benchmarked our approach on three HAR datasets (RealWorld, PAMAP2, and USC-HAD) and demonstrated that using virtual IMU training data generated with our new approach leads to significantly improved HAR model performance compared to using real IMU data alone.
1 code implementation • 1 Feb 2024 • Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, Thomas Plötz
With the emergence of generative AI models such as large language models (LLMs) and text-driven motion synthesis models, language has also become a promising source data modality, as demonstrated in proofs of concept such as IMUGPT.
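The excerpt above only names the idea: text descriptions are turned into human motion, and virtual IMU signals are then derived from that motion. As a hedged sketch of the final stage only (the text-to-motion step would come from a separate generative model), the snippet below derives virtual accelerometer readings from a joint-position trajectory by differentiating position twice; the function name, sampling rate, and world-frame gravity handling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def synthesize_virtual_imu(joint_positions, fs=50.0):
    """Derive virtual accelerometer readings from a joint trajectory.

    joint_positions: (T, 3) array of 3D positions (metres) for one body
    joint, e.g. the wrist, sampled at fs Hz. Acceleration is the second
    time derivative of position; gravity is added along the world z axis
    (a full pipeline would also rotate into the sensor's local frame).
    Returns an (T, 3) array of accelerations in m/s^2.
    """
    dt = 1.0 / fs
    velocity = np.gradient(joint_positions, dt, axis=0)   # first derivative
    accel = np.gradient(velocity, dt, axis=0)             # second derivative
    accel[:, 2] += 9.81                                   # add gravity (world frame)
    return accel

# Toy motion in place of a text-to-motion model's output:
# a wrist oscillating sinusoidally along x at 1 Hz with 0.1 m amplitude.
t = np.arange(0.0, 2.0, 1.0 / 50.0)
pos = np.stack(
    [0.1 * np.sin(2 * np.pi * t), np.zeros_like(t), np.full_like(t, 1.0)],
    axis=1,
)
virt_acc = synthesize_virtual_imu(pos)
```

For this toy trajectory the x-axis acceleration amplitude is close to the analytic value 0.1 * (2*pi)^2 ≈ 3.95 m/s^2, while the z channel sits near 9.81 m/s^2, which is the kind of signal an HAR model trained on real accelerometer data expects.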
no code implementations • 2 Nov 2022 • Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz
In this work, we first introduce the motion subtlety index (MSI), a measure that quantitatively assesses the subtlety of the human movements underlying activities of interest. The MSI captures local pixel movements and pose changes in the vicinity of target virtual sensor locations, and we correlate it with the eventual activity recognition accuracy.
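The excerpt does not give the MSI's exact formulation. As a hedged illustration only, the sketch below shows one plausible instantiation consistent with the description: it combines the average per-frame displacement of the body keypoint nearest the virtual sensor with the mean pixel-motion magnitude in a patch around it. The function name, the equal weighting, and the input representation are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

def motion_subtlety_index(keypoints, patch_flows):
    """Hypothetical MSI sketch (not the paper's formula).

    keypoints:   (T, 2) pixel coordinates over T frames of the body joint
                 nearest the target virtual sensor location.
    patch_flows: (T-1,) mean optical-flow (pixel-motion) magnitude in a
                 patch around that location, one value per frame pair.
    Returns a scalar; larger values indicate more pronounced motion,
    smaller values more subtle motion.
    """
    # Mean per-frame keypoint displacement (pose change component).
    pose_motion = np.linalg.norm(np.diff(keypoints, axis=0), axis=1).mean()
    # Mean local pixel motion around the sensor location.
    pixel_motion = np.asarray(patch_flows, dtype=float).mean()
    # Equal weighting is an arbitrary illustrative choice.
    return 0.5 * (pose_motion + pixel_motion)

# Toy comparison: a subtle activity (small jitter) vs. a vigorous one.
frames = 50
kp_subtle = np.cumsum(np.full((frames, 2), 0.1), axis=0)    # ~0.14 px/frame
kp_vigorous = np.cumsum(np.full((frames, 2), 5.0), axis=0)  # ~7.1 px/frame
msi_subtle = motion_subtlety_index(kp_subtle, np.full(frames - 1, 0.2))
msi_vigorous = motion_subtlety_index(kp_vigorous, np.full(frames - 1, 6.0))
```

Under this sketch, subtle activities yield a much smaller MSI than vigorous ones, matching the intended use of the index as a predictor of how well video-derived virtual sensor data will support recognition.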
no code implementations • 18 Oct 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz
In human activity recognition (HAR), the limited availability of annotated data presents a significant challenge.