Search Results for author: Zikang Leng

Found 4 papers, 2 papers with code

IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition

1 code implementation • 1 Feb 2024 • Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, Thomas Plötz

With the emergence of generative AI models such as large language models (LLMs) and text-driven motion synthesis models, language has become a promising source data modality, as shown in proofs of concept such as IMUGPT (see the sketch below).

Human Activity Recognition · Motion Synthesis
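
The IMUGPT concept chains an LLM (generating textual activity descriptions) into a text-driven motion synthesis model (producing 3D joint trajectories), then derives virtual IMU signals from the synthesized motion. Below is a minimal sketch of only that last step, assuming the motion arrives as per-frame 3D positions of a single joint; the function name, the y-up gravity convention, and the world-frame alignment are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def virtual_acceleration(positions: np.ndarray, fps: float) -> np.ndarray:
    """Approximate accelerometer readings from joint positions.

    positions: (T, 3) array of 3D positions (metres) of one joint,
               e.g. the wrist, sampled at `fps` frames per second.
    Returns a (T-2, 3) array of readings in m/s^2.
    """
    dt = 1.0 / fps
    # Second-order central finite difference:
    # a_t ≈ (p_{t+1} - 2 p_t + p_{t-1}) / dt^2
    accel = (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / dt**2
    # An accelerometer senses specific force, so a device at rest reads
    # +1 g along the "up" axis; assume y is up and the sensor frame is
    # aligned with the world frame (an assumption for this sketch).
    accel += np.array([0.0, 9.81, 0.0])
    return accel

# Toy usage: a wrist oscillating along the x-axis, sampled at 30 fps.
t = np.arange(0, 2, 1 / 30)
pos = np.stack([0.1 * np.sin(2 * np.pi * t),
                np.zeros_like(t),
                np.zeros_like(t)], axis=1)
print(virtual_acceleration(pos, fps=30).shape)  # (58, 3)
```

A full pipeline would additionally rotate the readings into the moving sensor frame and match the noise characteristics of real devices.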

On the Benefit of Generative Foundation Models for Human Activity Recognition

no code implementations • 18 Oct 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

In human activity recognition (HAR), the limited availability of annotated data presents a significant challenge.

Human Activity Recognition · Motion Synthesis

Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition

1 code implementation • 4 May 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

We benchmark our approach on three HAR datasets (RealWorld, PAMAP2, and USC-HAD) and demonstrate that using virtual IMU training data generated with our new approach leads to significantly improved HAR model performance compared to using real IMU data alone (see the sketch below).

Human Activity Recognition · Motion Synthesis
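
The comparison the abstract describes can be sketched as two training runs over windowed IMU features: one on real data alone, one on real plus virtual data. Everything below is a stand-in: the arrays are random placeholders (so both runs score near chance), and the classifier choice is an assumption, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder windowed IMU features: (n_windows, n_features) plus labels.
X_real, y_real = rng.normal(size=(200, 60)), rng.integers(0, 5, 200)
X_virt, y_virt = rng.normal(size=(800, 60)), rng.integers(0, 5, 800)
X_test, y_test = rng.normal(size=(100, 60)), rng.integers(0, 5, 100)

for name, (X, y) in {
    "real only": (X_real, y_real),
    "real + virtual": (np.vstack([X_real, X_virt]),
                       np.concatenate([y_real, y_virt])),
}.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```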

Fine-grained Human Activity Recognition Using Virtual On-body Acceleration Data

no code implementations • 2 Nov 2022 • Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz

In this work, we first introduce a measure to quantitatively assess the subtlety of the human movements underlying activities of interest: the motion subtlety index (MSI), which captures local pixel movements and pose changes in the vicinity of target virtual sensor locations. We then correlate it with the eventual activity recognition accuracy (see the sketch below).

Human Activity Recognition
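
The snippet does not give the exact MSI formula, so the following is only an illustrative stand-in: it scores "local pixel movement" as the mean absolute inter-frame difference inside a window around a per-frame sensor keypoint. The function name, window size, and the frame-differencing proxy are all assumptions.

```python
import numpy as np

def motion_subtlety(frames: np.ndarray, joint_xy: np.ndarray,
                    radius: int = 16) -> float:
    """Crude stand-in for an MSI-like score: mean absolute inter-frame
    pixel change in a (2*radius)^2 window around the sensor location.

    frames:   (T, H, W) grayscale video, float values in [0, 1].
    joint_xy: (T, 2) per-frame (x, y) pixel position of the target
              keypoint, e.g. a wrist location from a pose estimator.
    """
    h, w = frames.shape[1:]
    scores = []
    for t in range(1, len(frames)):
        x, y = joint_xy[t].astype(int)
        y0, y1 = max(0, y - radius), min(h, y + radius)
        x0, x1 = max(0, x - radius), min(w, x + radius)
        # Local pixel movement proxy: difference of consecutive frames.
        scores.append(np.abs(frames[t, y0:y1, x0:x1]
                             - frames[t - 1, y0:y1, x0:x1]).mean())
    return float(np.mean(scores))

# Toy usage: 30 random frames with a fixed wrist location.
frames = np.random.default_rng(0).random((30, 120, 160))
joint_xy = np.tile([80.0, 60.0], (30, 1))
print(motion_subtlety(frames, joint_xy))
```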
