Search Results for author: Hyeokhyen Kwon

Found 8 papers, 3 papers with code

Data-Driven Depth Map Refinement via Multi-Scale Sparse Representation

no code implementations • CVPR 2015 • Hyeokhyen Kwon, Yu-Wing Tai, Stephen Lin

Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization.

Dictionary Learning • Quantization
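
The listing gives only the title and tags, but the core technique the tags point to (dictionary learning over depth patches) can be sketched. Below is a minimal, single-scale stand-in using scikit-learn; the paper's actual method is multi-scale and data-driven in ways not reproduced here, and the function name, patch size, and sparsity level are all assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def refine_depth(noisy_depth, clean_depth_examples, patch_size=(8, 8), n_atoms=128):
    """Denoise a depth map by sparse coding over a dictionary learned from
    clean example depth patches (single-scale stand-in; parameters assumed)."""
    # Learn a dictionary of depth-patch atoms from clean training maps.
    train = np.vstack([
        extract_patches_2d(d, patch_size, max_patches=2000)
        .reshape(-1, patch_size[0] * patch_size[1])
        for d in clean_depth_examples
    ])
    train = train - train.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",    # orthogonal matching pursuit
        transform_n_nonzero_coefs=5,  # sparsity level (assumed)
    ).fit(train)

    # Sparse-code the degraded patches and reconstruct the depth map,
    # averaging overlapping patches.
    patches = extract_patches_2d(noisy_depth, patch_size)
    flat = patches.reshape(len(patches), -1)
    means = flat.mean(axis=1, keepdims=True)
    recon = dico.transform(flat - means) @ dico.components_ + means
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy_depth.shape)
```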

RGB-Guided Hyperspectral Image Upsampling

no code implementations • ICCV 2015 • Hyeokhyen Kwon, Yu-Wing Tai

In contrast, the latest imaging sensors capture an RGB image at a resolution several times larger than that of a hyperspectral image.

IMUTube: Automatic Extraction of Virtual on-body Accelerometry from Video for Human Activity Recognition

no code implementations • 29 May 2020 • Hyeokhyen Kwon, Catherine Tong, Harish Haresamudram, Yan Gao, Gregory D. Abowd, Nicholas D. Lane, Thomas Ploetz

The lack of large-scale, labeled data sets impedes progress in developing robust and generalized predictive models for on-body sensor-based human activity recognition (HAR).

Human Activity Recognition
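
The central idea named in the title, turning video-derived 3D joint trajectories into virtual accelerometer signals, reduces at its core to differentiating position twice. The sketch below shows only that step, assuming 3D joint positions have already been estimated from the video; the full IMUTube pipeline (2D/3D pose estimation, camera motion compensation, rotation into the local sensor frame, calibration) is omitted, and the function name and signature are hypothetical.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2), z-up assumed

def virtual_accelerometry(joint_positions, fps):
    """Derive a virtual accelerometer signal from a 3D joint trajectory.

    joint_positions: (T, 3) world-frame positions in meters for the body
    location where the virtual sensor is 'worn' (e.g. the wrist).
    Returns a (T-2, 3) specific-force signal: at rest it reads
    (0, 0, +9.81), matching a physical accelerometer. Rotating into the
    sensor's local frame is omitted here.
    """
    dt = 1.0 / fps
    # Second-order central difference: a[t] = (p[t+1] - 2 p[t] + p[t-1]) / dt^2
    accel = (joint_positions[2:] - 2 * joint_positions[1:-1] + joint_positions[:-2]) / dt**2
    # An accelerometer measures specific force: acceleration minus gravity.
    return accel - GRAVITY
```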

Fine-grained Human Activity Recognition Using Virtual On-body Acceleration Data

no code implementations • 2 Nov 2022 • Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz

In this work, we first introduce a measure to quantitatively assess the subtlety of the human movements underlying activities of interest, the motion subtlety index (MSI), which captures local pixel movements and pose changes in the vicinity of target virtual sensor locations, and we correlate it with eventual activity recognition accuracy.

Human Activity Recognition
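
As a rough illustration of what a motion-subtlety measure could look like, the sketch below scores a clip by the mean optical-flow magnitude around the target virtual-sensor pixel. This is an assumption-laden simplification, not the published MSI definition: the paper's measure also incorporates pose changes, and the function name and window size are hypothetical.

```python
import numpy as np
import cv2

def motion_subtlety_score(frames, sensor_xy, window=32):
    """Score a clip by the mean optical-flow magnitude in a window around
    the target virtual-sensor pixel location (x, y); low scores indicate
    subtle movements. frames: list of BGR images, e.g. from cv2.VideoCapture."""
    x, y = sensor_xy
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    magnitudes = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean flow magnitude in the patch around the sensor location.
        patch = flow[max(y - window, 0):y + window, max(x - window, 0):x + window]
        magnitudes.append(np.linalg.norm(patch, axis=-1).mean())
        prev = gray
    return float(np.mean(magnitudes))
```

Correlating such per-clip scores with per-activity recognition accuracy (for example with scipy.stats.pearsonr) would mirror the analysis the abstract describes.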

Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition

1 code implementation • 4 May 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

We benchmarked our approach on three HAR datasets (RealWorld, PAMAP2, and USC-HAD) and demonstrated that virtual IMU training data generated with our new approach leads to significantly better HAR model performance than using real IMU data alone.

Human Activity Recognition • Motion Synthesis
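
The title describes a three-stage pipeline: textual descriptions of activities are generated, converted to 3D motion, and then to virtual IMU data. The stub below sketches how such stages might chain; it is entirely hypothetical, with the description and motion generators passed in as callables, the joint index an assumption that depends on the skeleton format, and virtual_accelerometry() reused from the IMUTube sketch above. None of these names come from the paper's released code.

```python
WRIST_JOINT_INDEX = 20  # assumed index; depends entirely on the skeleton format

def generate_virtual_imu_dataset(activity_labels, llm_describe, text_to_motion, fps=30):
    """Hypothetical pipeline stub: (1) an LLM produces varied textual
    descriptions of each activity, (2) a text-to-motion model synthesizes
    3D joint trajectories, (3) virtual acceleration is derived from the
    wrist trajectory via virtual_accelerometry() defined earlier."""
    dataset = []
    for label in activity_labels:
        for text in llm_describe(label):          # e.g. "a person scrubs a pan vigorously"
            joints = text_to_motion(text)         # expected shape: (T, num_joints, 3)
            wrist = joints[:, WRIST_JOINT_INDEX]  # trajectory at the virtual sensor's joint
            dataset.append((virtual_accelerometry(wrist, fps), label))
    return dataset
```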

A Feasibility Study on Indoor Localization and Multi-person Tracking Using Sparsely Distributed Camera Network with Edge Computing

1 code implementation • 8 May 2023 • Hyeokhyen Kwon, Chaitra Hegde, Yashar Kiarashi, Venkata Siva Krishna Madala, Ratan Singh, ArjunSinh Nakum, Robert Tweedy, Leandro Miletto Tonetto, Craig M. Zimring, Matthew Doiron, Amy D. Rodriguez, Allan I. Levey, Gari D. Clifford

To this end, we deployed an end-to-end edge-computing pipeline that uses multiple cameras for localization, body orientation estimation, and tracking of multiple individuals within a large therapeutic space spanning $1700\,m^2$, all while preserving privacy.

Edge-computing • Human Detection • +4
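
One standard geometric building block for multi-camera indoor localization, shown below as an assumed simplification rather than the paper's method, is projecting each camera's person detection onto a shared floor plane with a pre-calibrated homography and fusing the results. The deployed system additionally handles person detection, orientation estimation, cross-camera identity association, and tracking, none of which appear here; the function name and data layout are hypothetical.

```python
import numpy as np

def localize_on_floor(foot_pixels_by_camera, homographies):
    """Fuse one person's detections from multiple cameras into floor-plane
    coordinates. foot_pixels_by_camera maps camera id -> (u, v) pixel of the
    detection's bottom-center; homographies maps camera id -> pre-calibrated
    3x3 image-to-floor homography. Returns the mean (x, y) in floor units."""
    points = []
    for cam_id, (u, v) in foot_pixels_by_camera.items():
        # Project the homogeneous pixel coordinate onto the floor plane.
        p = homographies[cam_id] @ np.array([u, v, 1.0])
        points.append(p[:2] / p[2])  # dehomogenize to floor-plane (x, y)
    return np.mean(points, axis=0)
```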

On the Benefit of Generative Foundation Models for Human Activity Recognition

no code implementations • 18 Oct 2023 • Zikang Leng, Hyeokhyen Kwon, Thomas Plötz

In human activity recognition (HAR), the limited availability of annotated data presents a significant challenge.

Human Activity Recognition • Motion Synthesis

IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition

1 code implementation • 1 Feb 2024 • Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, Thomas Plötz

With the emergence of generative AI models such as large language models (LLMs) and text-driven motion synthesis models, language has become a promising source data modality, as shown in proofs of concept such as IMUGPT.

Human Activity Recognition • Motion Synthesis
