Search Results for author: Shuhong Chen

Found 12 papers, 4 papers with code

Improving the Perceptual Quality of 2D Animation Interpolation

1 code implementation · 24 Nov 2021 · Shuhong Chen, Matthias Zwicker

Traditional 2D animation is labor-intensive, often requiring animators to manually draw twelve illustrations per second of movement.


Transfer Learning for Pose Estimation of Illustrated Characters

1 code implementation · 4 Aug 2021 · Shuhong Chen, Matthias Zwicker

Likewise, a pose estimator for the illustrated character domain would provide a valuable prior for assistive content creation tasks, such as reference pose retrieval and automatic character animation.

Tasks: Activity Recognition, Pose Estimation, +3

Neural Radiosity

1 code implementation · 26 May 2021 · Saeed Hadadan, Shuhong Chen, Matthias Zwicker

We introduce Neural Radiosity, an algorithm that solves the rendering equation by minimizing the norm of its residual, similar to traditional radiosity techniques.
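The residual-minimization idea can be illustrated on the classical discrete radiosity system b = e + ρ F b. The sketch below is a toy, not the paper's method: it replaces the neural network with a plain vector of unknowns and uses made-up form factors F, emission e, and reflectivity ρ, but it minimizes the squared residual norm by gradient descent in the same spirit.

```python
import numpy as np

# Toy discrete radiosity system: b = e + rho * F @ b.
# Instead of solving the linear system directly, minimize the
# residual norm ||A b - e||^2 with A = I - rho * F by gradient
# descent, mirroring (in spirit) Neural Radiosity's residual loss.
# F, e, and rho are illustrative made-up data, not from the paper.

rng = np.random.default_rng(0)
n = 8
F = rng.random((n, n))
F /= F.sum(axis=1, keepdims=True)   # rows sum to 1 (energy conservation)
rho = 0.5                           # reflectivity
e = rng.random(n)                   # emission

A = np.eye(n) - rho * F             # residual r(b) = A @ b - e
b = np.zeros(n)                     # unknown radiosity values
lr = 0.3
for _ in range(2000):
    r = A @ b - e
    b -= lr * (A.T @ r)             # gradient of 0.5 * ||r||^2

print(np.linalg.norm(A @ b - e))    # residual norm, near zero
```

In the paper the unknown is a neural network evaluated at sampled surface points rather than a finite vector, and the integral is estimated by Monte Carlo, but the loss has the same "norm of the rendering-equation residual" form.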

Using Sampling Strategy to Assist Consensus Sequence Analysis

no code implementations · 19 Aug 2020 · Zhichao Xu, Shuhong Chen

Consensus sequences of event logs are often used in process mining to quickly grasp the core sequence of events to be performed in a process, or to represent the backbone of the process for other analyses.

Hybrid Attention based Multimodal Network for Spoken Language Classification

no code implementations · COLING 2018 · Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic

The proposed hybrid attention architecture helps the system focus on learning informative representations for both modality-specific feature extraction and model fusion.

Tasks: Classification, Emotion Recognition, +4

Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment

no code implementations · ACL 2018 · Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic

Multimodal affective computing, learning to recognize and interpret human affects and subjective information from multiple data sources, is still challenging because: (i) it is hard to extract informative features to represent human affects from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at an abstract level, ignoring time-dependent interactions between modalities.
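The "word-level alignment" in the title addresses point (ii): rather than fusing a single abstract summary per modality, each word attends over the acoustic frames so fusion happens at word granularity. The sketch below uses plain dot-product attention with made-up shapes and random features as a stand-in; the paper's actual hierarchical attention architecture differs.

```python
import numpy as np

# Hedged sketch of word-level alignment via dot-product attention:
# each word embedding attends over per-frame acoustic features to
# build a word-aligned acoustic summary, which is then fused with
# the word embedding. Shapes and values are illustrative only.

rng = np.random.default_rng(1)
d = 16
words = rng.random((5, d))        # 5 word embeddings (text modality)
frames = rng.random((40, d))      # 40 acoustic frames (audio modality)

scores = words @ frames.T / np.sqrt(d)               # (5, 40) similarities
w = np.exp(scores - scores.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)                    # softmax over frames
aligned_audio = w @ frames                           # (5, d) per-word summary

fused = np.concatenate([words, aligned_audio], axis=1)  # (5, 2d) fused input
print(fused.shape)
```

Because the acoustic summary is recomputed per word, the fused representation preserves time-dependent interactions between the modalities instead of collapsing them into one utterance-level vector.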

Deep Multimodal Learning for Emotion Recognition in Spoken Language

no code implementations · 22 Feb 2018 · Yue Gu, Shuhong Chen, Ivan Marsic

In this paper, we present a novel deep multimodal framework to predict human emotions based on sentence-level spoken language.

Tasks: Emotion Recognition

Process-oriented Iterative Multiple Alignment for Medical Process Mining

no code implementations · 16 Sep 2017 · Shuhong Chen, Sen Yang, Moliang Zhou, Randall S. Burd, Ivan Marsic

We applied PIMA to analyzing medical workflow data, showing how iterative alignment can better represent the data and facilitate the extraction of insights from data visualization.

Tasks: Data Visualization

Progress Estimation and Phase Detection for Sequential Processes

no code implementations · 28 Feb 2017 · Xinyu Li, Yanyi Zhang, Jianyu Zhang, Yueyang Chen, Shuhong Chen, Yue Gu, Moliang Zhou, Richard A. Farneth, Ivan Marsic, Randall S. Burd

For the Olympic swimming dataset, our system achieved an accuracy of 88%, an F1-score of 0.58, a completeness estimation error of 6.3%, and a remaining-time estimation error of 2.9 minutes.

Tasks: Activity Recognition, Multimodal Deep Learning

Concurrent Activity Recognition with Multimodal CNN-LSTM Structure

no code implementations · 6 Feb 2017 · Xinyu Li, Yanyi Zhang, Jianyu Zhang, Shuhong Chen, Ivan Marsic, Richard A. Farneth, Randall S. Burd

Our system is the first to address the concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train and easy to deploy.

Tasks: Concurrent Activity Recognition, Decision Making
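The key modeling choice behind recognizing concurrent activities with a single model is to treat the problem as multi-label rather than multi-class classification: an independent sigmoid per activity lets several activities fire at once, whereas a softmax forces exactly one. The sketch below illustrates only this output layer; the logits are made-up stand-ins for a CNN-LSTM's output, and the model itself is omitted.

```python
import numpy as np

# Multi-label output head for concurrent activity recognition:
# one independent sigmoid per activity, thresholded separately,
# instead of a mutually exclusive softmax. The logits are
# illustrative values, not output from the paper's CNN-LSTM.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.1, -0.3, 1.4, -2.0])  # one logit per activity
probs = sigmoid(logits)                    # independent probabilities
active = probs > 0.5                       # several labels may be True

print(active)                              # activities 0 and 2 are both on
```

With a softmax head, the probabilities would be forced to sum to 1 and only the top-scoring activity could be reported, which is why the multi-label formulation is what makes a single scalable model possible here.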
