1 code implementation • CVPR 2023 • Shuhong Chen, Kevin Zhang, Yichun Shi, Heng Wang, Yiheng Zhu, Guoxian Song, Sizhe An, Janus Kristjansson, Xiao Yang, Matthias Zwicker
We propose PAniC-3D, a system to reconstruct stylized 3D character heads directly from illustrated (p)ortraits of (ani)me (c)haracters.
1 code implementation • 24 Nov 2021 • Shuhong Chen, Matthias Zwicker
Traditional 2D animation is labor-intensive, often requiring animators to manually draw twelve illustrations per second of movement.
1 code implementation • 4 Aug 2021 • Shuhong Chen, Matthias Zwicker
Likewise, a pose estimator for the illustrated character domain would provide a valuable prior for assistive content creation tasks, such as reference pose retrieval and automatic character animation.
1 code implementation • 26 May 2021 • Saeed Hadadan, Shuhong Chen, Matthias Zwicker
We introduce Neural Radiosity, an algorithm that solves the rendering equation by minimizing the norm of its residual, analogous to traditional radiosity techniques.
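For context, the objective can be written as minimizing the norm of the rendering-equation residual; the notation below is an assumed sketch ($L_\theta$ for the learned radiance, $E$ for emission, $f$ for the BSDF, $x'(x,\omega_i)$ for the point visible from $x$ in direction $\omega_i$), not necessarily the paper's:

$$
r_\theta(x,\omega_o) = L_\theta(x,\omega_o) - E(x,\omega_o) - \int_{\mathcal{H}^2} f(x,\omega_i,\omega_o)\, L_\theta\big(x'(x,\omega_i),\, -\omega_i\big)\, |\cos\theta_i|\, d\omega_i,
\qquad
\theta^{*} = \arg\min_\theta \|r_\theta\|_2^2 .
$$

Driving $\|r_\theta\|$ to zero over all surface points and outgoing directions makes $L_\theta$ a solution of the rendering equation, in the same spirit as residual minimization in classical radiosity.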
no code implementations • 19 Aug 2020 • Zhichao Xu, Shuhong Chen
Consensus sequences of event logs are often used in process mining to quickly grasp the core sequence of events performed in a process, or to represent the backbone of the process for further analysis.
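As a generic illustration of the concept (not the authors' method), a consensus sequence can be computed from already-aligned traces by per-position majority vote; the traces and gap symbol below are hypothetical:

```python
from collections import Counter

GAP = "-"  # hypothetical gap symbol produced by the alignment step

def consensus(aligned_traces):
    """Majority-vote consensus over pre-aligned, equal-length event traces.

    aligned_traces: list of equal-length lists of event labels (with gaps).
    Returns the consensus sequence, dropping positions where gaps dominate.
    """
    result = []
    for column in zip(*aligned_traces):
        event, _ = Counter(column).most_common(1)[0]
        if event != GAP:  # keep only positions where a real event dominates
            result.append(event)
    return result

# Example: three aligned traces from a hypothetical medical process
traces = [
    ["triage", "xray", "-",    "suture"],
    ["triage", "xray", "meds", "suture"],
    ["triage", "-",    "meds", "suture"],
]
print(consensus(traces))  # ['triage', 'xray', 'meds', 'suture']
```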
no code implementations • 6 Dec 2018 • Yanyi Zhang, Xinyu Li, Kaixiang Huang, Yehan Wang, Shuhong Chen, Ivan Marsic
We present a system for concurrent activity recognition.
no code implementations • COLING 2018 • Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic
The proposed hybrid attention architecture helps the system focus on learning informative representations for both modality-specific feature extraction and model fusion.
no code implementations • ACL 2018 • Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic
Multimodal affective computing, learning to recognize and interpret human affects and subjective information from multiple data sources, is still challenging because: (i) it is hard to extract informative features representing human affects from heterogeneous inputs; (ii) current fusion strategies only fuse the modalities at an abstract level, ignoring time-dependent interactions between modalities.
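To make point (ii) concrete, the sketch below goes beyond abstract-level (late) fusion by letting one modality attend to the other at each time step; the dimensions, module names, and class count are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Toy cross-modal fusion: text steps attend over audio steps, so the
    fused feature captures time-dependent interactions rather than a single
    abstract-level concatenation of per-modality summaries."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 4)  # 4 emotion classes (illustrative)

    def forward(self, text_seq, audio_seq):
        # text_seq: (B, T_text, dim); audio_seq: (B, T_audio, dim)
        fused, _ = self.attn(query=text_seq, key=audio_seq, value=audio_seq)
        pooled = torch.cat([text_seq.mean(1), fused.mean(1)], dim=-1)
        return self.classifier(pooled)

model = AttentionFusion()
logits = model(torch.randn(2, 10, 64), torch.randn(2, 30, 64))
print(logits.shape)  # torch.Size([2, 4])
```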
no code implementations • 22 Feb 2018 • Yue Gu, Shuhong Chen, Ivan Marsic
In this paper, we present a novel deep multimodal framework to predict human emotions based on sentence-level spoken language.
no code implementations • 16 Sep 2017 • Shuhong Chen, Sen Yang, Moliang Zhou, Randall S. Burd, Ivan Marsic
We applied PIMA to the analysis of medical workflow data, showing how iterative alignment can better represent the data and facilitate extracting insights through visualization.
no code implementations • 28 Feb 2017 • Xinyu Li, Yanyi Zhang, Jianyu Zhang, Yueyang Chen, Shuhong Chen, Yue Gu, Moliang Zhou, Richard A. Farneth, Ivan Marsic, Randall S. Burd
For the Olympic swimming dataset, our system achieved an accuracy of 88%, an F1-score of 0.58, a completeness estimation error of 6.3%, and a remaining-time estimation error of 2.9 minutes.
no code implementations • 6 Feb 2017 • Xinyu Li, Yanyi Zhang, Jianyu Zhang, Shuhong Chen, Ivan Marsic, Richard A. Farneth, Randall S. Burd
Our system is the first to address concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train, and easy to deploy.