Search Results for author: Kangning Yang

Found 2 papers, 0 papers with code

Hybrid Attention based Multimodal Network for Spoken Language Classification

no code implementations · COLING 2018 · Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic

The proposed hybrid attention architecture helps the system focus on learning informative representations for both modality-specific feature extraction and model fusion.

Tasks: Classification, Emotion Recognition, +4 more
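The abstract above describes attention applied both within each modality (feature extraction) and across modalities (fusion). The snippet below is a minimal, hypothetical sketch of that general idea: per-modality self-attention followed by a cross-modal attention step. The module name, dimensions, head count, and pooling choice are all illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch: intra-modality self-attention + cross-modal fusion attention.
# Names and dimensions are assumptions; the paper's architecture may differ.
import torch
import torch.nn as nn

class HybridAttentionFusion(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, hidden=128, heads=4, num_classes=4):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        # Modality-specific self-attention (feature extraction).
        self.text_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Cross-modal attention: text queries attend over audio frames (fusion).
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, text, audio):
        t = self.text_proj(text)              # (B, T_text, hidden)
        a = self.audio_proj(audio)            # (B, T_audio, hidden)
        t, _ = self.text_attn(t, t, t)        # intra-modality attention
        a, _ = self.audio_attn(a, a, a)
        fused, _ = self.cross_attn(t, a, a)   # inter-modality (fusion) attention
        # Pool over time and classify.
        pooled = torch.cat([fused.mean(dim=1), a.mean(dim=1)], dim=-1)
        return self.classifier(pooled)

# Example usage with random tensors standing in for word embeddings / acoustic frames.
model = HybridAttentionFusion()
logits = model(torch.randn(2, 20, 300), torch.randn(2, 50, 74))
print(logits.shape)  # torch.Size([2, 4])
```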

Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment

no code implementations · ACL 2018 · Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic

Multimodal affective computing, learning to recognize and interpret human affects and subjective information from multiple data sources, is still challenging because (i) it is hard to extract informative features to represent human affects from heterogeneous inputs, and (ii) current fusion strategies only fuse different modalities at an abstract level, ignoring time-dependent interactions between modalities.
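The second challenge above motivates the paper's word-level alignment: fusing modalities per word rather than only at an abstract, utterance-level stage. Below is a minimal, hypothetical sketch of that idea, in which each word vector attends over the acoustic frame sequence to obtain a word-aligned acoustic summary before fusion. The single-head dot-product attention, dimensions, and names are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch of word-level alignment between text and audio.
# Each word attends over acoustic frames; the attended summary is fused with the word.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordLevelAlignment(nn.Module):
    def __init__(self, word_dim=300, frame_dim=74, hidden=128):
        super().__init__()
        self.word_proj = nn.Linear(word_dim, hidden)
        self.frame_proj = nn.Linear(frame_dim, hidden)

    def forward(self, words, frames):
        # words: (B, W, word_dim), frames: (B, F, frame_dim)
        q = self.word_proj(words)                         # (B, W, hidden)
        k = self.frame_proj(frames)                       # (B, F, hidden)
        scores = torch.bmm(q, k.transpose(1, 2))          # (B, W, F) word-to-frame scores
        weights = F.softmax(scores / k.size(-1) ** 0.5, dim=-1)
        aligned_audio = torch.bmm(weights, k)              # (B, W, hidden) per-word audio summary
        # Fuse: concatenate each word with its aligned acoustic summary.
        return torch.cat([q, aligned_audio], dim=-1)       # (B, W, 2 * hidden)

# Example usage: 12 words and 40 acoustic frames per utterance.
align = WordLevelAlignment()
fused = align(torch.randn(2, 12, 300), torch.randn(2, 40, 74))
print(fused.shape)  # torch.Size([2, 12, 256])
```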
