Search Results for author: Wanzeng Kong

Found 8 papers, 2 papers with code

Enhanced Coherence-Aware Network with Hierarchical Disentanglement for Aspect-Category Sentiment Analysis

no code implementations15 Mar 2024 Jin Cui, Fumiyo Fukumoto, Xinfeng Wang, Yoshimi Suzuki, Jiyi Li, Noriko Tomuro, Wanzeng Kong

To address the entanglement of multiple aspect categories and sentiments, we propose a hierarchical disentanglement module that extracts distinct category and sentiment features.
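The disentanglement idea above — splitting a shared representation into separate category and sentiment subspaces — can be sketched as follows. The linear heads, dimensions, and names here are illustrative assumptions, not the authors' actual module.

```python
import numpy as np

rng = np.random.default_rng(0)

def disentangle(h, w_cat, w_sen):
    """Project a shared feature matrix h into two subspaces.
    Hypothetical linear heads; the paper's hierarchical module is more elaborate."""
    z_cat = np.tanh(h @ w_cat)   # category-specific features
    z_sen = np.tanh(h @ w_sen)   # sentiment-specific features
    return z_cat, z_sen

d, k = 16, 8
h = rng.normal(size=(4, d))          # 4 aspect mentions, shared d-dim features
w_cat = rng.normal(size=(d, k))
w_sen = rng.normal(size=(d, k))
z_cat, z_sen = disentangle(h, w_cat, w_sen)
print(z_cat.shape, z_sen.shape)      # (4, 8) (4, 8)
```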

Aspect Category Sentiment Analysis Disentanglement +2

AEGIS-Net: Attention-guided Multi-Level Feature Aggregation for Indoor Place Recognition

1 code implementation15 Dec 2023 Yuhang Ming, Jian Ma, Xingrui Yang, Weichen Dai, Yong Peng, Wanzeng Kong

We evaluate our AEGIS-Net on the ScanNetPR dataset and compare its performance with a pre-deep-learning feature-based method and five state-of-the-art deep-learning-based methods.

Semantic Segmentation

InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis

no code implementations20 Dec 2022 Feng Qiu, Wanzeng Kong, Yu Ding

Humans are sophisticated at reading interlocutors' emotions from multimodal signals, such as speech content, voice tone, and facial expressions.

Emotion Recognition

EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis

no code implementations16 Dec 2022 Feng Qiu, Chengyang Xie, Yu Ding, Wanzeng Kong

In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views: an intact three-modal integrating representation, a modality-shared representation, and three modality-individual representations.
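The three representation types can be sketched with simple linear projections over text, audio, and vision features. All weights, dimensions, and the averaging/concatenation choices below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d_t, d_a, d_v, d = 12, 8, 10, 6   # illustrative per-modality and latent sizes

# hypothetical projection weights for each representation type
W_ind = {m: rng.normal(size=(dm, d)) for m, dm in [("t", d_t), ("a", d_a), ("v", d_v)]}
W_shr = {m: rng.normal(size=(dm, d)) for m, dm in [("t", d_t), ("a", d_a), ("v", d_v)]}
W_int = rng.normal(size=(d_t + d_a + d_v, d))

x = {"t": rng.normal(size=d_t), "a": rng.normal(size=d_a), "v": rng.normal(size=d_v)}

# modality-individual: one latent vector per modality
individual = {m: np.tanh(x[m] @ W_ind[m]) for m in x}
# modality-shared: project each modality into a common space, then pool
shared = np.tanh(np.mean([x[m] @ W_shr[m] for m in x], axis=0))
# three-modal integrating: fuse all raw features into one latent vector
integrated = np.tanh(np.concatenate([x["t"], x["a"], x["v"]]) @ W_int)

print(shared.shape, integrated.shape)  # (6,) (6,)
```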

Emotion Recognition

CTFN: Hierarchical Learning for Multimodal Sentiment Analysis Using Coupled-Translation Fusion Network

no code implementations ACL 2021 Jiajia Tang, Kang Li, Xuanyu Jin, Andrzej Cichocki, Qibin Zhao, Wanzeng Kong

In this work, the coupled-translation fusion network (CTFN) is first proposed to model bi-directional interplay via coupled learning, ensuring robustness with respect to missing modalities.

Multimodal Sentiment Analysis Translation

BCGGAN: Ballistocardiogram artifact removal in simultaneous EEG-fMRI using generative adversarial network

no code implementations3 Nov 2020 Guang Lin, Jianhai Zhang, Yuxi Liu, Tianyang Gao, Wanzeng Kong, Xu Lei, Tao Qiu

Owing to its combined high temporal and spatial resolution, simultaneous electroencephalogram-functional magnetic resonance imaging (EEG-fMRI) acquisition and analysis has attracted much attention and is widely used across many fields of brain science.

EEG Electroencephalogram (EEG) +1

Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline

1 code implementation3 Jul 2020 Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng

Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and has demonstrated promising performance.
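One common alignment step in such cross-subject pipelines is Euclidean alignment, which whitens each subject's EEG trials by the inverse square root of their mean spatial covariance so that aligned covariances average to identity. The sketch below shows that step in isolation; it is one plausible component under stated assumptions, not the paper's exact code.

```python
import numpy as np

def euclidean_align(trials):
    """Whiten EEG trials (trials x channels x samples) by the inverse
    square root of the mean spatial covariance across trials."""
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    r = covs.mean(axis=0)
    # inverse matrix square root via eigendecomposition (r is symmetric PSD)
    vals, vecs = np.linalg.eigh(r)
    r_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return np.stack([r_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(2)
trials = rng.normal(size=(20, 4, 100))   # 20 trials, 4 channels, 100 samples
aligned = euclidean_align(trials)

# after alignment, the mean spatial covariance is the identity
mean_cov = np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(4), atol=1e-8))  # True
```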

Classification EEG +4
