Search Results for author: Zheng Lian

Found 29 papers, 12 papers with code

HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition

1 code implementation · 11 Jan 2024 · Licai Sun, Zheng Lian, Bin Liu, JianHua Tao

Audio-Visual Emotion Recognition (AVER) has garnered increasing attention in recent years for its critical role in creating emotion-aware intelligent machines.

Contrastive Learning Dynamic Facial Expression Recognition +3

GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition

1 code implementation · 7 Dec 2023 · Zheng Lian, Licai Sun, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Bin Liu, JianHua Tao

To bridge this gap, we present the quantitative evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual sentiment analysis, tweet sentiment analysis, micro-expression recognition, facial emotion recognition, dynamic facial emotion recognition, and multimodal emotion recognition.

Facial Emotion Recognition Micro Expression Recognition +3

Learning Noise-Robust Joint Representation for Multimodal Emotion Recognition under Incomplete Data Scenarios

1 code implementation · 21 Sep 2023 · Qi Fan, Haolin Zuo, Rui Liu, Zheng Lian, Guanglai Gao

This approach includes two pivotal components: firstly, a noise scheduler that adjusts the type and level of noise in the data to emulate various realistic incomplete situations.

Multimodal Emotion Recognition
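
The noise scheduler described in the abstract above can be sketched as follows. This is a hypothetical illustration of the general idea (randomly dropping modalities or perturbing their features to emulate incomplete data), not the paper's actual implementation; the function name, probabilities, and feature layout are all assumptions.

```python
import random

def noise_scheduler(sample, drop_prob=0.3, noise_std=0.1, rng=None):
    """Emulate incomplete multimodal data: with probability `drop_prob` a whole
    modality is dropped (features zeroed out); otherwise its features are
    perturbed with Gaussian noise of standard deviation `noise_std`.
    `sample` maps a modality name to a list of float features."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    corrupted = {}
    for modality, feats in sample.items():
        if rng.random() < drop_prob:
            # Simulate a fully missing modality (e.g. camera occluded).
            corrupted[modality] = [0.0] * len(feats)
        else:
            # Simulate sensor/transmission noise on the observed features.
            corrupted[modality] = [f + rng.gauss(0.0, noise_std) for f in feats]
    return corrupted

sample = {"audio": [0.5, -0.2], "video": [1.0, 0.3], "text": [0.1, 0.9]}
noisy = noise_scheduler(sample)
```

In training, such a scheduler would typically vary `drop_prob` and `noise_std` over epochs so the model sees a curriculum of increasingly realistic incomplete scenarios.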

MFAS: Emotion Recognition through Multiple Perspectives Fusion Architecture Search Emulating Human Cognition

no code implementations · 12 Jun 2023 · Haiyang Sun, FuLin Zhang, Zheng Lian, Yingying Guo, Shilei Zhang

Additionally, considering that humans adjust their perception of emotional words in textual semantics based on certain cues present in speech, we design a novel search space and search for the optimal fusion strategy for the two types of information.

Quantization Speech Emotion Recognition

Pseudo Labels Regularization for Imbalanced Partial-Label Learning

no code implementations · 6 Mar 2023 · Mingyu Xu, Zheng Lian

Partial-label learning (PLL) is an important branch of weakly supervised learning where the single ground truth resides in a set of candidate labels, yet existing research rarely considers label imbalance.

Long-tail Learning Partial Label Learning +2

IRNet: Iterative Refinement Network for Noisy Partial Label Learning

1 code implementation · 9 Nov 2022 · Zheng Lian, Mingyu Xu, Lan Chen, Licai Sun, Bin Liu, JianHua Tao

In this paper, we relax this assumption and focus on a more general problem, noisy PLL, where the ground-truth label may not exist in the candidate set.

Data Augmentation Partial Label Learning +1
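
The noisy-PLL setting described above (where the ground-truth label may be absent from the candidate set) can be sketched with a small data generator. This is an illustrative assumption of how such benchmarks are commonly constructed, not the paper's protocol; the function name and rates are hypothetical.

```python
import random

def make_noisy_partial_labels(y_true, num_classes, ambiguity=0.3,
                              noise_rate=0.2, rng=None):
    """Build candidate label sets for noisy partial-label learning.
    Each distractor label enters a candidate set with probability `ambiguity`;
    with probability `noise_rate` the ground-truth label is *removed*,
    which is exactly the relaxation noisy PLL studies."""
    rng = rng or random.Random(42)  # seeded for reproducibility
    candidates = []
    for y in y_true:
        cset = {c for c in range(num_classes)
                if c != y and rng.random() < ambiguity}
        if rng.random() >= noise_rate:
            cset.add(y)  # clean example: truth stays in the candidate set
        if not cset:
            cset.add(rng.randrange(num_classes))  # keep sets non-empty
        candidates.append(sorted(cset))
    return candidates

cands = make_noisy_partial_labels([0, 1, 2, 3], num_classes=5)
```

Standard PLL corresponds to `noise_rate=0.0`, where the true label is guaranteed to lie in every candidate set.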

Supporting Medical Relation Extraction via Causality-Pruned Semantic Dependency Forest

1 code implementation · COLING 2022 · Yifan Jin, Jiangmeng Li, Zheng Lian, Chengbo Jiao, Xiaohui Hu

However, the quality of the 1-best dependency tree for medical texts produced by an out-of-domain parser is relatively limited, so the performance of medical relation extraction methods may degrade.

Medical Relation Extraction Relation +1

Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis

1 code implementation · 16 Aug 2022 · Licai Sun, Zheng Lian, Bin Liu, JianHua Tao

With the proliferation of user-generated online videos, Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently.

Multimodal Sentiment Analysis Representation Learning

Two-Aspect Information Fusion Model For ABAW4 Multi-task Challenge

no code implementations · 23 Jul 2022 · Haiyang Sun, Zheng Lian, Bin Liu, JianHua Tao, Licai Sun, Cong Cai

In this paper, we propose the solution to the Multi-Task Learning (MTL) Challenge of the 4th Affective Behavior Analysis in-the-wild (ABAW) competition.

Multi-Task Learning Vocal Bursts Valence Prediction

GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation

1 code implementation · 4 Mar 2022 · Zheng Lian, Lan Chen, Licai Sun, Bin Liu, JianHua Tao

To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Completion Network (GCNet)", filling the gap of existing works.

Cross Modification Attention Based Deliberation Model for Image Captioning

no code implementations · 17 Sep 2021 · Zheng Lian, Yanan Zhang, Haichang Li, Rui Wang, Xiaohui Hu

The conventional encoder-decoder framework for image captioning generally adopts a single-pass decoding process, which predicts the target descriptive sentence word by word in temporal order.

Decoder Descriptive +2

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments

no code implementations · 6 Aug 2021 · Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei

We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation.

Conversational Emotion Analysis via Attention Mechanisms

no code implementations · 24 Oct 2019 · Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

Different from emotion recognition in individual utterances, we propose a multimodal learning framework that uses relations and dependencies among utterances for conversational emotion analysis.

Emotion Recognition

Domain adversarial learning for emotion recognition

no code implementations · 24 Oct 2019 · Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

The secondary task is to learn a common representation in which speaker identities cannot be distinguished.

Emotion Recognition
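
Adversarial objectives like the secondary task above are commonly realized with a gradient reversal layer: identity in the forward pass, sign-flipped gradient in the backward pass, so the feature extractor learns to confuse the speaker classifier. The scalar sketch below illustrates only that reversal idea and is a hypothetical toy, not the paper's implementation.

```python
class GradReverse:
    """Gradient reversal layer (toy scalar version).
    Forward: identity. Backward: gradient multiplied by -lam, so gradients
    flowing back from an auxiliary (speaker) classifier push the shared
    features toward being indistinguishable across speakers."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # values pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # flip the gradient's sign

grl = GradReverse(lam=0.5)
out = grl.forward(3.0)       # forward pass is the identity
grad = grl.backward(2.0)     # backward pass negates and scales
```

In an autodiff framework this would be a custom autograd function; the scalar class here only mirrors the two passes.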

Expression Analysis Based on Face Regions in Real-world Conditions

no code implementations · 23 Oct 2019 · Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang, Ming-Yue Niu

To sum up, the contributions of this paper lie in two areas: 1) We visualize concerned areas of human faces in emotion recognition; 2) We analyze the contribution of different face areas to different emotions in real-world conditions through experimental analysis.

Facial Emotion Recognition Facial Expression Recognition +1

Speech Emotion Recognition via Contrastive Loss under Siamese Networks

no code implementations · 23 Oct 2019 · Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang

It outperforms the baseline system, which is optimized without the contrastive loss function, by 1.14% and 2.55% in weighted accuracy and unweighted accuracy, respectively.

feature selection Speech Emotion Recognition
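
The two metrics reported in the abstract above differ in how they handle class imbalance: weighted accuracy (WA) is the overall fraction of correct predictions, while unweighted accuracy (UA) is the mean of per-class recalls. A minimal stdlib sketch:

```python
from collections import defaultdict

def weighted_unweighted_accuracy(y_true, y_pred):
    """Return (WA, UA) for classification labels.
    WA = overall fraction correct (frequent classes dominate);
    UA = mean per-class recall (every class counts equally)."""
    total, correct = 0, 0
    per_class = defaultdict(lambda: [0, 0])  # class -> [correct, count]
    for t, p in zip(y_true, y_pred):
        total += 1
        per_class[t][1] += 1
        if t == p:
            correct += 1
            per_class[t][0] += 1
    wa = correct / total
    ua = sum(c / n for c, n in per_class.values()) / len(per_class)
    return wa, ua

# Imbalanced toy example: class 0 appears three times, class 1 once.
wa, ua = weighted_unweighted_accuracy([0, 0, 0, 1], [0, 0, 1, 1])
```

On this toy example WA is 3/4 while UA is (2/3 + 1/1)/2, showing how the two metrics diverge under class imbalance; UA is also known as balanced accuracy.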
