Search Results for author: Shangfei Wang

Found 21 papers, 0 papers with code

1DFormer: a Transformer Architecture Learning 1D Landmark Representations for Facial Landmark Tracking

no code implementations · 1 Nov 2023 · Shi Yin, Shijie Huan, Shangfei Wang, Jinshui Hu, Tao Guo, Bing Yin, BaoCai Yin, Cong Liu

For temporal modeling, we propose a recurrent token mixing mechanism, an axis-landmark-positional embedding mechanism, and a confidence-enhanced multi-head attention mechanism to adaptively and robustly embed long-term landmark dynamics into their 1D representations. For structure modeling, we design intra-group and inter-group structure modeling mechanisms that encode component-level and global-level facial structure patterns, refining the 1D landmark representations through token communication in the spatial dimension via 1D convolutional layers.

Landmark Tracking
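The "token communication in the spatial dimension via 1D convolutional layers" described in the abstract can be illustrated with a minimal numpy sketch; this is the general idea only, not the authors' implementation, and the scalar tokens and smoothing kernel are illustrative:

```python
import numpy as np

def conv1d_tokens(tokens, kernel):
    # Mix information across the landmark-token axis with a 1D
    # convolution ("same" edge padding); one channel for simplicity.
    k = len(kernel) // 2
    padded = np.pad(tokens, k, mode="edge")
    return np.array([np.dot(padded[i:i + len(kernel)], kernel)
                     for i in range(len(tokens))])

# 8 landmark tokens (scalar features here; real tokens are vectors).
tokens = np.array([0., 0., 1., 0., 0., 0., 2., 0.])
mixed = conv1d_tokens(tokens, np.array([0.25, 0.5, 0.25]))
print(mixed)  # each landmark now carries information from its neighbours
```

After the convolution, each landmark's representation depends on its spatial neighbours, which is how structural patterns can be shared among the 1D landmark representations.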

MEDIC: A Multimodal Empathy Dataset in Counseling

no code implementations · 4 May 2023 · Zhou'an Zhu, Xin Li, Jicai Pan, Yufei Xiao, Yanan Chang, Feiyi Zheng, Shangfei Wang

We also propose three labels (i.e., expression of experience, emotional reaction, and cognitive reaction) to describe the degree of empathy between counselors and their clients.

Representation Learning through Multimodal Attention and Time-Sync Comments for Affective Video Content Analysis

no code implementations · ACM MM 2022 · Jicai Pan, Shangfei Wang, Lin Fang

These self-supervised pre-training tasks prompt the fusion module to perform representation learning on segments including TSC, thus capturing more temporal affective patterns.

Ranked #1 on Video Emotion Recognition on Ekman6 (using extra training data)

Representation Learning · Video Emotion Recognition
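The fusion of video segments with time-sync comments (TSC) suggests a cross-modal attention step, where video features query comment features. A minimal numpy sketch of generic scaled dot-product cross-attention follows; the shapes and random features are illustrative, and this is not the paper's actual fusion module:

```python
import numpy as np

def cross_attention(q, k, v):
    # Scaled dot-product attention: video queries attend to comment keys,
    # producing one comment-informed vector per video segment.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # rows sum to 1
    return w @ v

rng = np.random.default_rng(1)
video_feats   = rng.standard_normal((4, 16))      # 4 video segments
comment_feats = rng.standard_normal((10, 16))     # 10 TSC tokens

fused = cross_attention(video_feats, comment_feats, comment_feats)
print(fused.shape)  # (4, 16): one fused representation per segment
```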

Multi-Task Learning for Emotion Descriptors Estimation at the fourth ABAW Challenge

no code implementations · 20 Jul 2022 · Yanan Chang, Yi Wu, Xiangyu Miao, Jiahe Wang, Shangfei Wang

The 4th competition on affective behavior analysis in the wild (ABAW) provided images with valence/arousal, expression and action unit labels.

Multi-Task Learning
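A multi-task setup over valence/arousal, expression, and action unit labels typically means one shared representation feeding several task heads. The numpy sketch below shows that shape of model only; the dimensions, random weights, and linear heads are illustrative, not the challenge entry's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone feature for one face image (would come from a CNN;
# random here for illustration).
feat_dim, n_expr, n_aus = 128, 8, 12
feat = rng.standard_normal(feat_dim)

# One linear head per task, all reading the same shared representation.
W_va   = rng.standard_normal((2, feat_dim)) * 0.01      # valence/arousal
W_expr = rng.standard_normal((n_expr, feat_dim)) * 0.01  # expression
W_au   = rng.standard_normal((n_aus, feat_dim)) * 0.01   # action units

va   = np.tanh(W_va @ feat)                       # regression in [-1, 1]
expr = np.exp(W_expr @ feat); expr /= expr.sum()  # softmax over expressions
au   = 1 / (1 + np.exp(-(W_au @ feat)))           # independent AU sigmoids

print(va.shape, expr.shape, au.shape)  # (2,) (8,) (12,)
```

Training would sum a loss per head, so gradients from all three tasks shape the shared features.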

Hand-Assisted Expression Recognition Method from Synthetic Images at the Fourth ABAW Challenge

no code implementations · 20 Jul 2022 · Xiangyu Miao, Jiahe Wang, Yanan Chang, Yi Wu, Shangfei Wang

Learning from synthetic images plays an important role in facial expression recognition because labeling real images is difficult, and the task is challenging due to the gap between synthetic and real images.

Facial Expression Recognition · Facial Expression Recognition (FER)

Knowledge-Driven Self-Supervised Representation Learning for Facial Action Unit Recognition

no code implementations · CVPR 2022 · Yanan Chang, Shangfei Wang

To remedy this, we utilize AU labeling rules defined by the Facial Action Coding System (FACS) to design a novel knowledge-driven self-supervised representation learning framework for AU recognition.

Contrastive Learning · Facial Action Unit Detection +1
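The idea of turning FACS labeling rules into supervision can be caricatured as rule-based pseudo-labeling. The mapping below is a toy stand-in: the cue names are invented for illustration, and the paper's actual FACS-derived rules are far more detailed, though AU1 (inner brow raiser), AU2 (outer brow raiser), AU6 (cheek raiser), and AU12 (lip corner puller) are real FACS action units:

```python
# Hypothetical mapping from coarse facial cues to AU pseudo-labels.
RULES = {
    "inner_brow_raised": {"AU1"},
    "outer_brow_raised": {"AU2"},
    "cheeks_raised":     {"AU6"},
    "lip_corners_up":    {"AU12"},
}

def pseudo_label(cues):
    # Union of all AUs implied by the observed cues.
    aus = set()
    for cue in cues:
        aus |= RULES.get(cue, set())
    return sorted(aus)

print(pseudo_label(["lip_corners_up", "cheeks_raised"]))  # ['AU12', 'AU6']
```

Pseudo-labels generated this way can supervise representation learning without manual AU annotation, which is the knowledge-driven premise of the paper.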

Exploring Adversarial Learning for Deep Semi-Supervised Facial Action Unit Recognition

no code implementations · 4 Jun 2021 · Shangfei Wang, Yanan Chang, Guozhu Peng, Bowen Pan

Specifically, the proposed deep semi-supervised AU recognition approach consists of a deep recognition network R and a discriminator D. The recognition network R learns facial representations from large-scale facial images and AU classifiers from limited ground-truth AU labels.

Facial Action Unit Detection

Pose-aware Adversarial Domain Adaptation for Personalized Facial Expression Recognition

no code implementations · 12 Jul 2020 · Guang Liang, Shangfei Wang, Can Wang

The first aims to learn pose- and expression-related feature representations in the source domain and adapt both feature distributions to that of the target domain by imposing adversarial learning.

Disentanglement · Domain Adaptation +2

Attentive One-Dimensional Heatmap Regression for Facial Landmark Detection and Tracking

no code implementations · 5 Apr 2020 · Shi Yin, Shangfei Wang, Xiaoping Chen, Enhong Chen

These 1D heatmaps reduce spatial complexity significantly compared to current heatmap regression methods, which use 2D heatmaps to represent the joint distributions of x and y coordinates.

Face Alignment · Facial Landmark Detection +3
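The complexity reduction claimed above is easy to see in a sketch: two 1D heatmaps of length W and H store W + H values per landmark instead of the W × H values of a joint 2D heatmap. Below is a generic soft-argmax decoder over 1D marginals (a common decoding scheme, not necessarily the paper's exact one); the heatmap sizes and peak positions are illustrative:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def decode_1d_heatmaps(hx, hy):
    # Decode a landmark coordinate from two 1D heatmaps: take the
    # expected index under the softmax distribution on each axis.
    px, py = softmax(hx), softmax(hy)
    x = float(np.dot(px, np.arange(len(hx))))
    y = float(np.dot(py, np.arange(len(hy))))
    return x, y

W, H = 64, 64
hx = np.zeros(W); hx[20] = 10.0   # sharp peak at x = 20
hy = np.zeros(H); hy[33] = 10.0   # sharp peak at y = 33
x, y = decode_1d_heatmaps(hx, hy)
print(round(x), round(y))          # 20 33
print(W + H, "vs", W * H)          # storage per landmark: 128 vs 4096
```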

Multiple Face Analyses through Adversarial Learning

no code implementations · 18 Nov 2019 · Shangfei Wang, Shi Yin, Longfei Hao, Guang Liang

Through a multi-task learning mechanism, the recognition network explores the dependencies among multiple face analysis tasks, such as facial landmark localization, head pose estimation, gender recognition, and face attribute estimation, at the image representation level.

Attribute · Face Alignment +2

Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey

no code implementations · 3 Oct 2019 · Sicheng Zhao, Shangfei Wang, Mohammad Soleymani, Dhiraj Joshi, Qiang Ji

Affective computing (AC) of these data can help to understand human behaviors and enable wide applications.

KDSL: a Knowledge-Driven Supervised Learning Framework for Word Sense Disambiguation

no code implementations · 28 Aug 2018 · Shi Yin, Yi Zhou, Chenguang Li, Shangfei Wang, Jianmin Ji, Xiaoping Chen, Ruili Wang

We propose KDSL, a new word sense disambiguation (WSD) framework that utilizes knowledge to automatically generate sense-labeled data for supervised learning.

Word Sense Disambiguation

Weakly Supervised Facial Action Unit Recognition Through Adversarial Training

no code implementations · CVPR 2018 · Guozhu Peng, Shangfei Wang

Then we propose a weakly supervised AU recognition method via an adversarial process, in which we simultaneously train two models: a recognition model R, which learns AU classifiers, and a discrimination model D, which estimates the probability that AU labels were generated from domain knowledge rather than recognized by R. The training procedure for R maximizes the probability of D making a mistake.

Facial Action Unit Detection
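The adversarial objective described above follows the standard GAN pattern: D pushes its score toward 1 on knowledge-derived labels and toward 0 on R's recognized labels, while R is trained to make D score its labels as 1. A minimal numpy sketch of those two loss terms (the general GAN formulation, with illustrative scores, not the paper's exact losses):

```python
import numpy as np

def d_loss(d_on_knowledge, d_on_recognized, eps=1e-7):
    # Discriminator objective: D(y_knowledge) -> 1, D(y_recognized) -> 0.
    a = np.clip(d_on_knowledge, eps, 1 - eps)
    b = np.clip(d_on_recognized, eps, 1 - eps)
    return float(-(np.log(a) + np.log(1 - b)).mean())

def r_adv_loss(d_on_recognized, eps=1e-7):
    # R's adversarial term: make D mistake recognized labels for
    # knowledge-derived ones, i.e. push D(y_recognized) -> 1.
    b = np.clip(d_on_recognized, eps, 1 - eps)
    return float(-np.log(b).mean())

# A confident, correct D gives itself a small loss but penalizes R heavily.
print(d_loss(np.array([0.95]), np.array([0.05])))   # small (~0.1)
print(r_adv_loss(np.array([0.05])))                 # large (~3.0)
```

Alternating gradient steps on these two losses drives R's label distribution toward the one implied by domain knowledge.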

Deep Facial Action Unit Recognition From Partially Labeled Data

no code implementations · ICCV 2017 · Shan Wu, Shangfei Wang, Bowen Pan, Qiang Ji

To address this, we propose a deep facial action unit recognition approach learning from partially AU-labeled data.

Facial Action Unit Detection
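A standard way to learn from partially AU-labeled data is to mask the loss so that only annotated entries contribute; the abstract does not detail the paper's mechanism, so the numpy sketch below shows this common masking scheme with illustrative values only:

```python
import numpy as np

def masked_bce(pred, label, mask, eps=1e-7):
    # Binary cross-entropy over AUs, averaged over labeled entries only.
    # pred:  predicted AU probabilities, shape (n_samples, n_aus)
    # label: 0/1 ground truth (value is ignored where unlabeled)
    # mask:  1 where an AU label is present, 0 where it is missing
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(label * np.log(pred) + (1 - label) * np.log(1 - pred))
    return float((loss * mask).sum() / mask.sum())

pred  = np.array([[0.9, 0.2], [0.1, 0.8]])
label = np.array([[1,   0  ], [0,   1  ]])
mask  = np.array([[1,   1  ], [1,   0  ]])  # last AU label is missing
print(masked_bce(pred, label, mask))
```

Unlabeled entries receive no gradient, so the model can still train on images where only some AUs were annotated.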

A Multimodal Deep Regression Bayesian Network for Affective Video Content Analyses

no code implementations · ICCV 2017 · Quan Gan, Shangfei Wang, Longfei Hao, Qiang Ji

After that, a joint representation is extracted from the top layers of the two deep networks, and thus captures the high order dependencies between visual modality and audio modality.


Learning with Privileged Information for Multi-Label Classification

no code implementations · 29 Mar 2017 · Shiyu Chen, Shangfei Wang, Tanfang Chen, Xiaoxiao Shi

In this paper, we propose a novel approach for learning multi-label classifiers with the help of privileged information.

Action Unit Detection · Classification +5
