Search Results for author: Lijun Yin

Found 18 papers, 1 paper with code

Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding

no code implementations ICCV 2023 Xiang Zhang, Taoyue Wang, Xiaotian Li, Huiyuan Yang, Lijun Yin

This is because such pairs inevitably encode the subject-ID information, and the randomly constructed pairs may push similar facial images away due to the limited number of subjects in facial behavior datasets.

Contrastive Learning, Facial Expression Recognition
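The failure mode described in the entry above, randomly constructed pairs from a small subject pool turning into false negatives, is easiest to see in a plain contrastive loss. The sketch below is a generic NT-Xent formulation in PyTorch, not the paper's text-driven method; the temperature and batch layout are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Generic NT-Xent contrastive loss over a batch of embedding pairs.

    z1, z2: (N, D) embeddings of two views of the same N images. Every other
    image in the batch is treated as a negative -- exactly where same-subject
    images become spurious negatives when a dataset has few subjects.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, D)
    sim = z @ z.t() / temperature                   # (2N, 2N) cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))      # drop self-similarity
    # the positive for sample i is its other view at index i+n (or i-n)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# usage with random embeddings
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```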

A Transformer-based Deep Learning Algorithm to Auto-record Undocumented Clinical One-Lung Ventilation Events

no code implementations16 Feb 2023 Zhihua Li, Alexander Nagrebetsky, Sylvia Ranjeva, Nan Bi, Dianbo Liu, Marcos F. Vidal Melo, Timothy Houle, Lijun Yin, Hao Deng

We hypothesized that available intraoperative mechanical ventilation and physiological time-series data combined with other clinical events could be used to accurately predict missing start and end times of OLV.

Time Series, Time Series Analysis
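As a rough illustration of the setup described in the entry above (not the authors' model, whose architecture and input features are not given here), the sketch below runs a small Transformer encoder over a window of intraoperative time-series channels and predicts a per-timestep OLV on/off probability, from which start and end times could be read off. The channel count, window length, and output head are assumptions.

```python
import torch
import torch.nn as nn

class OLVTagger(nn.Module):
    """Per-timestep binary tagger over multichannel intraoperative time series.

    Assumed input shape: (batch, time, channels). Start/end of a one-lung
    ventilation (OLV) episode can be recovered from the predicted 0/1 sequence.
    """
    def __init__(self, n_channels=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):
        h = self.encoder(self.proj(x))                    # (batch, time, d_model)
        return torch.sigmoid(self.head(h)).squeeze(-1)    # (batch, time) OLV prob.

model = OLVTagger()
probs = model(torch.randn(2, 300, 16))    # two 300-step windows of 16 channels
olv_mask = probs > 0.5                    # threshold to per-timestep on/off
```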

Knowledge-Spreader: Learning Semi-Supervised Facial Action Dynamics by Consistifying Knowledge Granularity

no code implementations ICCV 2023 Xiaotian Li, Xiang Zhang, Taoyue Wang, Lijun Yin

By formulating SSL as a Progressive Knowledge Distillation (PKD) problem, we aim to infer cross-domain information, specifically from spatial to temporal domains, by consistifying knowledge granularity within Teacher-Students Network.

Knowledge Distillation
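A minimal sketch of the distillation ingredient named in the entry above: the student's logits are pulled toward a frozen teacher's softened logits with a KL term on top of the supervised loss. This is generic knowledge distillation in PyTorch, not the paper's spatial-to-temporal Knowledge-Spreader; the temperature and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: CE on hard labels + KL to the teacher's soft labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction='batchmean') * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# usage: 12 classes, batch of 4
s, t = torch.randn(4, 12), torch.randn(4, 12)
y = torch.randint(0, 12, (4,))
print(distillation_loss(s, t, y).item())
```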

Knowledge-Spreader: Learning Facial Action Unit Dynamics with Extremely Limited Labels

no code implementations30 Mar 2022 Xiaotian Li, Xiang Zhang, Taoyue Wang, Lijun Yin

Recent studies on the automatic detection of facial action units (AUs) have relied extensively on large-scale annotations.

Out-of-Distribution Generalization

An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis

no code implementations29 Mar 2022 Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin

Emotion is an experience associated with a particular pattern of physiological activity along with different physiological, behavioral and cognitive changes.

Cultural Vocal Bursts Intensity Prediction, EEG, +1

Your "Attention" Deserves Attention: A Self-Diversified Multi-Channel Attention for Facial Action Analysis

no code implementations23 Mar 2022 Xiaotian Li, Zhihua Li, Huiyuan Yang, Geran Zhao, Lijun Yin

In this paper, we propose a compact model to enhance the representational and focusing power of neural attention maps and learn the "inter-attention" correlation for refined attention maps, which we term the "Self-Diversified Multi-Channel Attention Network (SMA-Net)".

Action Analysis, Facial Expression Recognition, +1
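A hedged sketch of the general idea of running several attention channels over one feature map, in the spirit of the entry above; the SMA-Net specifics (the self-diversification mechanism and the inter-attention refinement) are the paper's contribution and are not reproduced here. Channel counts and the simple decorrelation penalty are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiChannelAttention(nn.Module):
    """K parallel spatial attention maps over one CNN feature map.

    A pairwise-similarity penalty on the maps stands in for the
    'diversification' idea; the real SMA-Net refinement is more involved.
    """
    def __init__(self, in_channels=256, n_maps=4):
        super().__init__()
        self.att = nn.Conv2d(in_channels, n_maps, kernel_size=1)

    def forward(self, feat):                     # feat: (B, C, H, W)
        maps = torch.sigmoid(self.att(feat))     # (B, K, H, W) attention maps
        attended = feat.unsqueeze(1) * maps.unsqueeze(2)   # (B, K, C, H, W)
        pooled = attended.mean(dim=(3, 4))       # (B, K, C) per-map descriptors
        flat = F.normalize(maps.flatten(2), dim=2)          # (B, K, H*W)
        sim = flat @ flat.transpose(1, 2)        # (B, K, K) map-to-map similarity
        eye = torch.eye(sim.size(1), device=sim.device)
        diversity_penalty = ((sim - eye) ** 2).mean()       # push maps apart
        return pooled, diversity_penalty

mca = MultiChannelAttention()
pooled, div = mca(torch.randn(2, 256, 14, 14))
```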

Multi-Modal Learning for AU Detection Based on Multi-Head Fused Transformers

no code implementations22 Mar 2022 Xiang Zhang, Lijun Yin

In this paper, we propose a novel end-to-end Multi-Head Fused Transformer (MFT) method for AU detection, which learns AU feature representations from each modality with a transformer encoder and fuses the modalities with a separate fusion transformer module.

Action Unit Detection
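A minimal sketch of the two-stage idea the abstract above describes: one transformer encoder per modality, then a fusion transformer over the concatenated token streams, ending in a multi-label AU head. Token shapes, embedding sizes, and the AU count are assumptions; this is not the authors' MFT implementation.

```python
import torch
import torch.nn as nn

def make_encoder(d_model=128, n_layers=2):
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, n_layers)

class FusedAUDetector(nn.Module):
    """Per-modality transformer encoders followed by a fusion transformer."""
    def __init__(self, in_dims=(128, 64), d_model=128, n_aus=12):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in in_dims])
        self.encoders = nn.ModuleList([make_encoder(d_model) for _ in in_dims])
        self.fusion = make_encoder(d_model)
        self.head = nn.Linear(d_model, n_aus)

    def forward(self, modalities):
        # modalities: list of (B, T_i, in_dims[i]) token sequences
        tokens = [enc(proj(x)) for x, proj, enc
                  in zip(modalities, self.proj, self.encoders)]
        fused = self.fusion(torch.cat(tokens, dim=1))        # (B, sum T_i, d_model)
        return torch.sigmoid(self.head(fused.mean(dim=1)))   # (B, n_aus) AU probabilities

model = FusedAUDetector()
visual = torch.randn(2, 49, 128)      # e.g. a 7x7 grid of visual tokens
thermal = torch.randn(2, 49, 64)      # e.g. tokens from a second modality
au_probs = model([visual, thermal])
```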

The First Vision For Vitals (V4V) Challenge for Non-Contact Video-Based Physiological Estimation

1 code implementation22 Sep 2021 Ambareesh Revanur, Zhihua Li, Umur A. Ciftci, Lijun Yin, Laszlo A. Jeni

Telehealth has the potential to offset the high demand for help during public health emergencies, such as the COVID-19 pandemic.

Exploiting Semantic Embedding and Visual Feature for Facial Action Unit Detection

no code implementations CVPR 2021 Huiyuan Yang, Lijun Yin, Yi Zhou, Jiuxiang Gu

The learned AU semantic embeddings are then used as guidance for the generation of attention maps through a cross-modality attention network.

Action Unit Detection, Facial Action Unit Detection, +1
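A rough sketch of using learned AU semantic embeddings to produce spatial attention over visual feature maps, in the spirit of the cross-modality attention described above; the embedding dimensionality and the dot-product attention form are assumptions rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class SemanticGuidedAttention(nn.Module):
    """AU semantic embeddings attend over spatial visual features.

    visual: (B, C, H, W) CNN feature map; au_embed: (n_aus, D) learned embeddings.
    Returns per-AU attention maps and per-AU pooled visual features.
    """
    def __init__(self, visual_dim=256, embed_dim=300, n_aus=12):
        super().__init__()
        self.au_embed = nn.Parameter(torch.randn(n_aus, embed_dim))
        self.to_key = nn.Conv2d(visual_dim, embed_dim, kernel_size=1)

    def forward(self, visual):
        B, C, H, W = visual.shape
        keys = self.to_key(visual).flatten(2)                 # (B, D, H*W)
        scores = torch.einsum('nd,bdk->bnk', self.au_embed, keys)
        attn = scores.softmax(dim=-1)                         # (B, n_aus, H*W)
        feats = torch.einsum('bnk,bck->bnc', attn, visual.flatten(2))
        return attn.view(B, -1, H, W), feats                  # maps, (B, n_aus, C)

sga = SemanticGuidedAttention()
maps, au_feats = sga(torch.randn(2, 256, 7, 7))
```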

How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals

no code implementations26 Aug 2020 Umur Aybars Ciftci, Ilke Demir, Lijun Yin

Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.

Video Generation

Facial Expression Recognition by De-Expression Residue Learning

no code implementations CVPR 2018 Huiyuan Yang, Umur Ciftci, Lijun Yin

We call this procedure de-expression because the expressive information is filtered out by the generative model; however, the expressive information is still recorded in the intermediate layers.

Facial Expression Recognition, Facial Expression Recognition (FER)
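The toy pipeline below mirrors the two-step idea sketched above in broad strokes: a generator maps an expressive face toward a neutral one, and a classifier reads the generator's intermediate activations, a simplified stand-in for learning from the de-expression residue. The layer choices and the residue definition are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DeExpressionNet(nn.Module):
    """Toy encoder-decoder 'de-expression' generator plus a residue classifier.

    The generator would be trained (elsewhere) to output a neutral face; the
    classifier reads the encoder's intermediate features, where expression
    cues remain even after the output is neutralized.
    """
    def __init__(self, n_classes=7):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh())
        self.cls = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):                 # x: (B, 3, H, W) expressive face
        mid = self.enc(x)                 # intermediate features hold expression cues
        neutral = self.dec(mid)           # reconstructed neutral-looking face
        logits = self.cls(mid)            # classify expression from those features
        return neutral, logits

net = DeExpressionNet()
neutral, logits = net(torch.randn(2, 3, 64, 64))
```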

FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

no code implementations14 Feb 2017 Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic

The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit occurrence and intensity under different camera views.

Benchmarking, Facial Action Unit Detection, +4

EAC-Net: A Region-based Deep Enhancing and Cropping Approach for Facial Action Unit Detection

no code implementations9 Feb 2017 Wei Li, Farnaz Abtahi, Zhigang Zhu, Lijun Yin

For the enhancing layers, we designed an attention map based on facial landmark features and applied it to a pretrained neural network to conduct enhanced learning (the E-Net).

Action Unit Detection, Facial Action Unit Detection
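A hedged sketch of the enhancing idea only: build a spatial attention map from facial landmark locations (a Gaussian bump around each landmark) and multiply it into a backbone's feature map. The Gaussian width, backbone, and layer choice are assumptions, and the cropping branch (the C-Net) is omitted.

```python
import torch
from torchvision.models import resnet18

def landmark_attention_map(landmarks, size, sigma=0.1):
    """Attention map from normalized landmark coordinates in [0, 1].

    landmarks: (B, L, 2) as (x, y); returns (B, 1, size, size) in [0, 1].
    """
    ys = torch.linspace(0, 1, size)
    xs = torch.linspace(0, 1, size)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')       # (size, size) grids
    grid = torch.stack([gx, gy], dim=-1)                 # (size, size, 2)
    d2 = ((grid[None, None] - landmarks[:, :, None, None, :]) ** 2).sum(-1)
    return torch.exp(-d2 / (2 * sigma ** 2)).amax(dim=1, keepdim=True)

# backbone truncated after layer2; pretrained weights would be loaded in practice
backbone = resnet18(weights=None)
feat_extractor = torch.nn.Sequential(*list(backbone.children())[:-4])

x = torch.randn(2, 3, 224, 224)
feats = feat_extractor(x)                                # (2, 128, 28, 28)
lms = torch.rand(2, 68, 2)                               # assumed 68 normalized landmarks
att = landmark_attention_map(lms, feats.size(-1))
enhanced = feats * (1 + att)                             # E-Net-style feature enhancement
```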

Self-Adaptive Matrix Completion for Heart Rate Estimation From Face Videos Under Realistic Conditions

no code implementations CVPR 2016 Sergey Tulyakov, Xavier Alameda-Pineda, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn, Nicu Sebe

Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR).

Heart rate estimation, Matrix Completion
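For reference, the sketch below is the most basic rPPG baseline the abstract above alludes to: average the green channel over an already-cropped face region per frame, band-pass to plausible heart-rate frequencies, and take the dominant FFT peak. It is not the paper's self-adaptive matrix completion; the frame rate and band limits are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(face_frames, fps=30.0, low=0.7, high=4.0):
    """Crude rPPG: mean green-channel trace -> band-pass -> FFT peak in BPM.

    face_frames: (T, H, W, 3) frames already cropped to the face.
    The 0.7-4.0 Hz band corresponds to roughly 42-240 beats per minute.
    """
    trace = face_frames[..., 1].mean(axis=(1, 2)).astype(np.float64)  # green mean
    trace -= trace.mean()
    b, a = butter(3, [low, high], btype='band', fs=fps)
    filtered = filtfilt(b, a, trace)
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= low) & (freqs <= high)
    return float(freqs[band][spectrum[band].argmax()] * 60.0)         # BPM

# usage with synthetic frames (10 s of video at 30 fps)
frames = np.random.rand(300, 64, 64, 3)
print(estimate_heart_rate(frames))
```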
