no code implementations • ICCV 2023 • Xiang Zhang, Taoyue Wang, Xiaotian Li, Huiyuan Yang, Lijun Yin
This is because such pairs inevitably encode subject-identity information, and randomly constructed pairs may push similar facial images apart, since facial behavior datasets contain only a limited number of subjects.
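To make the issue concrete, here is a minimal sketch (the pair layout, function name, and masking scheme are illustrative assumptions, not the paper's method) of a contrastive loss that simply excludes same-subject images from the negative set, so images of the same person are never pushed apart:

```python
import torch
import torch.nn.functional as F

def subject_aware_info_nce(z, subject_ids, temperature=0.1):
    """z: (N, D) embeddings where rows (2k, 2k+1) are positive pairs;
    subject_ids: (N,) integer subject labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                  # (N, N) cosine similarities
    n = z.size(0)
    pos_idx = torch.arange(n) ^ 1                  # index of each row's partner
    mask = subject_ids.unsqueeze(0) == subject_ids.unsqueeze(1)
    mask[torch.arange(n), pos_idx] = False         # keep the true positive
    sim = sim.masked_fill(mask, float('-inf'))     # drop self and same-subject negatives
    return F.cross_entropy(sim, pos_idx)
```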
no code implementations • 16 Feb 2023 • Zhihua Li, Alexander Nagrebetsky, Sylvia Ranjeva, Nan Bi, Dianbo Liu, Marcos F. Vidal Melo, Timothy Houle, Lijun Yin, Hao Deng
We hypothesized that available intraoperative mechanical ventilation and physiological time-series data, combined with other clinical events, could be used to accurately predict the missing start and end times of one-lung ventilation (OLV).
no code implementations • ICCV 2023 • Xiaotian Li, Taoyue Wang, Geran Zhao, Xiang Zhang, Xi Kang, Lijun Yin
Diverse visual stimuli can evoke various human affective states, which are usually manifested in an individual's muscular actions and facial expressions.
no code implementations • ICCV 2023 • Xiaotian Li, Xiang Zhang, Taoyue Wang, Lijun Yin
By formulating SSL as a Progressive Knowledge Distillation (PKD) problem, we aim to infer cross-domain information, specifically from the spatial to the temporal domain, by making knowledge granularity consistent within the Teacher-Students Network.
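A minimal sketch of the distillation component, assuming a spatial teacher and a temporal student that both emit per-AU logits (the loss form and names are assumptions, and the progressive granularity schedule is omitted):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels=None, alpha=0.5, T=2.0):
    """student_logits, teacher_logits: (B, num_AUs) raw logits; labels: (B, num_AUs) in {0, 1}."""
    soft_t = torch.sigmoid(teacher_logits / T)        # softened teacher targets
    soft_s = torch.sigmoid(student_logits / T)
    distill = F.binary_cross_entropy(soft_s, soft_t.detach())
    if labels is None:                                # unlabeled clip: distill only
        return distill
    sup = F.binary_cross_entropy_with_logits(student_logits, labels)
    return alpha * distill + (1 - alpha) * sup        # mix supervised and distilled terms
```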
no code implementations • ICCV 2023 • Zhihua Li, Lijun Yin
This is followed by a co-rectification technique designed to mitigate the adverse effects of noisy pseudo-labels.
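For intuition, a sketch in the spirit of co-teaching rather than the paper's exact co-rectification (the selection rule and ratio are assumptions): two networks each train only on the peer's small-loss pseudo-labeled samples, which limits how much noisy pseudo-labels can propagate.

```python
import torch
import torch.nn.functional as F

def small_loss_selection(logits, pseudo_labels, keep_ratio=0.7):
    """Return indices of the keep_ratio fraction of samples with the smallest loss."""
    loss = F.cross_entropy(logits, pseudo_labels, reduction='none')
    k = max(1, int(keep_ratio * len(loss)))
    return torch.topk(-loss, k).indices

def co_rectified_step(logits_a, logits_b, pseudo_labels, keep_ratio=0.7):
    idx_for_b = small_loss_selection(logits_a, pseudo_labels, keep_ratio)  # A selects for B
    idx_for_a = small_loss_selection(logits_b, pseudo_labels, keep_ratio)  # B selects for A
    loss_a = F.cross_entropy(logits_a[idx_for_a], pseudo_labels[idx_for_a])
    loss_b = F.cross_entropy(logits_b[idx_for_b], pseudo_labels[idx_for_b])
    return loss_a + loss_b
```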
no code implementations • 25 Sep 2022 • Xiang Zhang, Huiyuan Yang, Taoyue Wang, Xiaotian Li, Lijun Yin
Recent studies have focused on utilizing multi-modal data to develop robust models for facial Action Unit (AU) detection.
no code implementations • 30 Mar 2022 • Xiaotian Li, Xiang Zhang, Taoyue Wang, Lijun Yin
Recent studies on the automatic detection of facial action units (AUs) have relied extensively on large-scale annotations.
no code implementations • 29 Mar 2022 • Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin
Emotion is an experience associated with a particular pattern of physiological activity, along with physiological, behavioral, and cognitive changes.
no code implementations • 23 Mar 2022 • Xiaotian Li, Zhihua Li, Huiyuan Yang, Geran Zhao, Lijun Yin
In this paper, we propose a compact model, termed the Self-Diversified Multi-Channel Attention Network (SMA-Net), which enhances the representational and focusing power of neural attention maps and learns "inter-attention" correlations to produce refined attention maps.
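A loose sketch of the stated idea, not SMA-Net itself (the channel count, sigmoid gating, and diversity penalty are assumptions): several attention channels are learned jointly, and their pairwise correlation is penalized so the maps stay diverse.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiChannelAttention(nn.Module):
    def __init__(self, in_ch=128, n_maps=4):
        super().__init__()
        self.to_maps = nn.Conv2d(in_ch, n_maps, kernel_size=1)

    def forward(self, feat):                         # feat: (B, C, H, W)
        maps = torch.sigmoid(self.to_maps(feat))     # (B, M, H, W) attention maps
        flat = F.normalize(maps.flatten(2), dim=2)   # (B, M, H*W)
        corr = flat @ flat.transpose(1, 2)           # (B, M, M) map-to-map similarity
        eye = torch.eye(corr.size(1), device=corr.device)
        diversity_loss = (corr - eye).pow(2).mean()  # penalize correlated maps
        attended = (maps.unsqueeze(2) * feat.unsqueeze(1)).mean(dim=1)
        return attended, diversity_loss
```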
no code implementations • 22 Mar 2022 • Xiang Zhang, Lijun Yin
In this paper, we propose a novel end-to-end Multi-Head Fused Transformer (MFT) method for AU detection, which learns AU feature representations from different modalities with a transformer encoder and fuses the modalities with a separate fusion transformer module.
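A minimal sketch of that two-stage layout, assuming two token streams such as visible and thermal imagery (the modality choice, depths, and pooling are illustrative assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class FusedTransformerSketch(nn.Module):
    def __init__(self, dim=256, num_aus=12, n_heads=8):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.visual_enc = nn.TransformerEncoder(layer(), num_layers=2)   # per-modality encoders
        self.thermal_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.fusion = nn.TransformerEncoder(layer(), num_layers=2)       # fusion transformer
        self.head = nn.Linear(dim, num_aus)                              # one logit per AU

    def forward(self, vis_tokens, thermal_tokens):   # each: (B, T, dim)
        v = self.visual_enc(vis_tokens)
        t = self.thermal_enc(thermal_tokens)
        fused = self.fusion(torch.cat([v, t], dim=1))  # attend across modalities
        return self.head(fused.mean(dim=1))            # multi-label AU logits
```

Concatenating the per-modality token sequences lets the fusion stage attend across modalities rather than merely averaging their features.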
1 code implementation • 22 Sep 2021 • Ambareesh Revanur, Zhihua Li, Umur A. Ciftci, Lijun Yin, Laszlo A. Jeni
Telehealth has the potential to offset the high demand for help during public health emergencies, such as the COVID-19 pandemic.
no code implementations • CVPR 2021 • Huiyuan Yang, Lijun Yin, Yi Zhou, Jiuxiang Gu
The learned AU semantic embeddings are then used as guidance for the generation of attention maps through a cross-modality attention network.
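An illustrative sketch of semantic-embedding-guided attention (the dot-product scoring and pooling are assumptions, not the paper's exact cross-modality network): each AU's semantic embedding scores the spatial visual features, and the resulting map pools the features used for that AU's prediction.

```python
import torch
import torch.nn.functional as F

def semantic_guided_attention(feat_map, au_embed):
    """feat_map: (B, C, H, W) visual features; au_embed: (C,) one AU's semantic embedding."""
    B, C, H, W = feat_map.shape
    feats = feat_map.flatten(2).transpose(1, 2)       # (B, H*W, C) spatial tokens
    scores = feats @ au_embed / C ** 0.5              # (B, H*W) embedding-feature affinity
    attn = F.softmax(scores, dim=1).unsqueeze(-1)     # (B, H*W, 1) attention map
    pooled = (attn * feats).sum(dim=1)                # (B, C) attended feature per AU
    return pooled, attn.view(B, H, W)
```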
no code implementations • 26 Aug 2020 • Umur Aybars Ciftci, Ilke Demir, Lijun Yin
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
no code implementations • CVPR 2018 • Huiyuan Yang, Umur Ciftci, Lijun Yin
We call this procedure de-expression because the expressive information is filtered out by the generative model; however, the expressive information is still recorded in the intermediate layers.
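A minimal encoder-decoder sketch of the idea (layer sizes and the classifier head are assumptions): the generator is trained to output the neutral face, while the intermediate features, which retain the expressive residue, feed a lightweight expression classifier.

```python
import torch
import torch.nn as nn

class DeExpressionSketch(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
        self.cls = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(128, num_classes))

    def forward(self, x):                # x: (B, 3, H, W) expressive face
        mid = self.enc(x)                # intermediate features carry the residue
        neutral = self.dec(mid)          # trained to reconstruct the neutral face
        return neutral, self.cls(mid)    # expression logits from the residue
```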
Facial Expression Recognition (FER)
no code implementations • 14 Feb 2017 • Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic
The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit occurrence and intensity under different camera views.
no code implementations • 9 Feb 2017 • Wei Li, Farnaz Abtahi, Zhigang Zhu, Lijun Yin
For the enhancing layers, we design an attention map based on facial landmark features and apply it to a pretrained neural network to conduct enhanced learning (the E-Net).
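One simple way to realize such a landmark-driven attention map (the Gaussian form, sigma, and max-combination are assumptions, not necessarily the E-Net's design) is to place a soft Gaussian at each landmark and modulate the features with the result:

```python
import torch

def landmark_attention_map(landmarks, h, w, sigma=8.0):
    """landmarks: (K, 2) pixel coords (x, y). Returns an (h, w) map in [0, 1]."""
    ys = torch.arange(h).view(h, 1).float()
    xs = torch.arange(w).view(1, w).float()
    attn = torch.zeros(h, w)
    for x, y in landmarks:
        g = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        attn = torch.maximum(attn, g)   # union of per-landmark Gaussians
    return attn

# Usage: feat * landmark_attention_map(lms, feat.shape[-2], feat.shape[-1])
```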
no code implementations • CVPR 2016 • Sergey Tulyakov, Xavier Alameda-Pineda, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn, Nicu Sebe
Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR).
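The classic rPPG baseline behind this line of work, shown as a sketch rather than the paper's method: average the green channel over the face region per frame and read the heart rate off the dominant frequency in the physiologically plausible band.

```python
import numpy as np

def estimate_hr(frames, fps=30.0):
    """frames: (T, H, W, 3) RGB face crops. Returns heart rate in beats per minute."""
    signal = frames[..., 1].mean(axis=(1, 2))     # mean green intensity per frame (RGB order assumed)
    signal = signal - signal.mean()               # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)        # ~42-240 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]
```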
no code implementations • CVPR 2016 • Zheng Zhang, Jeff M. Girard, Yue Wu, Xing Zhang, Peng Liu, Umur Ciftci, Shaun Canavan, Michael Reale, Andy Horowitz, Huiyuan Yang, Jeffrey F. Cohn, Qiang Ji, Lijun Yin
The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection.