The threat of 3D masks to face recognition systems is increasingly serious and has drawn wide attention from researchers.
A faster version of PiDiNet with fewer than 0.1M parameters can still achieve performance comparable to state-of-the-art methods at 200 FPS.
We introduce a new dataset for the emotional artificial intelligence research: identity-free video dataset for Micro-Gesture Understanding and Emotion analysis (iMiGUE).
Face anti-spoofing (FAS) has lately attracted increasing attention due to its vital role in securing face recognition systems from presentation attacks (PAs).
The framework is able to capture both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended physiological cues (remote photoplethysmography, rPPG) that are recovered from videos in the auxiliary task.
In this paper, we propose two Cross Central Difference Convolutions (C-CDC), which exploit the difference of the center and surround sparse local features from the horizontal/vertical and diagonal directions, respectively.
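As a rough single-channel sketch (not the authors' implementation), the C-CDC idea of sparse central differences can be illustrated in NumPy: a vanilla response over a sparse subset of the 3x3 neighborhood, minus a theta-weighted central-difference term. The theta value and the "hv"/"diag" mask layout are assumptions for illustration only.

```python
import numpy as np

def cross_cdc_2d(x, w, theta=0.7, mode="hv"):
    """Sketch of a Cross Central Difference Convolution (C-CDC) step.

    Only a sparse subset of the 3x3 neighborhood contributes:
      mode="hv"   -> horizontal/vertical (cross) neighbors + center
      mode="diag" -> diagonal neighbors + center
    For each position: y = sum(w_sparse * patch) - theta * x_center * sum(w_sparse),
    i.e. vanilla aggregation plus a theta-weighted central-difference term.
    x: 2D float array; w: 3x3 kernel; 'valid' padding for simplicity.
    """
    if mode == "hv":
        mask = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
    else:
        mask = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]], dtype=float)
    wm = w * mask  # zero out the neighbors excluded by this direction
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3]
            out[i, j] = np.sum(patch * wm) - theta * patch[1, 1] * np.sum(wm)
    return out
```

With theta = 1 the sparse central-difference term dominates, so constant (texture-free) regions produce zero response, which is the intuition behind using central differences to emphasize fine-grained spoofing cues.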
3D mask face presentation attack detection (PAD) plays a vital role in securing face recognition systems from emergent 3D mask attacks.
no code implementations • 13 Apr 2021 • Ajian Liu, Chenxu Zhao, Zitong Yu, Jun Wan, Anyang Su, Xing Liu, Zichang Tan, Sergio Escalera, Junliang Xing, Yanyan Liang, Guodong Guo, Zhen Lei, Stan Z. Li, Du Zhang
To bridge the gap to real-world applications, we introduce a large-scale High-Fidelity Mask dataset, namely CASIA-SURF HiFiMask (briefly HiFiMask).
The proposed method uses data acquired in the scatter window to reconstruct an initial estimate of the attenuation map using a physics-based approach.
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems from the presentation attacks (PAs).
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems.
Gesture recognition has attracted considerable attention owing to its great potential in applications.
To tackle overfitting when training on small datasets for action recognition tasks, most prior works either rely on a large number of training samples or require pre-trained models transferred from other large datasets.
Remote physiological measurements, e.g., remote photoplethysmography (rPPG) based measurement of heart rate (HR), heart rate variability (HRV), and respiration frequency (RF), are playing increasingly important roles in application scenarios where contact measurement is inconvenient or impossible.
In this paper, we rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception, intending to extract discriminative and robust features for FAS.
Remote photoplethysmography (rPPG), which aims at measuring heart activities without any contact, has great potential in many applications (e.g., remote healthcare).
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems from presentation attacks.
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
Here we propose a novel frame level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns via aggregating both intensity and gradient information.
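To make the intensity-plus-gradient aggregation concrete, a single-channel CDC step can be sketched in NumPy. The theta-weighted combination of a vanilla convolution and a central-difference term simplifies to subtracting theta times the center pixel scaled by the kernel sum; the theta value below is an assumption for illustration, not a recommended setting.

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """Sketch of a Central Difference Convolution (CDC) step.

    Combines vanilla convolution (intensity information) with a
    central-difference term (gradient information). The blended form
    theta * sum(w * (patch - center)) + (1 - theta) * sum(w * patch)
    simplifies to: sum(w * patch) - theta * center * sum(w).
    x: 2D float array; w: 3x3 kernel; 'valid' padding for simplicity.
    """
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3]
            vanilla = np.sum(patch * w)          # intensity aggregation
            center = patch[1, 1]                  # central reference pixel
            out[i, j] = vanilla - theta * center * np.sum(w)
    return out
```

Setting theta = 0 recovers a plain convolution, while theta = 1 yields a pure central-difference operator that responds only to local intensity variation.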
The method includes two parts: 1) a Spatio-Temporal Video Enhancement Network (STVEN) for video enhancement, and 2) an rPPG network (rPPGNet) for rPPG signal recovery.
Recent studies demonstrated that the average heart rate (HR) can be measured from facial videos based on non-contact remote photoplethysmography (rPPG).
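As a minimal sketch of the final step of such pipelines (not any specific paper's method), the average HR can be read off an already-extracted rPPG trace as the dominant FFT peak within a plausible heart-rate band. The band limits, frame rate, and synthetic test trace below are all assumptions for illustration.

```python
import numpy as np

def estimate_hr_fft(signal, fs, min_hz=0.7, max_hz=4.0):
    """Estimate average heart rate (BPM) from a 1-D rPPG trace.

    fs: sampling rate, i.e. the video frame rate in Hz.
    The spectrum is band-limited to a plausible HR range
    (0.7-4.0 Hz ~ 42-240 BPM) and the peak frequency is returned in BPM.
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - np.mean(sig)                       # remove DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)  # frequency bins in Hz
    power = np.abs(np.fft.rfft(sig)) ** 2          # power spectrum
    band = (freqs >= min_hz) & (freqs <= max_hz)   # plausible HR band
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0                          # Hz -> beats per minute

# Hypothetical demo: a noisy 1.2 Hz (72 BPM) pulse sampled at 30 fps.
rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
hr = estimate_hr_fft(trace, fs)
```

Real pipelines add detrending, band-pass filtering, and skin-region averaging before this step, but the FFT-peak readout is a common last stage.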
Therefore, we define face anti-spoofing as a zero- and few-shot learning problem.
Deep part-based methods in recent literature have revealed the great potential of learning local part-level representations for pedestrian images in the task of person re-identification.