Search Results for author: Shengtao Xiao

Found 4 papers, 0 papers with code

Integrated Face Analytics Networks through Cross-Dataset Hybrid Training

no code implementations 16 Nov 2017 Jianshu Li, Shengtao Xiao, Fang Zhao, Jian Zhao, Jianan Li, Jiashi Feng, Shuicheng Yan, Terence Sim

Specifically, iFAN achieves an overall F-score of 91.15% on the Helen dataset for face parsing, a normalized mean error of 5.81% on the MTFL dataset for facial landmark localization and an accuracy of 45.73% on the BNU dataset for emotion recognition with a single model.

Face Alignment Face Parsing +1

Recurrent 3D-2D Dual Learning for Large-Pose Facial Landmark Detection

no code implementations ICCV 2017 Shengtao Xiao, Jiashi Feng, Luoqi Liu, Xuecheng Nie, Wei Wang, Shuicheng Yan, Ashraf Kassim

To address these challenging issues, we introduce a novel recurrent 3D-2D dual learning model that alternately performs 2D-based 3D face model refinement and 3D-to-2D-projection-based 2D landmark refinement, allowing it to reliably reason about self-occluded landmarks, precisely capture subtle landmark displacements, and accurately detect landmarks even in the presence of extremely large poses.

Face Model Facial Landmark Detection
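The alternating 2D/3D refinement described in the abstract can be illustrated with a toy loop: fit a rigid similarity transform of a 3D shape model to the current 2D landmark estimate, re-project, and nudge the estimate toward the projection. This is a minimal sketch under strong simplifying assumptions; `fit_rigid_2d` and `dual_refine` are illustrative names, and the paper's method uses learned recurrent updates, not this closed-form Procrustes-style fit.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points; the standard Procrustes solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    u, s, vt = np.linalg.svd(dc.T @ sc)
    rot = u @ vt                              # optimal 2x2 rotation
    scale = s.sum() / (sc ** 2).sum()         # optimal isotropic scale
    trans = mu_d - scale * mu_s @ rot.T
    return scale, rot, trans

def dual_refine(noisy_2d, shape_3d, n_iters=4, step=0.5):
    """Alternate (a) fitting the 3D shape's frontal x-y plane to the
    current 2D estimate and (b) projecting back to 2D to refine the
    landmarks. Purely illustrative stand-in for the recurrent model."""
    est = noisy_2d.copy()
    for _ in range(n_iters):
        scale, rot, trans = fit_rigid_2d(shape_3d[:, :2], est)  # 2D-based 3D refinement
        proj = scale * shape_3d[:, :2] @ rot.T + trans          # 3D-to-2D projection
        est = (1 - step) * est + step * proj                    # 2D landmark refinement
    return est
```

Because the projection always lies in the rigid model's span, the loop pulls noisy landmark estimates back toward a model-consistent configuration, which is the intuition behind reasoning about self-occluded points.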

Recurrently Target-Attending Tracking

no code implementations CVPR 2016 Zhen Cui, Shengtao Xiao, Jiashi Feng, Shuicheng Yan

The confidence maps produced by the RNNs are employed to adaptively regularize the learning of discriminative correlation filters, suppressing cluttered background noise while making full use of the information from reliable parts.

Visual Tracking
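The idea of a confidence map gating correlation-filter learning can be sketched with a single-channel discriminative correlation filter solved in closed form in the Fourier domain. This is a simplified illustration, not the paper's formulation: here the confidence map simply re-weights the input patch before the standard ridge-regression solution, whereas the paper regularizes the filter learning itself.

```python
import numpy as np

def learn_weighted_dcf(patch, label, confidence, lam=1e-2):
    """Learn a single-channel correlation filter by per-frequency ridge
    regression (standard DCF closed form). The confidence map is an
    illustrative stand-in for the paper's adaptive regularization."""
    x = patch * confidence                              # down-weight cluttered regions
    X, Y = np.fft.fft2(x), np.fft.fft2(label)
    H = (np.conj(X) * Y) / (np.conj(X) * X + lam)       # per-frequency ridge solution
    return H

def respond(patch, H):
    """Correlation response of a new patch under filter H; the target is
    located at the peak of this map."""
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
```

Setting the confidence to zero over distractor regions removes their contribution to the learned filter, which is the mechanism the abstract describes for suppressing background clutter.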

Deep Recurrent Regression for Facial Landmark Detection

no code implementations 30 Oct 2015 Hanjiang Lai, Shengtao Xiao, Yan Pan, Zhen Cui, Jiashi Feng, Chunyan Xu, Jian Yin, Shuicheng Yan

We propose a novel end-to-end deep architecture for face landmark detection, based on a deep convolutional and deconvolutional network followed by carefully designed recurrent network structures.

Facial Landmark Detection regression
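The recurrent refinement stage of such an architecture can be sketched with a hand-rolled GRU-style cell that repeatedly consumes fixed image features (standing in for the conv-deconv response maps) and emits residual landmark-coordinate updates. All names, dimensions, and the random weights below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RecurrentRefiner:
    """Toy GRU-style recurrent regressor: each step updates a hidden
    state from the image features and adds a residual correction to
    the landmark coordinates. Illustrative only."""
    def __init__(self, feat_dim, hidden, n_landmarks, rng):
        self.Wz = rng.normal(scale=0.1, size=(hidden, feat_dim + hidden))
        self.Wr = rng.normal(scale=0.1, size=(hidden, feat_dim + hidden))
        self.Wh = rng.normal(scale=0.1, size=(hidden, feat_dim + hidden))
        self.Wo = rng.normal(scale=0.1, size=(n_landmarks * 2, hidden))

    def forward(self, feat, init_landmarks, n_steps=3):
        h = np.zeros(self.Wz.shape[0])
        coords = init_landmarks.ravel().copy()
        for _ in range(n_steps):
            xh = np.concatenate([feat, h])
            z = sigmoid(self.Wz @ xh)                       # update gate
            r = sigmoid(self.Wr @ xh)                       # reset gate
            cand = np.tanh(self.Wh @ np.concatenate([feat, r * h]))
            h = (1 - z) * h + z * cand                      # GRU state update
            coords = coords + self.Wo @ h                   # residual coordinate update
        return coords.reshape(-1, 2)
```

Unrolling a few such steps lets the model progressively correct its landmark estimate rather than regress it in one shot, which is the role the abstract assigns to the carefully designed recurrent structures.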
