Learning Robust Deep Face Representation

With the development of convolutional neural networks, more and more researchers have focused their attention on the advantages of CNNs for the face recognition task. In this paper, we propose a deep convolutional network for learning a robust face representation. The network is constructed from 4 convolutional layers, 4 max-pooling layers, and 2 fully connected layers, and contains about 4M parameters in total. The Max-Feature-Map activation function is used instead of ReLU: ReLU may discard information through its sparsity, whereas Max-Feature-Map yields compact and discriminative feature vectors. The model is trained on the CASIA-WebFace dataset and evaluated on the LFW dataset, where a single network achieves 97.77% accuracy under the unsupervised setting.
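As a rough illustration of the Max-Feature-Map idea described above, the sketch below implements it as an element-wise maximum over two halves of the channel dimension, followed by one conv + MFM + pooling block. This is a minimal PyTorch sketch under my own assumptions; the layer names, kernel sizes, and channel counts are illustrative and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    """Max-Feature-Map (MFM) activation.

    The preceding convolution emits 2*C feature maps; MFM takes the
    element-wise maximum of the two halves, returning C maps. Unlike ReLU,
    no response is zeroed out, so a compact representation is kept.
    """
    def forward(self, x):
        # Split channels into two halves and take the element-wise max.
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)

class MFMConvBlock(nn.Module):
    """One conv + MFM + max-pool block (hypothetical building unit).

    Kernel size and channel counts here are assumptions for illustration,
    not the architecture reported in the paper.
    """
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # Convolution outputs twice the channels so MFM can halve them.
        self.conv = nn.Conv2d(in_channels, 2 * out_channels,
                              kernel_size, padding=padding)
        self.mfm = MaxFeatureMap()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.mfm(self.conv(x)))

# Example: a single block applied to a batch of grayscale face crops.
block = MFMConvBlock(in_channels=1, out_channels=48)
out = block(torch.randn(8, 1, 128, 128))   # -> shape (8, 48, 64, 64)
```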
