Average Biased ReLU Based CNN Descriptor for Improved Face Retrieval

2 Apr 2018  ·  Shiv Ram Dubey, Soumendu Chakraborty

Convolutional neural networks (CNNs) such as AlexNet, GoogLeNet, and VGGNet extract highly discriminative features for many computer vision problems. A CNN model trained over one dataset performs reasonably well on that dataset, whereas on another dataset of a similar type a hand-designed feature descriptor can outperform the same trained model. The Rectified Linear Unit (ReLU) layer discards all negative values in order to introduce non-linearity. In this paper, it is proposed that the discriminative ability of the deep image representation produced by a trained model can be improved by applying the Average Biased ReLU (AB-ReLU) at the last few layers. AB-ReLU improves the discriminative ability in two ways: 1) it exploits some of the discriminative negative information that ReLU discards, and 2) it neglects some of the irrelevant positive information that ReLU retains. The VGGFace model trained in MatConvNet over the VGG-Face dataset is used as the feature descriptor for face retrieval over other face datasets. The proposed approach is tested in a retrieval framework over six challenging, unconstrained face datasets (PubFig, LFW, PaSC, AR, FERET and ExtYale) as well as a large-scale face dataset (PolyUNIR). It is observed that AB-ReLU outperforms ReLU when used with the pre-trained VGGFace model over these face datasets. The validation error obtained by training the network after replacing all ReLUs with AB-ReLUs is also favorable on each dataset. AB-ReLU even outperforms state-of-the-art activation functions such as Sigmoid, ReLU, Leaky ReLU and Flexible ReLU over all seven face datasets.
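The average-biased thresholding described in the abstract can be sketched as follows. This is an illustrative interpretation only, assuming the bias is taken as the mean of the input activations; it is not necessarily the paper's exact formulation.

```python
import numpy as np

def ab_relu(x):
    """Average Biased ReLU (illustrative sketch).

    Instead of ReLU's fixed threshold at zero, use the average of the
    input activations as the threshold: values above the average pass
    through unchanged, the rest are set to zero. When the average is
    negative, some negative values that ReLU would discard survive;
    when it is positive, some small positive values that ReLU would
    retain are suppressed.
    """
    beta = x.mean()  # average bias computed from the input itself (assumption)
    return np.where(x > beta, x, 0.0)

# Negative mean: the negative activation -0.5 survives, unlike with ReLU.
neg_case = ab_relu(np.array([-0.5, -3.0, 1.0]))
# Positive mean: the small positive activation 0.5 is zeroed out.
pos_case = ab_relu(np.array([0.5, 2.5, 3.0]))
```

In this reading, standard ReLU is recovered whenever the bias is zero, so AB-ReLU can be seen as a data-dependent shift of the ReLU cut-off point.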

