We present a novel framework for recognition that exploits privileged information, which is provided only during the training phase.
In this work, we present a practical approach to the problem of facial landmark detection.
In this paper, we present an attribute-guided deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces.
These approaches usually fail to model domain-specific information which has no representation in the target domain.
This paper describes the Style and Content Disentangled GAN (SC-GAN), a new unsupervised algorithm for training GANs that learns disentangled style and content representations of the data.
The proposed Attribute-Assisted Deep Convolutional Neural Network (AADCNN) method exploits the facial attributes and leverages the loss functions from the facial attribute identification and face verification tasks in order to learn rich discriminative features in a common embedding subspace.
In this paper, a novel cross-device text-independent speaker verification architecture is proposed.
We achieved a rank-10 accuracy of 88.02\% on the IIIT-Delhi latent fingerprint database for the task of latent-to-latent matching and a rank-50 accuracy of 70.89\% on the IIIT-Delhi MOLF database for the task of latent-to-sensor matching.
We propose a coupled deep neural network architecture that leverages relatively large visible and thermal datasets to overcome the problem of overfitting; we then train it on a polarimetric thermal face dataset, the first of its kind.
Elastic distortion of fingerprints has a negative effect on the performance of fingerprint recognition systems.
We propose the use of a coupled 3D Convolutional Neural Network (3D-CNN) architecture that can map both modalities into a representation space to evaluate the correspondence of audio-visual streams using the learned multimodal features.
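The correspondence evaluation described above can be illustrated with a minimal sketch. This is not the paper's actual architecture: the random linear projections stand in for the two 3D-CNN branches, and the feature dimensions (40 audio features, 64 visual features, a 16-dimensional shared space) are arbitrary assumptions for illustration. The key idea shown is that each modality is mapped into a common embedding space, where a similarity score measures audio-visual correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the audio and visual 3D-CNN encoders:
# random linear projections into a shared d-dimensional embedding space.
d = 16
W_audio = rng.standard_normal((40, d))   # assumed 40 audio features
W_visual = rng.standard_normal((64, d))  # assumed 64 visual features

def embed(x, W):
    """Project a feature vector into the shared space and L2-normalise it."""
    z = x @ W
    return z / np.linalg.norm(z)

def correspondence(audio, visual):
    """Cosine similarity between the two modality embeddings (in [-1, 1])."""
    return float(embed(audio, W_audio) @ embed(visual, W_visual))

a = rng.standard_normal(40)              # toy audio feature vector
v = rng.standard_normal(64)              # toy visual feature vector
score = correspondence(a, v)             # higher score = stronger match
```

In a trained system the projections would be learned jointly (e.g. with a contrastive objective) so that matching audio-visual pairs score higher than mismatched ones; here the score merely demonstrates the shared-space comparison.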