Matching Thermal to Visible Face Images Using a Semantic-Guided Generative Adversarial Network

3 Mar 2019  ·  Cunjian Chen, Arun Ross

Designing face recognition systems that are capable of matching face images obtained in the thermal spectrum with those obtained in the visible spectrum is a challenging problem. In this work, we propose the use of a semantic-guided generative adversarial network (SG-GAN) to automatically synthesize visible face images from their thermal counterparts. Specifically, semantic labels, extracted by a face parsing network, are used to compute a semantic loss function that regularizes the adversarial network during training. These semantic cues denote the high-level facial component information associated with each pixel. Further, an identity extraction network is leveraged to generate multi-scale features for computing an identity loss function. To achieve photo-realistic results, a perceptual loss function is introduced during network training to ensure that the synthesized visible face is perceptually similar to the target visible face image. We extensively evaluate the benefit of each individual loss function and combine them to learn the mapping from thermal to visible face images. Experiments on two multispectral face datasets show that the proposed method achieves promising results in both face synthesis and cross-spectral face matching.
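
The abstract describes a generator trained under four objectives: an adversarial loss, a semantic loss from a face parsing network, an identity loss over multi-scale identity features, and a perceptual loss. The page provides no code, so the following is only a minimal PyTorch-style sketch of how such a combined generator loss might be assembled. The networks `face_parser`, `identity_net`, and `vgg_features`, the L1 form of each term, and the loss weights are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, thermal, visible,
                   face_parser, identity_net, vgg_features,
                   w_adv=1.0, w_sem=1.0, w_id=1.0, w_perc=1.0):
    """Sketch of a combined SG-GAN-style generator objective.

    Assumptions (not from the paper): `identity_net` returns a list of
    multi-scale feature maps; `face_parser` and `vgg_features` return
    single feature tensors; all loss terms use L1; weights are placeholders.
    """
    fake_visible = generator(thermal)

    # Adversarial term: the generator tries to make the discriminator
    # label synthesized visible faces as real.
    logits = discriminator(fake_visible)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Semantic term: parsing outputs of the synthesized and target faces
    # should assign each pixel to the same facial component.
    sem = F.l1_loss(face_parser(fake_visible), face_parser(visible))

    # Identity term: multi-scale identity features of the synthesized
    # face should match those of the target visible face.
    id_loss = sum(F.l1_loss(f, g) for f, g in
                  zip(identity_net(fake_visible), identity_net(visible)))

    # Perceptual term: high-level feature similarity encourages
    # photo-realistic synthesis.
    perc = F.l1_loss(vgg_features(fake_visible), vgg_features(visible))

    return w_adv * adv + w_sem * sem + w_id * id_loss + w_perc * perc
```

In practice the relative weights would be tuned per dataset, and the semantic term might instead use a cross-entropy over parsing labels; the abstract does not specify these details.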
