M2M-GAN: Many-to-Many Generative Adversarial Transfer Learning for Person Re-Identification

9 Nov 2018 · Wenqi Liang, Guangcong Wang, Jian-Huang Lai, Junyong Zhu

Cross-domain transfer learning (CDTL) is an extremely challenging task for person re-identification (ReID). Given a source domain with annotations and a target domain without annotations, CDTL seeks an effective method to transfer knowledge from the source domain to the target domain. However, such a simple two-domain transfer learning setting does not hold for person ReID, because the source/target domain itself consists of several sub-domains, e.g., camera-based sub-domains. To address this intractable problem, we propose a novel Many-to-Many Generative Adversarial Transfer Learning method (M2M-GAN) that takes multiple source sub-domains and multiple target sub-domains into consideration and learns every sub-domain-to-sub-domain mapping from the source domain to the target domain in a unified optimization process. The proposed method first translates the image styles of the source sub-domains into those of the target sub-domains, and then performs supervised learning using the transferred images together with their corresponding annotations from the source domain. As the domain gap is reduced, M2M-GAN achieves promising results for cross-domain person ReID. Experimental results on three benchmark datasets, Market-1501, DukeMTMC-reID and MSMT17, show the effectiveness of our M2M-GAN.
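
The abstract describes a two-stage pipeline: sub-domain-aware style translation followed by supervised ReID training on the translated images. Below is a minimal sketch of the first stage only, assuming a single generator conditioned on the target sub-domain (camera) id and a projection-style conditional discriminator; the class names, the `train_step` helper, the toy image sizes, and the camera count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of many-to-many sub-domain style translation:
# one shared generator maps a source image toward a chosen target camera's style, and a
# conditional discriminator judges whether an image looks like that target sub-domain.
import torch
import torch.nn as nn

class SubDomainGenerator(nn.Module):
    """Translates an image conditioned on a target sub-domain (camera) id."""
    def __init__(self, num_target_cams, ch=32):
        super().__init__()
        self.embed = nn.Embedding(num_target_cams, 3)  # broadcast as extra input channels
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, cam_id):
        b, _, h, w = x.shape
        cond = self.embed(cam_id).view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([x, cond], dim=1))

class SubDomainDiscriminator(nn.Module):
    """Scores realism of an image for a given target sub-domain (projection-style)."""
    def __init__(self, num_target_cams, ch=32):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(ch, 1)
        self.proj = nn.Embedding(num_target_cams, ch)

    def forward(self, x, cam_id):
        f = self.feat(x)
        return self.head(f) + (self.proj(cam_id) * f).sum(dim=1, keepdim=True)

def train_step(G, D, opt_g, opt_d, src_imgs, tgt_imgs, tgt_cams,
               bce=nn.BCEWithLogitsLoss()):
    """One adversarial update: source images are translated toward sampled target
    sub-domains; the discriminator compares them with real target-camera images."""
    fake_cams = tgt_cams[torch.randperm(tgt_cams.size(0))]
    fake = G(src_imgs, fake_cams)

    # Discriminator update: real target images vs. translated source images.
    opt_d.zero_grad()
    d_loss = bce(D(tgt_imgs, tgt_cams), torch.ones(tgt_imgs.size(0), 1)) + \
             bce(D(fake.detach(), fake_cams), torch.zeros(fake.size(0), 1))
    d_loss.backward(); opt_d.step()

    # Generator update: translated images should look like the chosen target sub-domain.
    opt_g.zero_grad()
    g_loss = bce(D(fake, fake_cams), torch.ones(fake.size(0), 1))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    num_cams = 6  # e.g. Market-1501 has 6 cameras; the value here is illustrative
    G, D = SubDomainGenerator(num_cams), SubDomainDiscriminator(num_cams)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    src = torch.rand(4, 3, 64, 32) * 2 - 1   # toy source-domain batch
    tgt = torch.rand(4, 3, 64, 32) * 2 - 1   # toy target-domain batch
    cams = torch.randint(0, num_cams, (4,))  # target sub-domain (camera) labels
    print(train_step(G, D, opt_g, opt_d, src, tgt, cams))
```

Conditioning one shared generator on the target camera id is one plausible way to keep the number of networks fixed as the number of source/target sub-domain pairs grows; in the second stage described by the abstract, a standard ReID classifier would then be trained on the translated images using the identity labels carried over from the source domain.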
