no code implementations • 21 Aug 2018 • Mingkuan Yuan, Yuxin Peng
To address these problems, we exploit the excellent capability of generic discriminative models (e.g., VGG19), which can guide the training of a new generative model at multiple levels to bridge the two gaps.
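The multi-level guidance described above can be illustrated by a feature-matching loss summed over several network depths. This is a minimal sketch: the random arrays below are hypothetical stand-ins for VGG19 intermediate activations, not the paper's actual training objective.

```python
import numpy as np

def multilevel_feature_loss(real_feats, fake_feats):
    """Mean-squared distance between real and generated feature maps,
    summed over several network levels (stand-in for comparing VGG19
    intermediate activations of real vs. generated images)."""
    return sum(float(np.mean((r - f) ** 2))
               for r, f in zip(real_feats, fake_feats))

rng = np.random.default_rng(0)
# Stand-ins for discriminative features at three depths (C, H, W vary).
real = [rng.normal(size=s) for s in [(64, 32, 32), (128, 16, 16), (256, 8, 8)]]
fake = [r + 0.1 * rng.normal(size=r.shape) for r in real]
loss = multilevel_feature_loss(real, fake)  # small but positive here
```

Matching features at several depths, rather than only at the output, is what lets a fixed discriminative network guide the generator "on multiple levels".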
no code implementations • 7 Feb 2018 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
(2) They ignore the rich information contained in the large amount of unlabeled data across different modalities, especially the marginal examples that are easily retrieved incorrectly, which could help to model the correlations.
no code implementations • 1 Dec 2017 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
To address the above problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data.
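Cross-modal hashing of this kind ultimately maps features from each modality into a common binary code space, where retrieval is ranked by Hamming distance. The sketch below uses simple sign hashing with random projections as a generic stand-in for UGACH's learned hash functions; the feature dimensions and projections are illustrative assumptions.

```python
import numpy as np

def hash_codes(features, projection):
    """Binarize projected features into {0,1} codes (sign hashing,
    a generic stand-in for a learned cross-modal hash function)."""
    return (features @ projection > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
# Hypothetical per-modality projections into a shared 32-bit code space.
P_img, P_txt = rng.normal(size=(512, 32)), rng.normal(size=(300, 32))
img_feat = rng.normal(size=512)   # e.g. a CNN image feature
txt_feat = rng.normal(size=300)   # e.g. a bag-of-words text feature
img_code = hash_codes(img_feat, P_img)
txt_code = hash_codes(txt_feat, P_txt)
dist = hamming(img_code, txt_code)  # retrieval ranks candidates by this
```

In the unsupervised setting, the GAN's role is to learn projections so that codes of correlated image-text pairs end up close in Hamming space without any labels.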
no code implementations • 8 Aug 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan
Transfer learning relieves the problem of insufficient training data, but it mainly focuses on knowledge transfer from a large-scale single-modal source domain to a single-modal target domain.
no code implementations • 1 Jun 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan
Knowledge in the source domain cannot be directly transferred to the two different modalities in the target domain, and the inherent cross-modal correlation in the target domain provides key hints for cross-modal retrieval, which should be preserved during the transfer process.
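Preserving cross-modal correlation during transfer is commonly expressed as a penalty on the distance between the two modalities' embeddings of the same sample. This is an illustrative formulation under that assumption, not the paper's exact objective.

```python
import numpy as np

def correlation_loss(img_emb, txt_emb):
    """Mean squared distance between image and text embeddings of the
    same samples; minimizing it keeps paired modalities close in the
    shared space while other transfer losses adapt the networks."""
    return float(np.mean(np.sum((img_emb - txt_emb) ** 2, axis=1)))

rng = np.random.default_rng(2)
img = rng.normal(size=(4, 16))          # 4 samples, 16-dim embeddings
txt = img + 0.05 * rng.normal(size=(4, 16))  # slightly perturbed pairs
loss = correlation_loss(img, txt)
```

A term like this would be added to the overall transfer objective so that source-domain knowledge is adapted without destroying the target domain's image-text correlation.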