Search Results for author: Mingkuan Yuan

Found 5 papers, 0 papers with code

Text-to-image Synthesis via Symmetrical Distillation Networks

no code implementations • 21 Aug 2018 • Mingkuan Yuan, Yuxin Peng

To address these problems, we exploit the strong capability of generic discriminative models (e.g., VGG19), which can guide the training of a new generative model at multiple levels to bridge the two gaps (a minimal sketch of this multi-level guidance follows this entry).

Image Generation
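
Since the paper lists no code, here is a minimal sketch of the multi-level guidance idea: a frozen, pretrained VGG19 compares generated and real images at several feature levels, in the spirit of a perceptual distillation loss. The layer indices, the L1 feature loss, and the omission of ImageNet normalization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class MultiLevelVGGLoss(nn.Module):
    """Matches generated and real images at several VGG19 feature levels."""
    def __init__(self, layer_ids=(3, 8, 17, 26)):  # relu1_2, relu2_2, relu3_4, relu4_4 (assumed choice)
        super().__init__()
        features = vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in features.parameters():
            p.requires_grad_(False)        # the discriminative "teacher" stays frozen
        self.features = features
        self.layer_ids = set(layer_ids)

    def forward(self, fake, real):
        loss, x, y = 0.0, fake, real
        for i, layer in enumerate(self.features):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + nn.functional.l1_loss(x, y)  # per-level feature matching
            if i == max(self.layer_ids):
                break
        return loss

# Usage: add this multi-level loss to a generator's training objective.
fake = torch.randn(2, 3, 224, 224, requires_grad=True)  # generator output (placeholder)
real = torch.randn(2, 3, 224, 224)
loss = MultiLevelVGGLoss()(fake, real)
```

Because the teacher is frozen, only the generator receives gradients, which is what lets a generic discriminative model steer generative training.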

SCH-GAN: Semi-supervised Cross-modal Hashing by Generative Adversarial Network

no code implementations • 7 Feb 2018 • Jian Zhang, Yuxin Peng, Mingkuan Yuan

(2) They ignore the rich information contained in the large amount of unlabeled data across different modalities, especially the margin examples that are easily retrieved incorrectly, which can help to model the correlations (a sketch of the underlying hashing objective follows this entry).

Generative Adversarial Network · Retrieval
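
Since no implementation is listed, the sketch below illustrates the kind of cross-modal hashing objective the paper's discriminative model optimizes: image and text features are mapped to relaxed binary codes and trained with a margin-based triplet ranking loss. Feature dimensions, code length, and margin are assumptions, and the GAN that selects hard unlabeled margin examples is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    """Maps modality-specific features to K-bit relaxed hash codes in (-1, 1)."""
    def __init__(self, in_dim, bits=64):
        super().__init__()
        self.fc = nn.Linear(in_dim, bits)

    def forward(self, x):
        return torch.tanh(self.fc(x))  # differentiable relaxation of sign()

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Pull the cross-modal positive closer to the anchor than the negative."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(margin + d_pos - d_neg).mean()

image_head, text_head = HashHead(4096), HashHead(300)       # assumed feature sizes
img = torch.randn(8, 4096)                                  # e.g. CNN image features
txt_pos, txt_neg = torch.randn(8, 300), torch.randn(8, 300) # paired / mismatched text
loss = triplet_ranking_loss(image_head(img), text_head(txt_pos), text_head(txt_neg))
```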

Unsupervised Generative Adversarial Cross-modal Hashing

no code implementations • 1 Dec 2017 • Jian Zhang, Yuxin Peng, Mingkuan Yuan

To address the above problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data (a sketch of this manifold-graph ingredient follows this entry).

Cross-Modal Retrieval · Generative Adversarial Network · +2
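
As a hedged illustration of the "underlying manifold structure" the abstract refers to, the sketch below builds a k-nearest-neighbour graph over unlabeled features, from which correlated cross-modal pairs could be sampled. The value of k and the feature space are assumptions; the adversarial generator/discriminator game itself is not shown.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_manifold_pairs(features, k=5):
    """Link each item to its k nearest neighbours, approximating the data manifold."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)      # column 0 is the query point itself
    return [(i, j) for i, row in enumerate(idx) for j in row[1:]]

image_feats = np.random.randn(100, 512)       # stand-in for unlabeled image features
pairs = knn_manifold_pairs(image_feats, k=5)  # candidates a generator could sample from
```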

MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal Retrieval

no code implementations • 8 Aug 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan

Transfer learning aims to relieve the problem of insufficient training data, but existing work mainly focuses on transferring knowledge from large-scale datasets as a single-modal source domain to a single-modal target domain (a sketch of the modal-adversarial idea follows this entry).

Cross-Modal Retrieval · Representation Learning · +2
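
Given the abstract's modal-adversarial framing, here is a minimal sketch of the generic technique: a gradient-reversal layer feeds a modality discriminator, so the encoders learn representations the discriminator cannot tell apart. Layer sizes are assumptions, and this shows the general gradient-reversal pattern rather than MHTN's exact architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # reversed gradients reach the encoder

class ModalityDiscriminator(nn.Module):
    """Predicts whether a shared representation came from the image or text branch."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, z, lambd=1.0):
        return self.net(GradReverse.apply(z, lambd))

z = torch.randn(8, 256, requires_grad=True)  # shared representation (placeholder)
logits = ModalityDiscriminator()(z)          # train with cross-entropy on modality labels
```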

Cross-modal Common Representation Learning by Hybrid Transfer Network

no code implementations • 1 Jun 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan

Knowledge in the source domain cannot be directly transferred to the two different modalities in the target domain, and the inherent cross-modal correlation in the target domain provides key hints for cross-modal retrieval, which should be preserved during the transfer process (a sketch follows this entry).

Cross-Modal Retrieval · Representation Learning · +1
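
A hedged sketch of the two constraints the excerpt describes: single-modal knowledge is transferred from a frozen source network to the image branch, while paired image and text representations are pulled together to preserve the target domain's cross-modal correlation. The encoders, the stand-in source network, and the loss weighting are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_enc = nn.Linear(4096, 256)              # target-domain image branch (assumed)
text_enc = nn.Linear(300, 256)                # target-domain text branch (assumed)
source_enc = nn.Linear(4096, 256).eval()      # stand-in for a pretrained single-modal source
for p in source_enc.parameters():
    p.requires_grad_(False)                   # source knowledge is fixed during transfer

img, txt = torch.randn(8, 4096), torch.randn(8, 300)  # paired image/text features
z_img, z_txt = image_enc(img), text_enc(txt)

corr_loss = F.mse_loss(z_img, z_txt)                # preserve the cross-modal pairing
transfer_loss = F.mse_loss(z_img, source_enc(img))  # transfer single-modal knowledge
loss = corr_loss + 0.5 * transfer_loss              # weighting is an assumption
```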
