Reconstructing Perceived Images from Brain Activity by Visually-guided Cognitive Representation and Adversarial Learning

27 Jun 2019  ·  Ziqi Ren, Jie Li, Xuetong Xue, Xin Li, Fan Yang, Zhicheng Jiao, Xinbo Gao

Reconstructing the perceived visual stimulus (image) solely from human brain activity measured with functional Magnetic Resonance Imaging (fMRI) is a significant and meaningful task in Human-AI collaboration. However, the inconsistent distributions and representations of fMRI signals and visual images create a heterogeneity gap. Moreover, fMRI data are often extremely high-dimensional and contain a great deal of visually irrelevant information. Existing methods generally suffer from these issues, so satisfactory reconstruction remains challenging. In this paper, we show that these challenges can be overcome by learning visually-guided cognitive latent representations from the fMRI signals and inversely decoding them into the image stimuli. The resulting framework, called Dual-Variational Autoencoder/Generative Adversarial Network (D-VAE/GAN), combines the advantages of adversarial representation learning with knowledge distillation. In addition, we introduce a novel three-stage learning approach that enables the (cognitive) encoder to gradually distill useful knowledge from the paired (visual) encoder during training. Extensive experiments on both artificial and natural images demonstrate that our method achieves surprisingly good results and outperforms all other alternatives.
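The abstract describes the architecture only at a high level, so below is a minimal, hypothetical sketch of the dual-encoder idea in PyTorch: a visual encoder and a cognitive (fMRI) encoder map into a shared latent space, a GAN generator decodes that latent back into an image, and a distillation term pulls the cognitive latent toward the frozen visual latent. All layer sizes, loss weights, names (`enc_visual`, `enc_cognitive`, `distillation_loss`), and the exact form of each loss are assumptions for illustration; this is not the authors' implementation or their actual three-stage schedule.

```python
# Hypothetical sketch of the D-VAE/GAN idea from the abstract (PyTorch).
# Dimensions, loss weights, and training schedule are assumed, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128            # assumed latent size
FMRI_DIM = 4466             # assumed (flattened) voxel count
IMG_SIZE = 64               # assumed image resolution (flattened grayscale)

class Encoder(nn.Module):
    """VAE-style encoder mapping an input vector to a Gaussian latent."""
    def __init__(self, in_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, LATENT_DIM)
        self.logvar = nn.Linear(512, LATENT_DIM)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar

class Generator(nn.Module):
    """Decodes a latent code back into a (flattened) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_SIZE * IMG_SIZE), nn.Tanh())

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """GAN critic distinguishing real stimuli from reconstructions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))

    def forward(self, x):
        return self.net(x)

# Dual encoders: one for the visual stimulus, one for the fMRI recording.
enc_visual = Encoder(IMG_SIZE * IMG_SIZE)
enc_cognitive = Encoder(FMRI_DIM)
gen = Generator()
disc = Discriminator()

def kl(mu, logvar):
    """Standard VAE KL term pushing the latent toward a unit Gaussian."""
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def distillation_loss(img, fmri):
    """Assumed form of the distillation stage: align the cognitive latent with
    the frozen visual latent so fMRI codes inherit visually relevant structure,
    while reconstructing the stimulus and fooling the discriminator."""
    with torch.no_grad():
        _, mu_v, _ = enc_visual(img)            # frozen visual "teacher" latent
    z_c, mu_c, logvar_c = enc_cognitive(fmri)   # cognitive "student" latent
    recon = gen(z_c)
    adv = F.binary_cross_entropy_with_logits(
        disc(recon), torch.ones(recon.size(0), 1))
    return (F.mse_loss(mu_c, mu_v)              # distill visual knowledge
            + F.mse_loss(recon, img)            # reconstruct the stimulus
            + 1e-3 * kl(mu_c, logvar_c)         # keep the latent Gaussian
            + 1e-2 * adv)                       # adversarial realism term

# Smoke test with random tensors standing in for a real paired batch.
img = torch.randn(8, IMG_SIZE * IMG_SIZE)
fmri = torch.randn(8, FMRI_DIM)
print(distillation_loss(img, fmri).item())
```

In this reading, an earlier stage would train `enc_visual`, `gen`, and `disc` as a conventional VAE/GAN on images alone, the stage sketched above would distill that knowledge into `enc_cognitive`, and a final stage would fine-tune the full fMRI-to-image path end to end; the exact staging in the paper may differ.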
