Search Results for author: Qiongyi Zhou

Found 3 papers, 3 papers with code

CLIP-MUSED: CLIP-Guided Multi-Subject Visual Neural Information Semantic Decoding

1 code implementation • 14 Feb 2024 • Qiongyi Zhou, Changde Du, Shengpei Wang, Huiguang He

Although prior multi-subject decoding methods have made significant progress, they still suffer from several limitations, including difficulty in extracting global neural response features, linear scaling of model parameters with the number of subjects, and inadequate characterization of the relationship between neural responses of different subjects to various stimuli.

Representation Learning

MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion

1 code implementation • 8 Aug 2023 • Yizhuo Lu, Changde Du, Qiongyi Zhou, Dianpeng Wang, Huiguang He

In Stage 2, we utilize the CLIP visual feature decoded from fMRI as supervisory information, and continually adjust the two feature vectors decoded in Stage 1 through backpropagation to align the structural information.

Image Reconstruction
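
The Stage 2 step described in the MindDiffuser snippet above is essentially a small gradient loop: the two feature vectors decoded in Stage 1 are treated as free variables and adjusted by backpropagation until the CLIP visual feature of the generated image matches the CLIP feature decoded from fMRI. The sketch below is only illustrative, using toy stand-ins for the image generator, the CLIP visual encoder, the feature shapes, and the supervision target; none of these correspond to the paper's actual modules or data.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real components (a diffusion-based generator and a
# frozen CLIP visual encoder); modules and shapes here are illustrative only.
class ToyGenerator(nn.Module):
    def forward(self, semantic_vec, latent_vec):
        # Pretend the two decoded vectors jointly produce a 3x64x64 image.
        return torch.tanh(latent_vec.view(1, 3, 64, 64) + semantic_vec.mean())

class ToyCLIPVisual(nn.Module):
    def forward(self, img):
        # Toy "CLIP visual feature" of the generated image.
        return img.flatten(1).mean(dim=1, keepdim=True)

generator, clip_visual = ToyGenerator(), ToyCLIPVisual()

# Stage-1 outputs decoded from fMRI: a semantic vector and a structural latent,
# both treated as optimizable tensors in Stage 2 (shapes are made up here).
semantic_vec = torch.randn(1, 512, requires_grad=True)
latent_vec = torch.randn(1, 3 * 64 * 64, requires_grad=True)

# CLIP visual feature decoded directly from fMRI, used as supervisory signal.
clip_target = torch.randn(1, 1)

optimizer = torch.optim.Adam([semantic_vec, latent_vec], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    img = generator(semantic_vec, latent_vec)
    loss = nn.functional.mse_loss(clip_visual(img), clip_target)
    loss.backward()   # gradients flow back into the two decoded vectors
    optimizer.step()
```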

Multimodal foundation models are better simulators of the human brain

1 code implementation • 17 Aug 2022 • Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen

Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.
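
A neural-encoding comparison of the kind mentioned in the last abstract is commonly operationalized by fitting a linear encoding model from an encoder's stimulus features to measured fMRI responses and scoring held-out prediction accuracy per voxel; the encoder whose features predict brain activity better is the "more brain-like" one under that metric. The snippet below is a generic sketch of this idea with synthetic data and ridge regression, not the paper's actual pipeline, features, or dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: features from two hypothetical encoders for the same
# 500 stimuli, plus fMRI responses for 100 voxels (all randomly generated).
n_stim, n_voxels = 500, 100
feat_multimodal = rng.standard_normal((n_stim, 256))
feat_unimodal = rng.standard_normal((n_stim, 256))
fmri = rng.standard_normal((n_stim, n_voxels))

def encoding_score(features, responses):
    """Mean held-out voxel-wise correlation of a ridge encoding model."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, responses, test_size=0.2, random_state=0)
    pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
    corrs = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1]
             for v in range(responses.shape[1])]
    return float(np.mean(corrs))

# Higher held-out prediction correlation = "more brain-like" features.
print("multimodal encoder:", encoding_score(feat_multimodal, fmri))
print("unimodal encoder:  ", encoding_score(feat_unimodal, fmri))
```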
