Search Results for author: Changde Du

Found 10 papers, 7 papers with code

CLIP-MUSED: CLIP-Guided Multi-Subject Visual Neural Information Semantic Decoding

1 code implementation • 14 Feb 2024 • Qiongyi Zhou, Changde Du, Shengpei Wang, Huiguang He

Although prior multi-subject decoding methods have made significant progress, they still suffer from several limitations: difficulty in extracting global neural response features, model parameters that scale linearly with the number of subjects, and inadequate characterization of the relationship between different subjects' neural responses to various stimuli.

Representation Learning

MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion

1 code implementation • 8 Aug 2023 • Yizhuo Lu, Changde Du, Qiongyi Zhou, Dianpeng Wang, Huiguang He

In Stage 2, we utilize the CLIP visual feature decoded from fMRI as supervisory information, and continually adjust the two feature vectors decoded in Stage 1 through backpropagation to align the structural information.

Image Reconstruction
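The Stage 2 procedure above, iteratively adjusting the decoded feature vectors by backpropagation so they align with a CLIP feature decoded from fMRI, can be sketched as a toy optimization. Everything here is illustrative: the frozen linear map `W` merely stands in for a fixed feature extractor, and the plain MSE objective, learning rate, and step count are assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # frozen "feature extractor" (stand-in for CLIP)
target = rng.normal(size=8)      # supervisory feature decoded from fMRI (invented)
z = rng.normal(size=4)           # feature vector decoded in Stage 1 (invented)

def loss(z):
    r = W @ z - target
    return float(r @ r)          # squared alignment error

initial = loss(z)
lr = 0.01
for _ in range(200):
    grad = 2.0 * W.T @ (W @ z - target)  # analytic gradient of the MSE objective
    z -= lr * grad               # "continually adjust" z toward the target feature
final = loss(z)
```

In the paper this role is played by autograd through a real network; the point of the sketch is only the loop structure: the extractor stays fixed while the decoded vector itself is the optimization variable.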

MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion

no code implementations • 24 Mar 2023 • Yizhuo Lu, Changde Du, Dianpeng Wang, Huiguang He

In Stage 1, the VQ-VAE latent representations and the CLIP text embeddings decoded from fMRI are put into the image-to-image process of Stable Diffusion, which yields a preliminary image that contains semantic and structural information.

Image Reconstruction

Multi-view Multi-label Fine-grained Emotion Decoding from Human Brain Activity

1 code implementation • 26 Oct 2022 • Kaicheng Fu, Changde Du, Shengpei Wang, Huiguang He

Existing emotion decoding methods still have two main limitations: first, they decode only a single, coarse-grained emotion category from a brain activity pattern, which is inconsistent with the complexity of human emotional expression; second, they ignore the discrepancy in emotion expression between the left and right hemispheres of the human brain.

Multi-Label Classification
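The multi-label, fine-grained decoding described above can be illustrated with a minimal prediction step: each emotion gets an independent sigmoid score, and every label above threshold is emitted, so several emotions can be decoded from one brain activity pattern. The label names, logit values, and 0.5 threshold below are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

labels = ["joy", "fear", "surprise", "sadness"]   # hypothetical fine-grained categories
logits = np.array([2.1, -1.3, 0.4, -3.0])         # invented per-label model outputs
probs = sigmoid(logits)

# Independent thresholding, unlike softmax, allows multiple labels at once.
pred = [label for label, p in zip(labels, probs) if p > 0.5]
# → ["joy", "surprise"]
```

This is the structural difference from single-label decoding: a softmax would force exactly one winning category, whereas per-label sigmoids allow zero, one, or several.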

Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features

2 code implementations • 13 Oct 2022 • Changde Du, Kaicheng Fu, Jinpeng Li, Huiguang He

Finally, we construct three trimodal matching datasets, and the extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models using the combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.

Multimodal foundation models are better simulators of the human brain

1 code implementation • 17 Aug 2022 • Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen

Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.

GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease Localization in X-ray Images

1 code implementation • 14 Jul 2021 • Baolian Qi, Gangming Zhao, Xin Wei, Changde Du, Chengwei Pan, Yizhou Yu, Jinpeng Li

To model the relationship, we propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.

Decision Making
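A minimal sketch of the kind of graph regularization that GREN's name refers to: a graph Laplacian penalty that pulls embeddings of connected images toward each other. The toy adjacency matrix and embeddings below are invented for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy adjacency over 4 image embeddings (hypothetical intra-/inter-image graph).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                      # unnormalized graph Laplacian

Z = rng.normal(size=(4, 3))    # invented embeddings, one row per image

# Laplacian regularizer: tr(Z^T L Z) = 0.5 * sum_ij A_ij * ||z_i - z_j||^2,
# so minimizing it pushes connected embeddings closer together.
reg = float(np.trace(Z.T @ L @ Z))
```

Adding `reg` (scaled by a weight) to a task loss is the standard way such a term enters training; the identity in the comment is what makes the trace form equivalent to a pairwise distance penalty.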

Efficient and Adaptive Kernelization for Nonlinear Max-margin Multi-view Learning

no code implementations • 11 Oct 2019 • Changying Du, Jia He, Changde Du, Fuzhen Zhuang, Qing He, Guoping Long

Existing multi-view learning methods based on kernel functions either require the user to select and tune a single predefined kernel, or must compute and store many Gram matrices to perform multiple kernel learning.

Data Augmentation • Multi-view Learning
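The Gram-matrix cost mentioned in the abstract can be made concrete with a small multiple-kernel-learning sketch: one Gram matrix per predefined kernel, combined with nonnegative weights into a single valid kernel. The RBF bandwidths and the fixed weights are assumptions; in actual multiple kernel learning the weights would be learned, and this sketch is not the paper's proposed method.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 2))    # 5 toy samples with 2 features

def rbf_gram(X, gamma):
    # Pairwise squared distances via broadcasting, then the RBF kernel.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# One Gram matrix per predefined kernel: this per-kernel storage is the
# memory cost the abstract refers to.
grams = [rbf_gram(X, g) for g in (0.1, 1.0, 10.0)]
mu = np.array([0.5, 0.3, 0.2])            # fixed combination weights (assumed)
K = sum(m * G for m, G in zip(mu, grams)) # combined kernel, still symmetric PSD
```

A nonnegative combination of positive semidefinite Gram matrices is itself positive semidefinite, which is why the combined `K` can be used directly in a kernel machine.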

Semi-supervised Bayesian Deep Multi-modal Emotion Recognition

no code implementations • 25 Apr 2017 • Changde Du, Changying Du, Jinpeng Li, Wei-Long Zheng, Bao-liang Lu, Huiguang He

In this paper, we first build a multi-view deep generative model to simulate the generative process of multi-modality emotional data.

Emotion Recognition • Imputation

Sharing deep generative representation for perceived image reconstruction from human brain activity

1 code implementation • 25 Apr 2017 • Changde Du, Changying Du, Huiguang He

Sharing a common latent representation, our joint generative model of external stimulus and brain response is not only "deep" in extracting nonlinear features from visual images, but also powerful in capturing correlations among voxel activities of fMRI recordings.

Bayesian Inference • Image Reconstruction
