Brain Decoding
24 papers with code • 2 benchmarks • 3 datasets
Motor brain decoding is a fundamental task for building motor brain-computer interfaces (BCIs).
Progress in predicting finger movements from brain activity enables restoring motor function and improving the rehabilitation process for patients.
Latest papers
MindBridge: A Cross-Subject Brain Decoding Framework
Currently, brain decoding is confined to a per-subject-per-model paradigm, limiting its applicability to the same individual for whom the decoding model is trained.
A Conversational Brain-Artificial Intelligence Interface
We introduce Brain-Artificial Intelligence Interfaces (BAIs) as a new class of Brain-Computer Interfaces (BCIs).
Brain-Conditional Multimodal Synthesis: A Survey and Taxonomy
This survey comprehensively examines the emerging field of AIGC-based Brain-conditional Multimodal Synthesis, termed AIGC-Brain, to delineate the current landscape and future directions.
Brain-optimized inference improves reconstructions of fMRI brain activity
At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration.
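The iterative search described above can be sketched with a toy stand-in: a fixed encoding model predicts brain activity from an image, and at each iteration a small library of candidates is sampled around the current seed and the best match is kept while the sampling scale shrinks. Everything here (`encode`, the 2-D "images", the Gaussian sampler standing in for the diffusion model) is a hypothetical illustration, not the paper's implementation.

```python
import random

def encode(image):
    # Hypothetical linear "encoding model": image features -> predicted voxel responses.
    return [2.0 * image[0] - image[1], image[0] + 0.5 * image[1]]

def mse(a, b):
    # Mean squared error between predicted and measured activity patterns.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def brain_optimized_search(measured, seed, iters=50, lib_size=16, scale=1.0):
    """Sample a small library of candidates around the current seed, keep the
    one whose predicted activity best matches the measurement, and narrow the
    sampling distribution each iteration (a crude stand-in for conditioning a
    diffusion model on the previous reconstruction)."""
    best = seed
    for _ in range(iters):
        library = [[x + random.gauss(0.0, scale) for x in best]
                   for _ in range(lib_size)]
        library.append(best)  # current best stays in the running, so error never increases
        best = min(library, key=lambda img: mse(encode(img), measured))
        scale *= 0.9  # shrink the library's spread around the seed
    return best

random.seed(0)
target_image = [0.8, -0.3]
measured = encode(target_image)            # simulated fMRI measurement
recon = brain_optimized_search(measured, seed=[0.0, 0.0])
```

The key design point mirrored here is that the measurement only scores candidates; the generator (here, Gaussian jitter) proposes them, so any stochastic image generator could be swapped in.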
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models
Additionally, we incorporated an image reconstruction mechanism based on pre-trained latent diffusion models, which allowed us to generate an estimate of the images which had elicited EEG activity.
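Knowledge distillation, as used above to align the EEG encoder with a stronger image-based teacher, is typically implemented as a KL divergence between temperature-softened output distributions. The sketch below is a generic Hinton-style distillation loss, not this paper's exact objective; the logit values and temperature are illustrative.

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax: higher t flattens the distribution.
    exps = [math.exp(x / t) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on softened distributions, scaled by t^2 so
    gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, t)   # soft targets from the teacher
    q = softmax(student_logits, t)   # student's softened predictions
    return t * t * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs (near) zero loss.
zero = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
gap = distillation_loss([0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
```

Minimizing this loss pushes the EEG-side student toward the teacher's full output distribution rather than only its hard labels, which is what lets the weaker modality inherit structure learned from images.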
Memory Encoding Model
Our ensemble model without memory input (61.4) also ranks 3rd.
JGAT: a joint spatio-temporal graph attention model for brain decoding
However, traditional approaches for integrating FC and SC overlook the dynamical variations, which risk over-generalizing the brain's neural network.
Structural Similarities Between Language Models and Neural Response Measurements
Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases.
Second Sight: Using brain-optimized encoding models to align image distributions with human brain activity
This emphasis belies the fact that there is always a family of images that are equally compatible with any evoked brain activity pattern, and the fact that many image-generators are inherently stochastic and do not by themselves offer a method for selecting the single best reconstruction from among the samples they generate.
Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities
The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder.