Search Results for author: Xihong Wu

Found 14 papers, 2 papers with code

ConvConcatNet: a deep convolutional neural network to reconstruct mel spectrogram from the EEG

no code implementations · 10 Jan 2024 · Xiran Xu, Bo Wang, Yujie Yan, Haolin Zhu, Zechen Zhang, Xihong Wu, Jing Chen

To investigate the processing of speech in the brain, simple linear models are commonly used to establish a relationship between brain signals and speech features.

EEG Task 2
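The linear modelling approach mentioned in the snippet above — relating brain signals to speech features — is commonly implemented as time-lagged ridge regression (a "backward" or stimulus-reconstruction model). Below is a minimal sketch on synthetic data; all sizes, the regularization value, and the synthetic signal itself are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of a linear backward model: reconstruct a 1-D speech feature
# (e.g. an envelope) from multichannel "EEG" via time-lagged ridge
# regression. All data here are synthetic and for illustration only.

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 1000, 8, 5

eeg = rng.standard_normal((n_samples, n_channels))
true_w = rng.standard_normal(n_channels)
envelope = eeg @ true_w + 0.1 * rng.standard_normal(n_samples)

# Lagged design matrix: each row stacks the EEG at lags 0..n_lags-1.
X = np.column_stack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])
X, y = X[n_lags:], envelope[n_lags:]   # drop rows contaminated by wrap-around

# Ridge regression: w = (X^T X + lam*I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Reconstruction quality is typically reported as Pearson correlation.
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]
```

Nonlinear models such as the ConvConcatNet in the title replace this linear map, but the evaluation (correlation between reconstructed and true features) is typically the same.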

A DenseNet-based method for decoding auditory spatial attention with EEG

1 code implementation · 14 Sep 2023 · Xiran Xu, Bo Wang, Yujie Yan, Xihong Wu, Jing Chen

Auditory spatial attention decoding (ASAD) methods are inspired by the lateralization of cortical neural responses during the processing of auditory spatial attention, and show promising performance for the task of auditory attention decoding (AAD) with neural recordings.

EEG
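The lateralization cue that ASAD methods exploit can be illustrated with a toy example: when attention is directed to one side, response power differs between hemispheres, so even a simple hemispheric power-difference feature can recover the attended direction. The channel counts, gain values, and feature below are illustrative assumptions, not the paper's DenseNet method.

```python
import numpy as np

# Toy illustration of lateralization-based decoding: synthetic "EEG"
# gets slightly more power in the hemisphere contralateral to the
# attended side, and a power-difference feature classifies the trials.

rng = np.random.default_rng(0)
n_trials, n_samples, n_chan = 200, 512, 16

labels = rng.integers(0, 2, n_trials)   # 0 = attend left, 1 = attend right
correct = 0
for trial in range(n_trials):
    # Contralateral boost: attend-left boosts the right hemisphere.
    gain_l = 1.2 if labels[trial] == 1 else 1.0
    gain_r = 1.2 if labels[trial] == 0 else 1.0
    left = gain_l * rng.standard_normal((n_samples, n_chan))
    right = gain_r * rng.standard_normal((n_samples, n_chan))
    # Feature: difference in mean log power between hemispheres.
    feat = np.log(left.var(axis=0)).mean() - np.log(right.var(axis=0)).mean()
    pred = 1 if feat > 0 else 0
    correct += int(pred == labels[trial])

accuracy = correct / n_trials
```

Real EEG is far noisier and the effect is frequency- and subject-dependent, which is why learned models such as DenseNets are used instead of a fixed threshold.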

Direct source and early reflections localization using deep deconvolution network under reverberant environment

no code implementations · 10 Oct 2021 · Shan Gao, Xihong Wu, Tianshu Qu

This paper proposes a deconvolution-based network (DCNN) model for DOA estimation of the direct source and early reflections in reverberant scenarios.
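A classical single-pair baseline for the DOA task described above is GCC-PHAT: estimate the inter-microphone time delay from the phase of the whitened cross-power spectrum, then convert the delay to an angle. This is not the paper's deconvolution network, only a hedged sketch of the underlying problem on synthetic anechoic two-microphone data; the sample rate, spacing, and delay are assumed values.

```python
import numpy as np

# GCC-PHAT time-delay estimation between two microphones, followed by
# far-field DOA conversion. Synthetic anechoic data for illustration.

fs = 16000            # sample rate (Hz), assumed
c = 343.0             # speed of sound (m/s)
d = 0.1               # microphone spacing (m), assumed geometry

rng = np.random.default_rng(1)
sig = rng.standard_normal(fs)                       # 1 s noise-like source
true_delay = 3                                      # arrival delay at mic 2 (samples)
mic1 = sig
mic2 = np.concatenate((np.zeros(true_delay), sig[:-true_delay]))

# PHAT weighting: normalise the cross-spectrum so only phase (delay) remains.
n = 2 * len(sig)
S = np.fft.rfft(mic2, n) * np.conj(np.fft.rfft(mic1, n))
cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)
cc = np.concatenate((cc[-(len(sig) - 1):], cc[:len(sig)]))  # lags -(L-1)..L-1

est_delay = int(np.argmax(cc)) - (len(sig) - 1)     # delay in samples
tau = est_delay / fs
doa_deg = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
```

Under reverberation the cross-correlation shows extra peaks from reflections, which is the regime the paper's deconvolution model targets: separating the direct-path peak from early-reflection peaks.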

Auditory Attention Decoding from EEG using Convolutional Recurrent Neural Network

no code implementations · 3 Mar 2021 · Zhen Fu, Bo Wang, Xihong Wu, Jing Chen

In this paper, we proposed a novel convolutional recurrent neural network (CRNN) based regression model and a classification model, and compared them with both the linear model and state-of-the-art DNN models.

Classification EEG +3

Embodied Self-supervised Learning by Coordinated Sampling and Training

no code implementations · 20 Jun 2020 · Yifan Sun, Xihong Wu

The proposed approach works in an analysis-by-synthesis manner to learn an inference network by iteratively sampling and training.

Self-Supervised Learning

Long Short-Term Memory based Convolutional Recurrent Neural Networks for Large Vocabulary Speech Recognition

no code implementations · 11 Oct 2016 · Xiangang Li, Xihong Wu

Long short-term memory (LSTM) recurrent neural networks (RNNs) have been shown to give state-of-the-art performance on many speech recognition tasks, as they are able to learn a dynamically changing contextual window over the entire sequence history.

Speech Recognition
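The "contextual window over the sequence history" that the abstract refers to is carried by the LSTM cell state. A single NumPy cell step makes the mechanism concrete; the weight initialization and sizes below are arbitrary illustrative choices, not the paper's model.

```python
import numpy as np

# One LSTM time step: gates decide what enters the persistent cell
# state c and what is exposed in the hidden state h.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,)."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
    g = np.tanh(g)                                 # candidate values
    c = f * c + i * g                              # update cell state
    h = o * np.tanh(c)                             # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H, T = 3, 4, 10
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):                                 # unroll over a toy sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
```

Because `c` is only rescaled by the forget gate rather than overwritten, information can persist across many steps, which is what gives LSTMs their long-range context compared with vanilla RNNs.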

Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

no code implementations · 16 Oct 2014 · Xiangang Li, Xihong Wu

Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks.

Speech Recognition
