Search Results for author: Takashi Nose

Found 1 paper, 0 papers with code

Multi-stream Attention-based BLSTM with Feature Segmentation for Speech Emotion Recognition

no code implementations · Interspeech 2020 · Yuya Chiba, Takashi Nose, Akinori Ito

One of the model’s weaknesses is that it cannot consider the statistics of speech features, which are known to be effective for speech emotion recognition.

Data Augmentation · Emotional Speech Synthesis · +1
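Since the listing gives only the title and a one-line excerpt, here is a minimal PyTorch sketch of what an attention-based BLSTM classifier for speech emotion recognition can look like. The feature dimension, hidden size, number of emotion classes, and the single-stream structure (rather than the paper's multi-stream design with feature segmentation) are illustrative assumptions, not the authors' configuration.

# Minimal sketch: attention-based BLSTM for speech emotion recognition.
# Assumes per-frame acoustic features (e.g. 40-dim log-Mel); all layer
# sizes are placeholder assumptions, not the paper's settings.
import torch
import torch.nn as nn

class AttentiveBLSTM(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, n_classes=4):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # frame-level attention scores
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (batch, frames, feat_dim)
        h, _ = self.blstm(x)                     # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over frames
        utt = (w * h).sum(dim=1)                 # weighted pooling -> utterance vector
        return self.out(utt)                     # emotion class logits

# Usage: a batch of 8 utterances, 300 frames each, 40-dim features.
model = AttentiveBLSTM()
logits = model(torch.randn(8, 300, 40))          # shape (8, 4)

Attention-weighted pooling over BLSTM outputs yields a fixed-length utterance representation; the multi-stream variant in the paper additionally segments the feature set into separate streams before combining them.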
