SpeechBERT: An Audio-and-text Jointly Learned Language Model for End-to-end Spoken Question Answering

25 Oct 2019 · Yung-Sung Chuang, Chi-Liang Liu, Hung-Yi Lee, Lin-shan Lee

While various end-to-end models for spoken language understanding tasks have been explored recently, this paper is probably the first known attempt at the very difficult task of end-to-end spoken question answering (SQA). Learning from the highly successful BERT model for various text processing tasks, we propose an audio-and-text jointly learned SpeechBERT model...

