A Self-Attention Joint Model for Spoken Language Understanding in Situational Dialog Applications

27 May 2019 · Mengyang Chen, Jin Zeng, Jie Lou

Spoken language understanding (SLU) is a critical component in goal-oriented dialog systems. It typically involves identifying the speaker's intent and extracting semantic slots from user utterances, tasks known as intent detection (ID) and slot filling (SF)...
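The excerpt above only names the two tasks, so the following is a minimal sketch (not the authors' exact architecture) of how a joint model can share a self-attention encoder between intent detection and slot filling; all class names, vocabulary sizes, and layer dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointSelfAttentionSLU(nn.Module):
    """Hypothetical joint ID + SF model with a shared self-attention encoder."""
    def __init__(self, vocab_size=10000, num_intents=22, num_slots=120,
                 d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Self-attention over the utterance tokens, shared by both tasks.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.slot_head = nn.Linear(d_model, num_slots)      # per-token slot tags
        self.intent_head = nn.Linear(d_model, num_intents)  # one intent per utterance

    def forward(self, token_ids):
        x = self.embed(token_ids)                    # (batch, seq_len, d_model)
        attn_out, _ = self.self_attn(x, x, x)        # self-attention: Q = K = V = x
        h = self.norm(x + attn_out)                  # residual connection
        slot_logits = self.slot_head(h)              # (batch, seq_len, num_slots)
        intent_logits = self.intent_head(h.mean(1))  # pooled utterance representation
        return intent_logits, slot_logits

# Usage: one padded batch of token ids.
model = JointSelfAttentionSLU()
tokens = torch.randint(0, 10000, (2, 12))
intent_logits, slot_logits = model(tokens)
print(intent_logits.shape, slot_logits.shape)  # torch.Size([2, 22]) torch.Size([2, 12, 120])
```

The design choice illustrated here is that both task heads read from the same attended representation, so supervision from ID and SF jointly shapes the shared encoder.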

