Spoken Language Understanding
106 papers with code • 5 benchmarks • 13 datasets
Most implemented papers
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
This paper presents the machine learning architecture of the Snips Voice Platform, a software solution to perform Spoken Language Understanding on microprocessors typical of IoT devices.
SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering
Conversational question answering (CQA) is a novel QA task that requires understanding of dialogue context.
SpeechBrain: A General-Purpose Speech Toolkit
SpeechBrain is an open-source and all-in-one speech toolkit.
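As a quick illustration of the toolkit, the sketch below loads one of SpeechBrain's pretrained recipes and transcribes an audio file. The model identifier and file path are only examples, and newer releases expose the same classes under `speechbrain.inference` instead of `speechbrain.pretrained`.

```python
# Minimal sketch: load a pretrained SpeechBrain recipe and transcribe a file.
# The model id and savedir below are examples, not requirements.
from speechbrain.pretrained import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",          # example pretrained model
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",   # local cache directory
)

# Transcribe a local audio file (placeholder path).
print(asr.transcribe_file("example.wav"))
```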
Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding
Conformer has proven to be effective in many speech processing tasks.
Slot-Gated Modeling for Joint Slot Filling and Intent Prediction
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they use independent attention weights for the two tasks.
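The slot-gate idea can be sketched as follows: an utterance-level intent context modulates each token's slot context before slot tagging. This is a minimal PyTorch illustration under that reading, not the authors' implementation; all module and parameter names are hypothetical.

```python
# Hypothetical slot-gate-style fusion: the intent context gates how much each
# token's slot context contributes to the slot-tagging logits.
import torch
import torch.nn as nn


class SlotGate(nn.Module):
    def __init__(self, hidden_dim: int, num_slot_labels: int):
        super().__init__()
        self.intent_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)  # projects intent context
        self.v = nn.Parameter(torch.randn(hidden_dim))                    # gate vector
        self.slot_out = nn.Linear(hidden_dim, num_slot_labels)

    def forward(self, hidden, slot_ctx, intent_ctx):
        # hidden, slot_ctx: (batch, seq_len, hidden_dim); intent_ctx: (batch, hidden_dim)
        gate = torch.tanh(slot_ctx + self.intent_proj(intent_ctx).unsqueeze(1))
        g = (self.v * gate).sum(dim=-1, keepdim=True)      # scalar gate per token
        return self.slot_out(hidden + slot_ctx * g)        # slot logits


# Toy usage with random tensors.
logits = SlotGate(128, 20)(torch.randn(2, 10, 128), torch.randn(2, 10, 128), torch.randn(2, 128))
print(logits.shape)  # torch.Size([2, 10, 20])
```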
Spoken Language Understanding on the Edge
We consider the problem of performing Spoken Language Understanding (SLU) on small devices typical of IoT applications.
A Novel Bi-directional Interrelated Model for Joint Intent Detection and Slot Filling
Jointly modeling the two tasks is becoming a trend in SLU.
A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding
In our framework, we adopt a joint model with Stack-Propagation, which can directly use the intent information as input for slot filling and thus capture intent semantic knowledge.
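A rough PyTorch sketch of that idea, with hypothetical names: predict a token-level intent distribution first, then feed it back as extra input to the slot classifier so slot tagging conditions directly on the intent.

```python
# Hypothetical stack-propagation-style head: token-level intent probabilities
# are concatenated to the encoder states before slot filling.
import torch
import torch.nn as nn


class StackPropagationHead(nn.Module):
    def __init__(self, hidden_dim: int, num_intents: int, num_slot_labels: int):
        super().__init__()
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.slot_head = nn.Linear(hidden_dim + num_intents, num_slot_labels)

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, hidden_dim)
        intent_logits = self.intent_head(encoder_states)            # token-level intents
        intent_probs = intent_logits.softmax(dim=-1)
        slot_input = torch.cat([encoder_states, intent_probs], dim=-1)
        slot_logits = self.slot_head(slot_input)
        # An utterance-level intent can be obtained by voting or averaging over tokens.
        return intent_logits, slot_logits


intents, slots = StackPropagationHead(256, 7, 30)(torch.randn(2, 12, 256))
print(intents.shape, slots.shape)  # torch.Size([2, 12, 7]) torch.Size([2, 12, 30])
```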
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works.
Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models
End-to-end models are an attractive new approach to spoken language understanding (SLU) in which the meaning of an utterance is inferred directly from the raw audio without employing the standard pipeline composed of a separately trained speech recognizer and natural language understanding module.
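A minimal sketch of that end-to-end setup, with hypothetical names and dimensions: an audio encoder consumes acoustic features and a classifier predicts the intent directly, with no intermediate transcript.

```python
# Hypothetical end-to-end SLU classifier: acoustic features in, intent logits out,
# with no separate ASR or NLU module in between.
import torch
import torch.nn as nn


class EndToEndSLU(nn.Module):
    def __init__(self, n_mels: int = 80, hidden_dim: int = 256, num_intents: int = 31):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, features):
        # features: (batch, frames, n_mels) log-mel filterbanks
        states, _ = self.encoder(features)
        pooled = states.mean(dim=1)          # average over time
        return self.classifier(pooled)       # intent logits


logits = EndToEndSLU()(torch.randn(4, 300, 80))
print(logits.shape)  # torch.Size([4, 31])
```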