Slot Filling
129 papers with code • 12 benchmarks • 21 datasets
The goal of Slot Filling is to identify, from a running dialog, the different slots that correspond to the parameters of the user's query. For instance, when a user queries for nearby restaurants, key slots such as location and preferred food must be filled for a dialog system to retrieve the appropriate information. Thus, the main challenge in the slot-filling task is to extract the target entities.
Source: Real-time On-Demand Crowd-powered Entity Extraction
Image credit: Robust Retrieval Augmented Generation for Zero-shot Slot Filling
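The task described above is commonly framed as sequence labeling with BIO tags, where each token is marked as beginning (B-), inside (I-), or outside (O) a slot. The sketch below is a minimal, self-contained illustration of decoding slot values from such tags for the restaurant example; the token/tag data and the `extract_slots` helper are hypothetical, not from any specific benchmark or library.

```python
def extract_slots(tokens, tags):
    """Collect BIO-tagged token spans into a {slot_name: value} dict."""
    slots = {}
    current_slot, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new slot begins; flush any span in progress first.
            if current_slot:
                slots[current_slot] = " ".join(current_tokens)
            current_slot, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_slot == tag[2:]:
            # Continuation of the current slot span.
            current_tokens.append(token)
        else:
            # Outside tag (or mismatched I-): close the open span.
            if current_slot:
                slots[current_slot] = " ".join(current_tokens)
            current_slot, current_tokens = None, []
    if current_slot:
        slots[current_slot] = " ".join(current_tokens)
    return slots

# Hypothetical tagged utterance for a restaurant query.
tokens = ["find", "thai", "restaurants", "near", "downtown", "seattle"]
tags   = ["O", "B-food", "O", "O", "B-location", "I-location"]
print(extract_slots(tokens, tags))
# → {'food': 'thai', 'location': 'downtown seattle'}
```

In practice the tags themselves come from a trained model (e.g. a recurrent or transformer tagger, as in the papers listed below); this sketch only covers the deterministic decoding step.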
Libraries
Use these libraries to find Slot Filling models and implementations.
Most implemented papers
Zero-Shot Relation Extraction via Reading Comprehension
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot.
Position-aware Attention and Supervised Data Improve Slot Filling
The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance.
Slot-Gated Modeling for Joint Slot Filling and Intent Prediction
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they use independent attention weights for the two tasks.
A Novel Bi-directional Interrelated Model for Joint Intent Detection and Slot Filling
Jointly modeling the two tasks is becoming a trend in SLU.
A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding
In our framework, we adopt a joint model with Stack-Propagation that can directly use the intent information as input for slot filling, thereby capturing intent semantic knowledge.
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works.
Extracting a Knowledge Base of COVID-19 Events from Social Media
In this paper, we present a manually annotated corpus of 10,000 tweets containing public reports of five COVID-19 events, including positive and negative tests, deaths, denied access to testing, claimed cures and preventions.
Zero-shot Slot Filling with DPR and RAG
Recently, there has been a promising direction in evaluating language models the same way we would evaluate knowledge bases, and the task of slot filling is particularly well suited to this goal.
From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding
To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer.
Robust Retrieval Augmented Generation for Zero-shot Slot Filling
Automatically inducing high quality knowledge graphs from a given collection of documents still remains a challenging problem in AI.