Slot Filling

129 papers with code • 12 benchmarks • 21 datasets

The goal of Slot Filling is to identify, from a running dialog, the different slots that correspond to parameters of the user's query. For instance, when a user queries for nearby restaurants, a dialog system needs key slots such as location and preferred food to retrieve the appropriate information. The core challenge of slot filling is thus to extract these target entities.
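Slot filling is commonly cast as sequence labeling over the user utterance, with one tag per token. Below is a minimal illustrative sketch, assuming BIO-style tags; the slot names (`price`, `cuisine`, `location`) and the decoding helper are hypothetical, not taken from any specific system.

```python
def extract_slots(tokens, tags):
    """Collect (slot_name, value) pairs from BIO-tagged tokens."""
    slots = []
    current_slot, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new slot mention starts; flush any open one first.
            if current_slot:
                slots.append((current_slot, " ".join(current_tokens)))
            current_slot, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_slot == tag[2:]:
            # Continuation of the current slot mention.
            current_tokens.append(token)
        else:
            # "O" tag (or mismatched "I-"): close the open mention, if any.
            if current_slot:
                slots.append((current_slot, " ".join(current_tokens)))
            current_slot, current_tokens = None, []
    if current_slot:
        slots.append((current_slot, " ".join(current_tokens)))
    return slots

tokens = ["find", "cheap", "thai", "food", "near", "union", "square"]
tags   = ["O", "B-price", "B-cuisine", "O", "O", "B-location", "I-location"]
print(extract_slots(tokens, tags))
# → [('price', 'cheap'), ('cuisine', 'thai'), ('location', 'union square')]
```

In a real system the tags would come from a trained tagger (e.g. a recurrent or transformer model); this helper only shows how tagged spans map back to the slot/value pairs a dialog system consumes.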

Source: Real-time On-Demand Crowd-powered Entity Extraction

Image credit: Robust Retrieval Augmented Generation for Zero-shot Slot Filling

Libraries

Use these libraries to find Slot Filling models and implementations

Most implemented papers

Zero-Shot Relation Extraction via Reading Comprehension

stonybrooknlp/musique CoNLL 2017

We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot.
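The reduction described above can be sketched with question templates. This is a hypothetical illustration of the idea, not the paper's code; the relation names and templates are invented for the example.

```python
# Map each relation slot to one or more natural-language question templates.
TEMPLATES = {
    "per:city_of_birth": "Where was {} born?",
    "org:founded_by": "Who founded {}?",
}

def to_question(relation, entity):
    """Instantiate a relation's question template with the subject entity."""
    return TEMPLATES[relation].format(entity)

print(to_question("per:city_of_birth", "Marie Curie"))
# → Where was Marie Curie born?
```

A reading-comprehension model then answers the instantiated question against a passage, and the answer span (or an abstention) fills the relation slot.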

Position-aware Attention and Supervised Data Improve Slot Filling

yuhaozhang/tacred-relation EMNLP 2017

The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance.

Slot-Gated Modeling for Joint Slot Filling and Intent Prediction

MiuLab/SlotGated-SLU NAACL 2018

Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they use independent attention weights for the two tasks.

A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding

LeePleased/StackPropagation-SLU IJCNLP 2019

In our framework, we adopt a joint model with Stack-Propagation, which can directly use the intent information as input for slot filling and thus capture intent semantic knowledge.

CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding

Adaxry/CM-Net IJCNLP 2019

Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works.

Extracting a Knowledge Base of COVID-19 Events from Social Media

viczong/extract_covid19_events_from_twitter COLING 2022

In this paper, we present a manually annotated corpus of 10,000 tweets containing public reports of five COVID-19 events: positive and negative tests, deaths, denied access to testing, and claimed cures and preventions.

Zero-shot Slot Filling with DPR and RAG

IBM/retrieve-write-slot-filling 17 Apr 2021

Recently, there has been a promising direction of evaluating language models the same way we would evaluate knowledge bases, and the task of slot filling is the best suited to this purpose.

From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding

Kaleidophon/deep-significance NAACL 2021

To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer.

Robust Retrieval Augmented Generation for Zero-shot Slot Filling

ibm/kgi-slot-filling EMNLP 2021

Automatically inducing high quality knowledge graphs from a given collection of documents still remains a challenging problem in AI.