Slot Filling

94 papers with code • 10 benchmarks • 18 datasets

The goal of Slot Filling is to identify, from a running dialog, the slots that correspond to different parameters of the user’s query. For instance, when a user asks for nearby restaurants, a dialog system needs key slots such as location and preferred food type before it can retrieve the appropriate information. The main challenge in the slot-filling task is therefore to extract these target entities from the utterance.
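Slot filling is commonly framed as BIO sequence labeling: a model tags each token, and contiguous `B-`/`I-` spans are decoded into slot values. The sketch below shows only the decoding step on a hand-written utterance and tag sequence (the tokens, tags, and slot names are illustrative, not taken from any dataset mentioned here):

```python
def extract_slots(tokens, tags):
    """Collect (slot_name, value) pairs from BIO tags aligned to tokens."""
    slots = []
    current_name, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new slot span begins; flush any span in progress first.
            if current_name:
                slots.append((current_name, " ".join(current_tokens)))
            current_name, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_name == tag[2:]:
            # Continuation of the current span.
            current_tokens.append(token)
        else:
            # "O" tag (or an inconsistent I- tag): close any open span.
            if current_name:
                slots.append((current_name, " ".join(current_tokens)))
            current_name, current_tokens = None, []
    if current_name:
        slots.append((current_name, " ".join(current_tokens)))
    return slots

tokens = ["find", "cheap", "thai", "food", "near", "union", "square"]
tags   = ["O", "B-price", "B-cuisine", "O", "O", "B-location", "I-location"]
print(extract_slots(tokens, tags))
# → [('price', 'cheap'), ('cuisine', 'thai'), ('location', 'union square')]
```

Producing the tag sequence itself is where the models listed below differ; the decoding step above is shared by most BIO-based approaches.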

Source: Real-time On-Demand Crowd-powered Entity Extraction

Image credit: Robust Retrieval Augmented Generation for Zero-shot Slot Filling

Most implemented papers

BERT for Joint Intent Classification and Slot Filling

monologg/JointBERT 28 Feb 2019

Intent classification and slot filling are two essential tasks for natural language understanding.
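Joint models of this kind typically share one encoder between an utterance-level intent classifier and a per-token slot classifier, and train on the sum of the two losses. A minimal sketch of that combined objective, with made-up model probabilities standing in for real encoder outputs (not JointBERT's actual code):

```python
import math

def cross_entropy(probs, gold):
    """Negative log-likelihood of the gold label under a probability dict."""
    return -math.log(probs[gold])

def joint_loss(intent_probs, intent_gold, slot_probs_per_token, slot_gold):
    """Sum of the intent loss and the token-averaged slot loss."""
    intent_loss = cross_entropy(intent_probs, intent_gold)
    slot_loss = sum(
        cross_entropy(p, g) for p, g in zip(slot_probs_per_token, slot_gold)
    ) / len(slot_gold)
    return intent_loss + slot_loss

# Illustrative outputs for "book thai food": one intent, one tag per token.
intent_probs = {"book_restaurant": 0.9, "get_weather": 0.1}
slot_probs = [
    {"O": 0.8, "B-cuisine": 0.2},
    {"O": 0.1, "B-cuisine": 0.9},
    {"O": 0.7, "B-cuisine": 0.3},
]
loss = joint_loss(intent_probs, "book_restaurant", slot_probs,
                  ["O", "B-cuisine", "O"])
```

Sharing the encoder lets slot evidence inform the intent decision and vice versa, which is the motivation the joint-modeling papers in this list build on.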

Learning End-to-End Goal-Oriented Dialog

facebookresearch/ParlAI 24 May 2016

We show similar result patterns on data extracted from an online concierge service.

Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling

DSKSD/RNN-for-Joint-NLU 6 Sep 2016

Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.

Data Programming: Creating Large Training Sets, Quickly

HazyResearch/snorkel NeurIPS 2016

Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.

Learning Dense Representations of Phrases at Scale

jhyuklee/DensePhrases ACL 2021

Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).

Joint Slot Filling and Intent Detection via Capsule Neural Networks

czhang99/Capsule-NLU ACL 2019

Being able to recognize words as slots and detect the intent of an utterance has been a keen issue in natural language understanding.

Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset

google-research-datasets/dstc8-schema-guided-dialogue 12 Sep 2019

In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains.

KILT: a Benchmark for Knowledge Intensive Language Tasks

facebookresearch/KILT NAACL 2021

We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.

A Knowledge-Grounded Neural Conversation Model

DSTC-MSR-NLP/DSTC7-End-to-End-Conversation-Modeling 7 Feb 2017

We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.

Zero-Shot Relation Extraction via Reading Comprehension

stonybrooknlp/musique CoNLL 2017

We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot.