Slot Filling

129 papers with code • 12 benchmarks • 21 datasets

The goal of Slot Filling is to identify, from a running dialog, the different slots that correspond to parameters of the user’s query. For instance, when a user asks for nearby restaurants, slots for the location and the preferred type of food must be filled before a dialog system can retrieve the appropriate information. The main challenge in the slot-filling task is therefore to extract the target entities.

Source: Real-time On-Demand Crowd-powered Entity Extraction

Image credit: Robust Retrieval Augmented Generation for Zero-shot Slot Filling
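
In practice, slot filling is usually cast as sequence labeling: each token of the utterance receives a BIO tag, and contiguous tagged spans become slot values. Below is a minimal sketch of the BIO-decoding step in Python; the tokens, tags, and slot names are illustrative rather than drawn from any particular dataset or model.

```python
# Minimal sketch: turning BIO slot tags into (slot, value) pairs.
# The tags are hypothetical tagger output used for illustration only;
# a real system would obtain them from a trained sequence labeler.

def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (slot_name, slot_value) spans."""
    slots, span, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if span:
                slots.append((label, " ".join(span)))
            span, label = [tok], tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            span.append(tok)
        else:
            if span:
                slots.append((label, " ".join(span)))
            span, label = [], None
    if span:
        slots.append((label, " ".join(span)))
    return slots

tokens = ["find", "a", "cheap", "italian", "restaurant", "near", "downtown"]
tags   = ["O",    "O", "B-price", "B-food", "O", "O", "B-location"]
print(decode_bio(tokens, tags))
# [('price', 'cheap'), ('food', 'italian'), ('location', 'downtown')]
```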

Latest papers with no code

Prompt Perturbation Consistency Learning for Robust Language Models

no code yet • 24 Feb 2024

Their performance on sequence labeling tasks such as intent classification and slot filling (IC-SF), a central component of personal assistant systems, lags significantly behind that of discriminative models.
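
For reference, a single IC-SF example pairs one intent label per utterance (the classification part) with one BIO tag per token (the slot-filling part); the utterance and labels below are illustrative, not taken from this paper's data.

```python
# Hypothetical IC-SF example: one intent label for the whole utterance,
# one BIO slot tag per token. All labels here are made up for illustration.
example = {
    "utterance": ["play", "jazz", "in", "the", "kitchen"],
    "intent": "PlayMusic",                         # IC target
    "tags": ["O", "B-genre", "O", "O", "B-room"],  # SF targets
}
```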

API-BLEND: A Comprehensive Corpora for Training and Benchmarking API LLMs

no code yet • 23 Feb 2024

There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks.

Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task

no code yet • 22 Feb 2024

In this study, we address the challenges posed by input perturbations in slot filling by proposing Noise-BERT, a unified Perturbation-Robust Framework with Noise Alignment Pre-training.

Decoupling Representation and Knowledge for Few-Shot Intent Classification and Slot Filling

no code yet • 21 Dec 2023

Current works therefore first train a model on source domains with sufficient labeled data, and then transfer the model to target domains where only scarce labeled data is available.

Co-guiding for Multi-intent Spoken Language Understanding

no code yet • 22 Nov 2023

For the first stage, we propose single-task supervised contrastive learning, and for the second stage, we propose co-guiding supervised contrastive learning, which considers the two tasks' mutual guidances in the contrastive learning procedure.
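
As background for the excerpt above, a generic supervised contrastive loss (in the style of Khosla et al., 2020) is sketched below; this is only the standard building block named in the abstract, not the paper's co-guiding formulation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Standard supervised contrastive loss over a batch of embeddings.

    features: (N, d) tensor of encoder outputs; labels: (N,) class ids.
    Positives for an anchor are all other samples sharing its label.
    """
    z = F.normalize(features, dim=1)                 # unit-norm embeddings
    sim = z @ z.T / temperature                      # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))  # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor.mean()
```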

Speech-based Slot Filling using Large Language Models

no code yet • 13 Nov 2023

Recent advances in large language models (LLMs) have demonstrated unprecedented ability across a wide range of language tasks.

Schema Graph-Guided Prompt for Multi-Domain Dialogue State Tracking

no code yet • 10 Nov 2023

Tracking dialogue states is an essential task in task-oriented dialogue systems; it involves filling in pre-defined slots, corresponding to a schema, with the necessary information.
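
As a toy illustration of that setup, a dialogue state can be kept as a slot-to-value mapping that is validated against the schema and updated turn by turn; the "restaurant" schema and slot names below are hypothetical.

```python
# Toy schema-guided dialogue state: slots come from a pre-defined schema
# and are filled turn by turn. The schema below is made up for illustration.
schema = {"restaurant": ["food", "area", "price_range"]}

def update_state(state, domain, slot, value):
    """Write a (slot, value) pair into the state, checking it against the schema."""
    if slot not in schema[domain]:
        raise KeyError(f"slot {slot!r} not defined for domain {domain!r}")
    state.setdefault(domain, {})[slot] = value
    return state

state = {}
update_state(state, "restaurant", "food", "italian")   # turn 1
update_state(state, "restaurant", "area", "downtown")  # turn 2
print(state)  # {'restaurant': {'food': 'italian', 'area': 'downtown'}}
```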

Improving End-to-End Speech Processing by Efficient Text Data Utilization with Latent Synthesis

no code yet • 9 Oct 2023

For SLU, LaSyn improves our E2E baseline by absolute 4.1% for intent classification accuracy and 3.8% for slot filling SLU-F1 on SLURP, and absolute 4.49% and 2.25% for exact match (EM) and EM-Tree accuracies on STOP respectively.

Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations

no code yet • 5 Oct 2023

The proposed dataset contains five types of human-annotated noise, all of which occur in real-world scenarios, and we integrate extensive robust-training methods for slot filling into the proposed framework.

Prompting and Adapter Tuning for Self-supervised Encoder-Decoder Speech Model

no code yet • 4 Oct 2023

Notably, in the low-resource scenario, prompting consistently outperforms adapter tuning.