Intent Detection
111 papers with code • 17 benchmarks • 20 datasets
Intent Detection is the task of determining the underlying purpose or goal behind a user's search query, given its context. The task plays a significant role in search and recommendation. The traditional approach uses an intent detector model to classify a user's search query into predefined intent categories, given the context. A key challenge is identifying user intents for cold-start sessions, i.e., search sessions initiated by a non-logged-in or otherwise unrecognized user.
Source: Analyzing and Predicting Purchase Intent in E-commerce: Anonymous vs. Identified Customers
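The traditional classify-into-predefined-categories approach described above can be sketched with a minimal nearest-centroid bag-of-words classifier. The intent labels and training queries below are hypothetical, for illustration only; a production system would use a learned model rather than word overlap.

```python
from collections import Counter

# Hypothetical predefined intent categories with a few example queries each.
TRAINING_DATA = {
    "purchase": ["buy running shoes", "order a new phone", "add laptop to cart"],
    "support":  ["return my order", "track a missing package", "refund request"],
    "browse":   ["show me summer dresses", "latest laptop deals", "new arrivals"],
}

def bag_of_words(text):
    return Counter(text.lower().split())

# One bag-of-words "centroid" per intent, built by summing the counters.
CENTROIDS = {
    intent: sum((bag_of_words(q) for q in queries), Counter())
    for intent, queries in TRAINING_DATA.items()
}

def detect_intent(query):
    """Return the predefined intent whose centroid overlaps most with the query."""
    words = bag_of_words(query)
    def overlap(intent):
        return sum(min(words[w], CENTROIDS[intent][w]) for w in words)
    return max(CENTROIDS, key=overlap)

print(detect_intent("buy new shoes"))
```

Real intent detectors replace the centroids with a trained classifier (e.g., a fine-tuned encoder), but the interface is the same: query in, one of a fixed set of intents out.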
Libraries
Use these libraries to find Intent Detection models and implementations
Latest papers
CAE: Mechanism to Diminish the Class Imbalanced in SLU Slot Filling Task
Building on the success of the pre-trained BERT model, NLU is addressed through joint Intent Classification and Slot Filling, with significantly improved performance.
From Disfluency Detection to Intent Detection and Slot Filling
We present the first empirical study investigating the influence of disfluency detection on downstream tasks of intent detection and slot filling.
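The pipeline this paper studies — clean up disfluent speech before intent detection and slot filling — can be illustrated with a toy preprocessing step. The filler list and the repetition heuristic below are illustrative assumptions, not the paper's method.

```python
def remove_disfluencies(utterance):
    """Drop filler tokens and collapse immediate word repetitions (toy heuristic)."""
    fillers = {"um", "uh", "er", "hmm"}  # illustrative filler list
    cleaned = []
    for token in utterance.lower().split():
        if token in fillers:
            continue  # skip fillers like "um", "uh"
        if cleaned and cleaned[-1] == token:
            continue  # collapse "book book a flight" -> "book a flight"
        cleaned.append(token)
    return " ".join(cleaned)

print(remove_disfluencies("um I want want to uh book a flight"))
# -> "i want to book a flight"
```

An intent detector would then run on the cleaned utterance; the paper measures how much this kind of upstream cleanup actually helps the downstream tasks.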
Multi-grained Label Refinement Network with Dependency Structures for Joint Intent Detection and Slot Filling
To capture the semantic dependency between the syntactic information and task labels, we combine the task specific features with corresponding label embeddings by attention mechanism.
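Combining task-specific features with label embeddings via an attention mechanism, as the summary describes, reduces to dot-product attention over the label embedding table. The vectors below are toy values; the paper's actual network and multi-grained refinement are more involved.

```python
import math

def attend_to_labels(feature, label_embeddings):
    """Dot-product attention of a feature vector over label embeddings.

    Returns the softmax attention weights and the attended (fused) label vector.
    """
    scores = [sum(f * e for f, e in zip(feature, emb)) for emb in label_embeddings]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of label embeddings, dimension by dimension.
    attended = [
        sum(w * emb[d] for w, emb in zip(weights, label_embeddings))
        for d in range(len(feature))
    ]
    return weights, attended

feature = [1.0, 0.0, 0.5]                       # toy task-specific feature
labels = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]     # two toy intent-label embeddings
weights, fused = attend_to_labels(feature, labels)
```

The fused vector can then be concatenated with (or added to) the task features so that label semantics inform the final prediction.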
Z-BERT-A: a zero-shot Pipeline for Unknown Intent detection
In our evaluation, we first analyze the quality of the model after adaptive fine-tuning on known classes.
Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search
BERT-style models pre-trained on a general corpus (e.g., Wikipedia) and fine-tuned on a task-specific corpus have recently emerged as breakthrough techniques in many NLP tasks: question answering, text classification, sequence labeling, and so on.
Learning Dialogue Representations from Consecutive Utterances
In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks.
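The contrastive idea behind DSE — consecutive utterances in a dialogue form positive pairs, other utterances in the batch serve as negatives — can be sketched with an InfoNCE-style loss. The embeddings and temperature below are toy assumptions, not the paper's configuration.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: -log( exp(sim(a,p)/t) / sum over positive + negatives )."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor   = [1.0, 0.2]                 # toy embedding of one utterance
positive = [0.9, 0.3]                 # the next utterance in the same dialogue
negs     = [[-1.0, 0.5], [0.1, -1.0]] # unrelated utterances from the batch
loss = info_nce(anchor, positive, negs)
```

Minimizing this loss pulls embeddings of consecutive utterances together and pushes unrelated ones apart, which is what makes the representations useful for tasks like intent detection.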
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.
DeepStruct: Pretraining of Language Models for Structure Prediction
We introduce a method for improving the structural understanding abilities of language models.
Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization
It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations.
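A simplified stand-in for the isotropization step is to whiten the intent embeddings so each dimension is zero-mean and unit-variance, making the embedding space more uniform. This per-dimension standardization is a loose sketch, not the paper's full procedure.

```python
import math

def whiten(vectors):
    """Standardize each dimension of a set of embeddings (toy isotropization)."""
    dims, n = len(vectors[0]), len(vectors)
    means = [sum(v[d] for v in vectors) / n for d in range(dims)]
    stds = [
        math.sqrt(sum((v[d] - means[d]) ** 2 for v in vectors) / n) or 1.0
        for d in range(dims)  # `or 1.0` guards against zero variance
    ]
    return [[(v[d] - means[d]) / stds[d] for d in range(dims)] for v in vectors]

vecs = [[2.0, 10.0], [4.0, 30.0], [6.0, 50.0]]  # toy intent embeddings
white = whiten(vecs)
```

After whitening, distances between embeddings are less dominated by a few high-variance dimensions, which tends to help few-shot classifiers built on top of them.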
A Framework to Generate High-Quality Datapoints for Multiple Novel Intent Detection
However, the newer intents may not be explicitly announced and need to be inferred dynamically.