Intent Detection

111 papers with code • 17 benchmarks • 20 datasets

Intent Detection is the task of determining the underlying purpose or goal behind a user's search query, given its context. The task plays a significant role in search and recommendation. The traditional approach trains an intent detector model to classify a user's search query into predefined intent categories, given a context. A key challenge is identifying user intents for cold-start sessions, i.e., search sessions initiated by a non-logged-in or unrecognized user.

Source: Analyzing and Predicting Purchase Intent in E-commerce: Anonymous vs. Identified Customers
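The predefined-category formulation above can be sketched with a toy keyword-scoring detector; the intent labels and keywords below are purely illustrative (not from any dataset), and real systems would use a trained classifier instead:

```python
# Toy predefined intent categories with indicative keywords (illustrative only).
INTENT_KEYWORDS = {
    "purchase": {"buy", "order", "price", "cart", "checkout"},
    "support": {"help", "broken", "refund", "return", "issue"},
    "browse": {"show", "list", "new", "popular", "latest"},
}

def detect_intent(query: str, context: str = "") -> str:
    """Score each predefined intent by keyword overlap with the query (+ context)."""
    tokens = set((query + " " + context).lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a default intent when nothing matches
    # (a crude stand-in for the cold-start problem mentioned above).
    return best if scores[best] > 0 else "browse"

print(detect_intent("where can I buy this at a good price"))  # → purchase
```

A learned model replaces the keyword scores with classifier logits, but the interface is the same: query (plus context) in, one of a fixed set of intents out.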

CAE: Mechanism to Diminish the Class Imbalanced in SLU Slot Filling Task

phuongnm94/JointBERT_CAE Advances in Computational Collective Intelligence 2022

Building on the success of the pre-trained BERT model, NLU is addressed via joint intent classification and slot filling, with significantly improved performance.
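A joint intent-and-slot model in this style typically puts two heads on a shared encoder; a minimal NumPy sketch of the two-head idea (hypothetical shapes, random weights, and not the paper's CAE mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden, n_intents, n_slots = 6, 32, 5, 9

# Shared encoder output (stands in for BERT hidden states).
hidden_states = rng.normal(size=(seq_len, hidden))

# Intent head: classify the whole utterance from the [CLS]-like first vector.
W_intent = rng.normal(size=(hidden, n_intents))
intent_logits = hidden_states[0] @ W_intent            # shape (n_intents,)

# Slot head: tag every token independently.
W_slot = rng.normal(size=(hidden, n_slots))
slot_logits = hidden_states @ W_slot                   # shape (seq_len, n_slots)

print(intent_logits.shape, slot_logits.shape)  # (5,) (6, 9)
```

Training sums a sentence-level intent loss and a token-level slot loss over the shared encoder, which is what lets the two tasks inform each other.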

21 Sep 2022

From Disfluency Detection to Intent Detection and Slot Filling

vinairesearch/phoatis_disfluency 17 Sep 2022

We present the first empirical study investigating the influence of disfluency detection on downstream tasks of intent detection and slot filling.

Multi-grained Label Refinement Network with Dependency Structures for Joint Intent Detection and Slot Filling

ZovanZhou/MLRN 9 Sep 2022

To capture the semantic dependency between the syntactic information and task labels, we combine the task specific features with corresponding label embeddings by attention mechanism.
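One common way such a feature/label-embedding combination can be sketched is attention from token features over label embeddings, followed by concatenation; the shapes and weights below are illustrative, not the paper's exact MLRN architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def label_attention(features, label_embeddings):
    """Attend from token features over label embeddings.
    features: (seq_len, d); label_embeddings: (num_labels, d)."""
    scores = features @ label_embeddings.T             # (seq_len, num_labels)
    weights = softmax(scores)                          # attention over labels
    label_context = weights @ label_embeddings         # (seq_len, d)
    # Fuse task-specific features with their label-aware context.
    return np.concatenate([features, label_context], axis=-1)  # (seq_len, 2d)

rng = np.random.default_rng(0)
fused = label_attention(rng.normal(size=(5, 8)), rng.normal(size=(3, 8)))
print(fused.shape)  # (5, 16)
```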

Z-BERT-A: a zero-shot Pipeline for Unknown Intent detection

gt4sd/zberta 15 Aug 2022

In our evaluation, we first analyze the quality of the model after adaptive fine-tuning on known classes.

Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search

jdcomsearch/jdsearch-22 12 Aug 2022

BERT-style models, pre-trained on a general corpus (e.g., Wikipedia) and fine-tuned on a task-specific corpus, have recently emerged as breakthrough techniques in many NLP tasks: question answering, text classification, sequence labeling, and so on.

Learning Dialogue Representations from Consecutive Utterances

amazon-research/dse NAACL 2022

In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks.
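A self-supervised contrastive setup of this kind often treats consecutive utterances as positive pairs and other utterances in the batch as negatives; the InfoNCE-style loss below is a generic illustration of that idea, not DSE's exact recipe:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """Contrastive loss over a batch of (anchor, positive) embedding pairs.
    Row i of `positives` is the positive for row i of `anchors`; all other
    rows act as in-batch negatives. Shapes: (batch, dim) each."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (batch, batch) cosine similarities
    # Diagonal entries are the positive pairs; take -log softmax of them.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 16))
# The loss is near zero when each anchor matches its positive exactly,
# and larger when the pairing is random.
print(info_nce_loss(emb, emb), info_nce_loss(emb, rng.normal(size=(4, 16))))
```

In a dialogue setting the two encoders' inputs would be utterance t and utterance t+1 from the same conversation, so the representation learns what makes neighboring turns cohere.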

26 May 2022

InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning

prakharguptaz/Instructdial 25 May 2022

We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.

DeepStruct: Pretraining of Language Models for Structure Prediction

cgraywang/deepstruct Findings (ACL) 2022

We introduce a method for improving the structural understanding abilities of language models.

21 May 2022

Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization

fanolabs/isointentbert-main NAACL 2022

It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations.

15 May 2022

A Framework to Generate High-Quality Datapoints for Multiple Novel Intent Detection

sukannyapurkayastha/mnid Findings (NAACL) 2022

However, the newer intents may not be explicitly announced and need to be inferred dynamically.

04 May 2022