Intent Classification

59 papers with code • 5 benchmarks • 10 datasets

Intent Classification is the task of labeling a natural language utterance with the correct intent from a predetermined set of intents.

Source: Multi-Layer Ensembling Techniques for Multilingual Intent Classification

Most implemented papers

BERT for Joint Intent Classification and Slot Filling

monologg/JointBERT 28 Feb 2019

Intent classification and slot filling are two essential tasks for natural language understanding.
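
Since the two tasks share a sentence encoder, the joint model is commonly a single BERT with two heads: the pooled [CLS] vector feeds an intent classifier while the per-token vectors feed a slot tagger, and the two cross-entropy losses are summed during training. A minimal PyTorch sketch of that structure follows; the head shapes and dropout are illustrative, not the paper's exact configuration.

```python
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents: int, num_slots: int, dropout: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        self.intent_head = nn.Linear(hidden, num_intents)  # sentence-level label
        self.slot_head = nn.Linear(hidden, num_slots)      # one label per token

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(self.dropout(out.pooler_output))
        slot_logits = self.slot_head(self.dropout(out.last_hidden_state))
        return intent_logits, slot_logits
```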

Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling

DSKSD/RNN-for-Joint-NLU 6 Sep 2016

Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.

Induction Networks for Few-Shot Text Classification

zhongyuchen/few-shot-text-classification IJCNLP 2019

The key idea is to learn a general representation of each class from the support set and then compare it to new queries.
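
The simplest version of this idea builds one prototype vector per class by averaging its support embeddings and classifies a query by similarity to the prototypes; the paper's Induction Networks replace the plain mean with a dynamic-routing induction module. A minimal sketch of the prototype baseline, assuming some sentence encoder has already produced the embeddings:

```python
import torch
import torch.nn.functional as F

def classify_query(query_emb, support_embs, support_labels, num_classes):
    """query_emb: (d,); support_embs: (n, d); support_labels: (n,) int tensor."""
    prototypes = torch.stack([
        support_embs[support_labels == c].mean(dim=0)  # class representation
        for c in range(num_classes)
    ])
    scores = F.cosine_similarity(query_emb.unsqueeze(0), prototypes, dim=-1)
    return scores.argmax().item()  # index of the most similar class
```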

Benchmarking Natural Language Understanding Services for building Conversational Agents

xliuhw/NLU-Evaluation-Data 13 Mar 2019

We have recently seen the emergence of several publicly available Natural Language Understanding (NLU) toolkits, which map user utterances to structured, but more abstract, Dialogue Act (DA) or Intent specifications, while making this process accessible to the lay developer.

An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction

clinc/oos-eval IJCNLP 2019

We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries.
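
A common baseline probed with this dataset is softmax-confidence thresholding: a query is flagged as out-of-scope when the classifier's top probability falls below a threshold. A minimal sketch, with the threshold value and the -1 out-of-scope label chosen for illustration:

```python
import torch
import torch.nn.functional as F

def predict_with_oos(logits: torch.Tensor, threshold: float = 0.7):
    """logits: (batch, num_intents) from any intent classifier."""
    probs = F.softmax(logits, dim=-1)
    conf, intent = probs.max(dim=-1)
    # Low-confidence predictions are treated as out-of-scope (-1).
    return torch.where(conf >= threshold, intent, torch.full_like(intent, -1))
```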

Subword Semantic Hashing for Intent Classification on Small Datasets

kumar-shridhar/Know-Your-Intent 16 Oct 2018

In this paper, we introduce the use of Semantic Hashing as embedding for the task of Intent Classification and achieve state-of-the-art performance on three frequently used benchmarks.
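
Semantic hashing here builds features from subword character trigrams: each token is wrapped in boundary markers and split into overlapping trigrams, which then feed a standard classifier. A minimal sketch of that featurizer (the paper's exact preprocessing may differ):

```python
def semantic_hash(text: str) -> list[str]:
    """Return boundary-marked character trigrams for each whitespace token."""
    trigrams = []
    for token in text.lower().split():
        padded = f"#{token}#"
        trigrams.extend(padded[i:i + 3] for i in range(len(padded) - 2))
    return trigrams

# semantic_hash("book a flight")
# -> ['#bo', 'boo', 'ook', 'ok#', '#a#', '#fl', 'fli', 'lig', 'igh', 'ght', 'ht#']
```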

ConveRT: Efficient and Accurate Conversational Representations from Transformers

davidalami/convert Findings of the Association for Computational Linguistics 2020

General-purpose pretrained sentence encoders such as BERT are not ideal for real-world conversational AI applications; they are computationally heavy, slow, and expensive to train.

Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection

huawei-noah/noah-research 11 Jan 2021

Embeddings of in-domain and out-of-domain utterances produced by a fine-tuned Transformer are well separated, and the Mahalanobis distance captures this disparity easily.
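
The usual recipe is to fit per-class means and a shared covariance on in-domain training embeddings, then score a query by its Mahalanobis distance to the nearest class mean; a large distance to every class suggests an out-of-domain utterance. A minimal NumPy sketch (which layer's embeddings to use and any covariance regularization are implementation choices):

```python
import numpy as np

def fit_mahalanobis(embs: np.ndarray, labels: np.ndarray):
    """embs: (n, d) in-domain embeddings; labels: (n,) intent ids."""
    classes = np.unique(labels)
    means = np.stack([embs[labels == c].mean(axis=0) for c in classes])
    centered = embs - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(embs)  # covariance shared across classes
    return means, np.linalg.pinv(cov)        # pseudo-inverse for stability

def ood_score(query: np.ndarray, means: np.ndarray, cov_inv: np.ndarray) -> float:
    diffs = means - query                               # (num_classes, d)
    d2 = np.einsum("cd,de,ce->c", diffs, cov_inv, diffs)
    return float(d2.min())  # high score = far from every class = likely OOD
```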

The First Evaluation of Chinese Human-Computer Dialogue Technology

InsaneLife/ChineseNLPCorpus 29 Sep 2017

In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology.