Intent Classification
94 papers with code • 5 benchmarks • 13 datasets
Intent Classification is the task of correctly labeling a natural language utterance with an intent drawn from a predetermined set of intents.
Source: Multi-Layer Ensembling Techniques for Multilingual Intent Classification
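As a minimal illustration of the task definition above, the sketch below trains a toy classifier that maps utterances to a small predetermined set of intents. The data, pipeline, and labels are invented for illustration; this is a baseline sketch, not the ensembling method of the cited paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled utterances covering three illustrative intents.
utterances = [
    "what is the weather tomorrow", "will it rain today",
    "play some jazz music", "put on my workout playlist",
    "set an alarm for 7 am", "wake me up at six",
]
intents = ["weather", "weather", "music", "music", "alarm", "alarm"]

# TF-IDF features feeding a linear classifier: a common simple baseline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

print(clf.predict(["will it snow today"])[0])
```

Real systems replace the TF-IDF features with pretrained language model encodings, but the framing is the same: one label from a fixed intent inventory per utterance.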
Libraries
Use these libraries to find Intent Classification models and implementations.
Latest papers
ITALIC: An Italian Intent Classification Dataset
Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects.
Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training
We consider the task of few-shot intent detection, which involves training a deep learning model to classify utterances based on their underlying intents using only a small amount of labeled data.
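One simple few-shot baseline in this space is nearest-centroid ("prototype") classification: each intent's prototype is the mean vector of its few labeled examples, and a new utterance takes the intent of the closest prototype. The sketch below uses TF-IDF vectors as a stand-in for the PLM sentence embeddings a real system would use; both the data and that substitution are assumptions for illustration, not the fine-tuning methods the paper compares.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# A few labeled examples ("support set") per intent.
support = {
    "book_flight": ["book a flight to paris", "i need a plane ticket"],
    "order_food": ["order a pizza for me", "get me some sushi"],
}

texts = [t for ex in support.values() for t in ex]
vec = TfidfVectorizer().fit(texts)

# Prototype = mean vector of each intent's support examples.
prototypes = {
    intent: np.asarray(vec.transform(ex).mean(axis=0)).ravel()
    for intent, ex in support.items()
}

def classify(utterance):
    v = np.asarray(vec.transform([utterance]).todense()).ravel()

    def cos_dist(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return 1.0 if na == 0 or nb == 0 else 1 - a @ b / (na * nb)

    # Smallest cosine distance to a prototype wins.
    return min(prototypes, key=lambda k: cos_dist(v, prototypes[k]))

print(classify("can you book me a flight"))
```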
Pre-training Intent-Aware Encoders for Zero- and Few-Shot Intent Classification
Intent classification (IC) plays an important role in task-oriented dialogue systems.
ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness
The emergence of generative large language models (LLMs) raises the question: what will their impact on crowdsourcing be?
The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation
Motivated particularly by the task of cross-lingual SLU, we demonstrate that the task of speech translation (ST) is a good means of pretraining speech models for end-to-end SLU on both intra- and cross-lingual scenarios.
Improving End-to-End SLU performance with Prosodic Attention and Distillation
Most End-to-End SLU methods depend on the pretrained ASR or language model features for intent prediction.
ViMQ: A Vietnamese Medical Question Dataset for Healthcare Dialogue System Development
Existing medical text datasets usually take the form of question and answer pairs that support the task of natural language generation, but lack composite annotations of the medical terms.
CitePrompt: Using Prompts to Identify Citation Intent in Scientific Papers
For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings, respectively.
Effective Open Intent Classification with K-center Contrastive Learning and Adjustable Decision Boundary
In this paper, we introduce novel K-center contrastive learning and adjustable decision boundary learning (CLAB) to improve the effectiveness of open intent classification.
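The decision rule behind adjustable-decision-boundary open intent classification can be sketched very simply: each known intent gets a centroid and a per-class radius, and an utterance that falls outside every radius is labeled "open" (unknown intent). In CLAB the representations and radii are learned; the hand-set 2-D values below are purely illustrative.

```python
import numpy as np

# Hand-set centroids and radii in a toy 2-D feature space (illustrative only;
# a real system learns these from utterance embeddings).
centroids = {"weather": np.array([0.0, 0.0]), "music": np.array([5.0, 5.0])}
radii = {"weather": 1.5, "music": 1.5}  # adjustable per-class boundaries

def predict(x):
    x = np.asarray(x, dtype=float)
    # Nearest known-intent centroid.
    intent = min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
    # Outside that intent's boundary -> open (unknown) intent.
    if np.linalg.norm(x - centroids[intent]) > radii[intent]:
        return "open"
    return intent

print(predict([0.2, -0.3]))  # inside the "weather" boundary
print(predict([3.0, 2.0]))   # outside every boundary
```

Enlarging a radius makes its class absorb more nearby utterances; shrinking it routes more of them to "open", which is exactly the trade-off an adjustable boundary tunes.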
Efficient Sequence Transduction by Jointly Predicting Tokens and Durations
TDT models for Speech Recognition achieve better accuracy and up to 2.82x faster inference than conventional Transducers.
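The speed-up comes from the core idea in the title: the joint network predicts a token and a duration together, so greedy decoding can jump ahead by the predicted number of encoder frames instead of advancing one frame at a time. The loop below is a toy sketch of that mechanism with a hand-written stub in place of a trained joint network; it is not the paper's implementation.

```python
BLANK = "_"

def joint(frame, last_token):
    # Stub standing in for the trained joint network: each fake "frame"
    # already carries the (token, duration) pair the model would predict.
    token, duration = frame
    return token, duration

def tdt_greedy_decode(frames):
    hyp, t = [], 0
    while t < len(frames):
        token, duration = joint(frames[t], hyp[-1] if hyp else None)
        if token != BLANK:
            hyp.append(token)
        t += max(1, duration)  # skip ahead; always advance at least one frame
    return hyp

# Frame at index 1 is never visited: the first prediction skips over it.
frames = [("h", 2), ("x", 9), ("i", 1), (BLANK, 3), ("!", 1)]
print("".join(tdt_greedy_decode(frames)))
```

A conventional Transducer would visit every frame; skipping by predicted duration is what yields the reported inference speed-up.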