Intent Classification is the task of correctly labeling a natural language utterance with one intent from a predetermined set of intents.
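To make the task concrete, here is a minimal, self-contained sketch of intent classification: a toy nearest-neighbor classifier over bag-of-words token overlap. The training utterances and intent labels (`book_flight`, `get_weather`) are invented for illustration and are not from any benchmark mentioned below.

```python
from collections import Counter

# Toy labeled utterances (hypothetical examples, not from a real dataset).
TRAIN = [
    ("book a flight to boston", "book_flight"),
    ("i need a plane ticket tomorrow", "book_flight"),
    ("what will the weather be like", "get_weather"),
    ("is it going to rain today", "get_weather"),
]

def bag_of_words(text):
    """Represent an utterance as a multiset of lowercase tokens."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Count the tokens shared by two bags of words."""
    return sum((a & b).values())

def classify(utterance):
    """Label the utterance with the intent of the most similar training example."""
    bow = bag_of_words(utterance)
    _, intent = max(TRAIN, key=lambda ex: similarity(bow, bag_of_words(ex[0])))
    return intent

print(classify("please book a flight"))  # → book_flight
```

Real systems replace the bag-of-words overlap with learned sentence representations, but the input/output contract — utterance in, one intent label out — is the same.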
We have recently seen the emergence of several publicly available Natural Language Understanding (NLU) toolkits, which map user utterances to structured, but more abstract, Dialogue Act (DA) or Intent specifications, while making this process accessible to the lay developer.
In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology.
Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.

In this paper, we introduce the use of Semantic Hashing as embedding for the task of Intent Classification and achieve state-of-the-art performance on three frequently used benchmarks.
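A minimal sketch of the subword hashing idea behind this approach, as commonly described: each token is padded with `#` markers and split into overlapping character trigrams, which serve as vocabulary-independent features robust to typos and rare words. The function names and the choice of `#` padding are illustrative assumptions, not the authors' exact implementation.

```python
def semantic_hash(token, n=3):
    """Split a token into overlapping character n-grams (subword 'hashes').
    The token is padded with '#' so boundary characters get their own n-grams."""
    padded = f"#{token}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def hash_utterance(utterance):
    """Represent an utterance as the subword n-grams of all its tokens."""
    grams = []
    for token in utterance.lower().split():
        grams.extend(semantic_hash(token))
    return grams

print(semantic_hash("book"))  # → ['#bo', 'boo', 'ook', 'ok#']
```

A downstream classifier then embeds each utterance as a (sparse or dense) vector over these n-gram features instead of whole-word tokens.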
We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries.
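A common baseline for the out-of-scope problem described here is to threshold the classifier's top confidence: if no in-scope intent is predicted with sufficient probability, the query is rejected as out-of-scope. This sketch assumes a dictionary of per-intent scores; the threshold value 0.5 is illustrative, not taken from the paper.

```python
def predict_with_oos(scores, threshold=0.5):
    """Return the highest-scoring intent, or 'out_of_scope' when the best
    confidence falls below the threshold (confidence-thresholding baseline)."""
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    return intent if confidence >= threshold else "out_of_scope"

print(predict_with_oos({"book_flight": 0.91, "get_weather": 0.06}))  # → book_flight
print(predict_with_oos({"book_flight": 0.34, "get_weather": 0.30}))  # → out_of_scope
```

The finding above suggests this is exactly where such baselines struggle: out-of-scope queries can still receive high in-scope confidence.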
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents.
General-purpose pretrained sentence encoders such as BERT are not ideal for real-world conversational AI applications: they are computationally heavy, slow, and expensive to train.