We have recently seen the emergence of several publicly available Natural Language Understanding (NLU) toolkits, which map user utterances to structured, but more abstract, Dialogue Act (DA) or Intent specifications, while making this process accessible to the lay developer.
In this paper, we introduce Semantic Hashing as an embedding method for the task of Intent Classification and achieve state-of-the-art performance on three frequently used benchmarks.
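The core idea behind subword semantic hashing can be illustrated with a minimal sketch: character n-grams of each token are hashed into a fixed-size count vector, so morphologically similar or unseen words still share features. The bucket count, n-gram size, and boundary markers below are illustrative assumptions, not the paper's exact configuration.

```python
import zlib

def semantic_hash_features(utterance, n=3, num_buckets=1024):
    """Hash character n-grams of each token into a fixed-size count
    vector (a sketch of subword semantic hashing; parameters assumed)."""
    vec = [0] * num_buckets
    for token in utterance.lower().split():
        padded = f"#{token}#"  # mark word boundaries
        for i in range(len(padded) - n + 1):
            ngram = padded[i:i + n]
            # stable hash (unlike builtin hash(), which is salted per process)
            vec[zlib.crc32(ngram.encode()) % num_buckets] += 1
    return vec
```

The resulting sparse count vectors can then feed any off-the-shelf classifier, which is what makes the approach attractive for small intent datasets.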
We pretrain using a retrieval-based response selection task, effectively leveraging quantization and subword-level parameterization in the dual encoder to build a lightweight memory- and energy-efficient model.
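The dual-encoder setup referenced above can be sketched as follows: context and candidate responses are embedded independently, and relevance is scored by cosine similarity, so response embeddings can be precomputed (and quantized) offline. This is a generic dual-encoder illustration under stated assumptions, not the model's actual architecture.

```python
import numpy as np

def score_responses(context_vec, response_vecs):
    """Dual-encoder retrieval scoring sketch: both sides are encoded
    separately, then compared by cosine similarity. Because the two
    encoders never interact, response_vecs can be cached offline."""
    c = context_vec / np.linalg.norm(context_vec)
    r = response_vecs / np.linalg.norm(response_vecs, axis=1, keepdims=True)
    return r @ c  # one similarity score per candidate response
```

Ranking candidates by this score is what turns response selection into a cheap nearest-neighbor lookup at inference time.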
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
Intent classification and slot filling are two essential tasks for natural language understanding.
Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents.
We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries.
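A common baseline for handling out-of-scope queries is confidence thresholding: if the classifier's top softmax probability falls below a threshold, the query is rejected rather than forced into an in-scope intent. The sketch below is a hypothetical illustration of that baseline; the intent labels and threshold value are assumptions, not taken from the benchmark.

```python
import numpy as np

def classify_with_oos(probs, labels, threshold=0.7):
    """Confidence-threshold baseline for out-of-scope detection:
    return the top in-scope intent only when the model is confident
    enough; otherwise reject the query as out of scope."""
    top = int(np.argmax(probs))
    return labels[top] if probs[top] >= threshold else "out_of_scope"
```

A weakness of this baseline, consistent with the finding above, is that overconfident classifiers assign high probability even to out-of-scope queries, so the threshold alone often fails to separate them.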