We have recently seen the emergence of several publicly available Natural Language Understanding (NLU) toolkits, which map user utterances to structured, but more abstract, Dialogue Act (DA) or Intent specifications, while making this process accessible to the lay developer.
We pretrain using a retrieval-based response selection task, effectively leveraging quantization and subword-level parameterization in the dual encoder to build a lightweight memory- and energy-efficient model.
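As a rough illustration of the pretraining objective, here is a minimal dual-encoder sketch trained with response selection over in-batch negatives. All sizes, names, and the mean-pooling choice are illustrative assumptions; the actual model's quantization and subword details are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Minimal dual encoder: a shared subword embedding table
    with separate projection heads for contexts and responses."""
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # shared subword table
        self.ctx_proj = nn.Linear(dim, dim)         # context head
        self.rsp_proj = nn.Linear(dim, dim)         # response head

    def encode(self, ids, proj):
        pooled = self.embed(ids).mean(dim=1)        # mean-pool subword vectors
        return F.normalize(proj(pooled), dim=-1)

    def forward(self, ctx_ids, rsp_ids):
        c = self.encode(ctx_ids, self.ctx_proj)
        r = self.encode(rsp_ids, self.rsp_proj)
        return c @ r.t()                            # pairwise similarity matrix

# Response selection with in-batch negatives: the i-th context's true
# response is the i-th response; every other row serves as a negative.
model = DualEncoder()
ctx = torch.randint(0, 30000, (8, 20))  # toy batch of subword id sequences
rsp = torch.randint(0, 30000, (8, 20))
scores = model(ctx, rsp)
loss = F.cross_entropy(scores, torch.arange(8))
```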
In this paper, we introduce the use of Semantic Hashing as an embedding method for the task of Intent Classification and achieve state-of-the-art performance on three frequently used benchmarks.
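A minimal sketch of the general idea: character n-grams of each token are hashed into a fixed-size count vector that can feed any downstream classifier. The bucket count, boundary markers, and md5-based bucketing below are illustrative choices, not the paper's exact configuration.

```python
import hashlib
import numpy as np

def semantic_hash_features(text, n=3, num_buckets=4096):
    """Hash character n-grams of each token into a fixed-size count vector.
    num_buckets and the md5 bucketing are illustrative assumptions."""
    vec = np.zeros(num_buckets, dtype=np.float32)
    for token in text.lower().split():
        padded = f"#{token}#"  # mark word boundaries before extracting n-grams
        for i in range(len(padded) - n + 1):
            ngram = padded[i:i + n]
            bucket = int(hashlib.md5(ngram.encode()).hexdigest(), 16) % num_buckets
            vec[bucket] += 1.0
    return vec

x = semantic_hash_features("book a flight to boston")
# The resulting vector can be fed to any off-the-shelf classifier.
```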
Intent classification and slot filling are two essential tasks for natural language understanding.
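To make the two tasks concrete, here is a toy annotated example using the common BIO tagging convention; the utterance and labels are illustrative, not drawn from any specific dataset.

```python
# Intent classification assigns one sentence-level label;
# slot filling assigns one tag per token (BIO scheme).
utterance = ["book", "a", "flight", "to", "boston", "tomorrow"]
intent = "BookFlight"                             # sentence-level label
slots = ["O", "O", "O", "O", "B-dest", "B-date"]  # token-level labels
assert len(slots) == len(utterance)
```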
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
This approach achieves state-of-the-art performance for Citation Intent Classification on ACL-ARC (using extra training data).
Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents.
We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries.
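One simple baseline for surfacing such failures is confidence thresholding: treat a query as out-of-scope when the top softmax probability is low. This is a standard technique rather than the paper's specific method, and the threshold value below is an illustrative assumption.

```python
import numpy as np

def predict_with_oos(probs, labels, threshold=0.7):
    """Return the top intent, or 'out_of_scope' when the classifier's
    max softmax probability falls below the confidence threshold.
    threshold=0.7 is an illustrative value, not from the paper."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "out_of_scope"
    return labels[top]

labels = ["book_flight", "play_music", "set_alarm"]
print(predict_with_oos(np.array([0.40, 0.35, 0.25]), labels))  # out_of_scope
print(predict_with_oos(np.array([0.90, 0.06, 0.04]), labels))  # book_flight
```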
This is because current approaches are built for, and trained on, clean and complete data, and thus cannot extract features that adequately represent incomplete data.