Intent Recognition
21 papers with code • 1 benchmark • 3 datasets
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition
It is composed of two main modules: open intent detection and open intent discovery.
Call Larisa Ivanovna: Code-Switching Fools Multilingual NLU Models
This is in line with the common understanding of how multilingual models transfer knowledge between languages.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
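This paper's finding is often summarized as the "Chinchilla" rule of thumb: roughly 20 training tokens per model parameter, with training compute approximated by C ≈ 6·N·D FLOPs (N parameters, D tokens). A minimal sketch under those assumptions (the helper name and the rounded budget figure are illustrative, not from the paper's code):

```python
def compute_optimal_allocation(flops_budget, tokens_per_param=20):
    """Split a FLOP budget into (params, tokens), assuming C = 6*N*D
    and the rule-of-thumb ratio D = k*N (k ~ 20 tokens per parameter)."""
    # C = 6 * N * (k * N)  =>  N = sqrt(C / (6 * k))
    n_params = (flops_budget / (6 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# With a budget of roughly 5.76e23 FLOPs, this recovers a model close to
# Chinchilla's reported 70B parameters trained on ~1.4T tokens.
params, tokens = compute_optimal_allocation(5.76e23)
print(f"params ~ {params:.2e}, tokens ~ {tokens:.2e}")
```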
Do We Need Online NLU Tools?
In this paper, we suggest criteria to choose the best intent recognition algorithm for an application.
Continual Learning in Task-Oriented Dialogue Systems
Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.
Are Pretrained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection
Pre-trained Transformer-based models were reported to be robust in intent classification.
Representation based meta-learning for few-shot spoken intent recognition
Spoken intent detection has become a popular approach for interfacing easily with various smart devices.
Developing a Chatbot system using Deep Learning based for Universities consultancy
In addition, we use a Deep Reinforcement Learning architecture to train an agent for the dialogue management task.
When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems
Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.