Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation

28 Aug 2023  ·  Nguyen Anh Tu, Hoang Thi Thu Uyen, Tu Minh Phuong, Ngo Xuan Bach

Multiple intent detection and slot filling are two fundamental and closely related tasks in spoken language understanding. Because the two tasks are closely related, joint models that detect intents and extract slots simultaneously are preferred to individual models that perform each task independently. The accuracy of a joint model depends heavily on its ability to transfer information between the two tasks, so that the result of one task can correct the result of the other. In addition, since a joint model has multiple outputs, training it effectively is also challenging. In this paper, we present a method for multiple intent detection and slot filling that addresses these challenges. First, we propose a bidirectional joint model that explicitly employs intent information to recognize slots and slot features to detect intents. Second, we introduce a novel method for training the proposed joint model using supervised contrastive learning and self-distillation. Experimental results on two benchmark datasets, MixATIS and MixSNIPS, show that our method outperforms state-of-the-art models on both tasks. The results also demonstrate that both the bidirectional design and the training method contribute to the accuracy improvement. Our source code is available at


Results from the Paper

Task                    Dataset   Model  Metric    Value  Global Rank
Semantic Frame Parsing  MixATIS   BiSLU  Accuracy  0.515  #1
Slot Filling            MixATIS   BiSLU  F1        0.894  #1
Intent Detection        MixATIS   BiSLU  Accuracy  0.815  #1
Semantic Frame Parsing  MixSNIPS  BiSLU  Accuracy  0.854  #1
Slot Filling            MixSNIPS  BiSLU  F1        0.972  #2
Intent Detection        MixSNIPS  BiSLU  Accuracy  0.978  #2