Autoencoding Language Model Based Ensemble Learning for Commonsense Validation and Explanation

7 Apr 2022 · Ngo Quang Huy, Tu Minh Phuong, Ngo Xuan Bach

An ultimate goal of artificial intelligence is to build computer systems that can understand human languages. Understanding commonsense knowledge about the world expressed in text is one of the foundational and challenging problems in creating such intelligent systems. As a step towards this goal, we present in this paper ALMEn, an Autoencoding Language Model based Ensemble learning method for commonsense validation and explanation. By ensembling several advanced pre-trained language models, including RoBERTa, DeBERTa, and ELECTRA, with Siamese neural networks, our method can distinguish natural language statements that are against commonsense (validation subtask) and correctly identify the reason why a statement is against commonsense (explanation selection subtask). Experimental results on the benchmark dataset of SemEval-2020 Task 4 show that our method outperforms state-of-the-art models, reaching 97.9% and 95.4% accuracy on the validation and explanation selection subtasks, respectively.
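To make the ensemble idea concrete, below is a minimal, hypothetical sketch of how a Siamese ensemble of autoencoding language models might handle the validation subtask: the same encoder scores each of two candidate statements, scores are averaged across RoBERTa, DeBERTa, and ELECTRA, and the lower-scoring statement is flagged as against commonsense. The linear plausibility head on the [CLS] embedding and the averaging-based ensembling are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a Siamese ensemble for the validation subtask.
# The plausibility head and score averaging are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class SiameseScorer(nn.Module):
    """Scores a statement's plausibility with a shared pre-trained encoder."""

    def __init__(self, model_name: str):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, statement: str) -> torch.Tensor:
        inputs = self.tokenizer(statement, return_tensors="pt", truncation=True)
        cls = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] embedding
        return self.head(cls).squeeze(-1)  # scalar plausibility score


def validate(statements, scorers):
    """Return the index of the statement judged to be against commonsense.

    Each Siamese scorer sees both statements; scores are averaged across
    the ensemble and the lower-scoring statement is flagged.
    """
    with torch.no_grad():
        avg_scores = torch.stack(
            [torch.stack([s(stmt) for s in scorers]).mean() for stmt in statements]
        )
    return int(avg_scores.argmin())


# Usage (heads are untrained here, so the output is only illustrative):
scorers = [
    SiameseScorer(m)
    for m in ("roberta-base", "microsoft/deberta-base",
              "google/electra-base-discriminator")
]
print(validate(["He put an elephant into the fridge.",
                "He put a turkey into the fridge."], scorers))
```

In a trained system, each scorer's head would be fine-tuned on the validation subtask before ensembling; the explanation selection subtask could reuse the same Siamese scoring over the three candidate reasons.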
