KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference

In order to better understand the reasons behind model behaviors (i.e., predictions), most recent works have exploited generative models to provide complementary explanations. However, existing approaches in NLP mainly focus on "WHY A" rather than the contrastive "WHY A NOT B", which has been shown to better distinguish confusing candidates and improve data efficiency in other research fields. In this paper, we focus on generating contrastive explanations with counterfactual examples in NLI and propose a novel Knowledge-Aware Contrastive Explanation generation framework (KACE). Specifically, we first identify rationales (i.e., key phrases) in the input sentences and use them as key perturbations for generating counterfactual examples. After obtaining qualified counterfactual examples, we take them, together with the original examples and external knowledge, as input to a knowledge-aware generative pre-trained language model that produces contrastive explanations. Experimental results show that contrastive explanations help clarify the difference between the predicted answer and other plausible but wrong candidates. Moreover, an NLI model enhanced with contrastive explanations achieves an accuracy of 91.9% on SNLI, an improvement of 5.7% over ETPA ("Explain-Then-Predict-Attention") and 0.6% over NILE ("WHY A").
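
The abstract describes a three-stage pipeline: identify rationales, perturb them to obtain counterfactual examples, and feed the original example, a counterfactual, and external knowledge to a generative model. The sketch below is a minimal, illustrative version of that flow, not the authors' released code: the perturbation lexicon, the toy rationale heuristic, and the prompt format are all assumptions standing in for KACE's trained rationale extractor, counterfactual filter, and knowledge-aware language model.

```python
from dataclasses import dataclass

# Hypothetical perturbation lexicon: a tiny stand-in for the learned
# rationale perturbation step described in the paper.
PERTURBATIONS = {
    "sleeping": ["running", "eating"],
    "man": ["woman", "child"],
    "outdoors": ["indoors"],
}

@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # "entailment" | "neutral" | "contradiction"

def extract_rationales(example: NLIExample) -> list[str]:
    """Toy rationale identification: keep hypothesis tokens that appear
    in the perturbation lexicon (KACE uses a trained extractor)."""
    return [tok for tok in example.hypothesis.lower().split()
            if tok.strip(".,") in PERTURBATIONS]

def generate_counterfactuals(example: NLIExample) -> list[NLIExample]:
    """Perturb each rationale in the hypothesis to produce candidate
    counterfactuals; their labels would be re-predicted and filtered
    downstream, so they are left undetermined here."""
    counterfactuals = []
    for rationale in extract_rationales(example):
        for substitute in PERTURBATIONS[rationale.strip(".,")]:
            new_hyp = example.hypothesis.replace(rationale, substitute)
            counterfactuals.append(
                NLIExample(example.premise, new_hyp, label="?"))
    return counterfactuals

def build_prompt(original: NLIExample, counterfactual: NLIExample,
                 knowledge: str) -> str:
    """Serialize original example + counterfactual + external knowledge
    into one sequence for a knowledge-aware generative LM fine-tuned to
    emit "WHY A NOT B" explanations (format is an assumption)."""
    return (f"premise: {original.premise} "
            f"hypothesis: {original.hypothesis} label: {original.label} "
            f"counterfactual hypothesis: {counterfactual.hypothesis} "
            f"knowledge: {knowledge} explanation:")

if __name__ == "__main__":
    ex = NLIExample("A man is sleeping on a bench outdoors.",
                    "A man is sleeping.", "entailment")
    for cf in generate_counterfactuals(ex):
        print(build_prompt(ex, cf, knowledge="sleeping excludes running"))
```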
