Counterfactual Detection meets Transfer Learning

Counterfactuals can be considered part of discourse structure and semantics, a core area of Natural Language Understanding. In this paper, we introduce an approach to counterfactual detection and to indexing the antecedents and consequents of counterfactual statements. While transfer learning is already applied to many NLP tasks, it has the potential to excel at a number of novel ones. We show that, thanks to a well-annotated training dataset, counterfactual detection is a straightforward binary classification task that can be implemented with minimal adaptation of existing model architectures, and we introduce a new end-to-end pipeline that processes antecedents and consequents as an entity recognition task, thereby adapting them to token classification.
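The token-classification framing described above can be illustrated by converting span annotations into BIO tags, as is standard in entity recognition. The example below is a minimal sketch: the label names, span boundaries, and annotation format are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: framing antecedent/consequent indexing as token classification.
# Label names ("ANTECEDENT", "CONSEQUENT") and the (start, end, label)
# span format are assumptions for illustration only.

def spans_to_bio(tokens, spans):
    """Convert (start, end, label) token-index spans (end exclusive)
    into a BIO tag sequence aligned with the token list."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

sentence = "If it had rained , the match would have been cancelled".split()
# Hypothetical annotation: antecedent "If it had rained",
# consequent "the match would have been cancelled".
spans = [(0, 4, "ANTECEDENT"), (5, 11, "CONSEQUENT")]
print(spans_to_bio(sentence, spans))
```

A token-classification model (e.g. a fine-tuned transformer encoder) can then be trained to predict one BIO tag per token, recovering the antecedent and consequent spans at inference time.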
