In this paper, we present MambaNet, a hybrid neural network for predicting the outcomes of basketball games.
More concretely, our CX-ToM framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
Our approach achieved at least a ~15% improvement over contemporary approaches when evaluated on subjects and tasks not seen during model training.
We present a new explainable AI (XAI) framework aimed at increasing justified human trust in, and reliance on, the AI system through explanations.
Fake news spread through media outlets poses a real threat to the trustworthiness of information, and detecting fake news has attracted increasing attention in recent years.