Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer.
We then used this framework to compare the surveyed companies and identify differences in their areas of emphasis.
This paper systematically derives design dimensions for the structured evaluation of explainable artificial intelligence (XAI) approaches.
We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic.
We present a modular framework for the rapid prototyping of linguistic, web-based visual analytics applications.
Ensembles of classifier models typically outperform individual classifiers on a given dataset and classification task.
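The intuition behind this claim can be sketched with a minimal, self-contained example (not drawn from any of the surveyed papers): when individual classifiers make errors on different examples, a simple majority vote over their predictions can correct those errors. The classifiers here are simulated as fixed prediction lists purely for illustration.

```python
# Hedged sketch: majority-vote ensembling on a toy dataset.
# Three simulated classifiers each err on a disjoint set of examples,
# so no single model is perfect, but the majority vote is.
from collections import Counter

true_labels = [1, 1, 1, 0, 0, 0, 1, 0, 1]

# Simulated per-model predictions; each model's single error (a flipped
# label) falls on a different index, so errors never overlap.
clf_a = [0, 1, 1, 0, 0, 0, 1, 0, 1]  # wrong at index 0
clf_b = [1, 0, 1, 0, 0, 0, 1, 0, 1]  # wrong at index 1
clf_c = [1, 1, 0, 0, 0, 0, 1, 0, 1]  # wrong at index 2

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def majority_vote(*model_preds):
    """Pick the most common predicted label for each example."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*model_preds)]

ensemble = majority_vote(clf_a, clf_b, clf_c)

print(accuracy(clf_a, true_labels))     # each single model: 8/9 ≈ 0.889
print(accuracy(ensemble, true_labels))  # ensemble: 1.0
```

The effect depends on the models' errors being at least partly uncorrelated; if all three classifiers failed on the same examples, the vote would inherit those failures.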