Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer.
Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.
Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is.
The study of language change through parallel corpora can be advantageous for analyzing the complex interactions between time, text domain, and language.
The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
We present a modular framework for the rapid prototyping of linguistic, web-based visual analytics applications.