CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation

NAACL (DaSH) 2021  ·  Dustin Arendt, Zhuanyi Huang, Prasha Shrestha, Ellyn Ayton, Maria Glenski, Svitlana Volkova

Evaluation beyond aggregate performance metrics, e.g., F1-score, is crucial both to establish an appropriate level of trust in machine learning models and to identify directions for future model improvement. In this paper we demonstrate CrossCheck, an interactive visualization tool for rapid cross-model comparison and reproducible error analysis. We describe the tool and discuss its design and implementation details. We then present three use cases (named entity recognition, reading comprehension, and clickbait detection) that show the benefits of using the tool for model evaluation. CrossCheck enables data scientists to make informed decisions when choosing between multiple models, identify when the models are correct and for which examples, investigate whether the models make the same mistakes as humans, evaluate models' generalizability, and highlight models' limitations, strengths, and weaknesses. Furthermore, CrossCheck is implemented as a Jupyter widget, which allows rapid and convenient integration into data scientists' model development workflows.
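As an illustration of the kind of cross-model error analysis the abstract describes, the sketch below (hypothetical; not the CrossCheck API) buckets each test example by which of two models predicted it correctly — the per-example agreement structure that a tool like CrossCheck would let an analyst explore interactively instead of reading only an aggregate F1-score.

```python
# Hypothetical sketch of per-example cross-model comparison (not CrossCheck's
# actual API). Given gold labels and predictions from two models, count how
# many examples fall into each agreement bucket.
from collections import Counter

def cross_model_buckets(gold, preds_a, preds_b):
    """Bucket examples by model agreement with the gold label:
    'both_correct', 'only_a', 'only_b', or 'both_wrong'."""
    buckets = Counter()
    for g, a, b in zip(gold, preds_a, preds_b):
        if a == g and b == g:
            buckets["both_correct"] += 1
        elif a == g:
            buckets["only_a"] += 1
        elif b == g:
            buckets["only_b"] += 1
        else:
            buckets["both_wrong"] += 1
    return buckets

# Toy clickbait-detection example (labels invented for illustration).
gold    = ["clickbait", "news", "news", "clickbait"]
model_a = ["clickbait", "news", "clickbait", "news"]
model_b = ["clickbait", "clickbait", "news", "news"]
print(cross_model_buckets(gold, model_a, model_b))
# → Counter({'both_correct': 1, 'only_a': 1, 'only_b': 1, 'both_wrong': 1})
```

Examples in the `only_a` and `only_b` buckets are the interesting ones for error analysis: they reveal where the models' strengths and weaknesses diverge, rather than a single averaged score.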

