When black box algorithms are (not) appropriate: a principled prediction-problem ontology

21 Jan 2020  ·  Jordan Rodu, Michael Baiocchi

In the 1980s a new, extraordinarily productive way of reasoning about algorithms emerged. In this paper, we introduce the term "outcome reasoning" to refer to this form of reasoning. Though outcome reasoning has come to dominate areas of data science, it has been under-discussed and its impact under-appreciated. For example, outcome reasoning is the primary way we reason about whether "black box" algorithms are performing well. In this paper we analyze outcome reasoning's most common form (i.e., the common task framework) and its limitations. We discuss why a large class of prediction problems is inappropriate for outcome reasoning. As an example, we find that the common task framework does not provide a foundation for deploying an algorithm in a real-world situation. Building on its core features, we identify a class of problems where this new form of reasoning can be used in deployment. We purposefully develop a novel framework so that both technical and non-technical people can discuss and identify the key features of their prediction problem and whether it is suitable for outcome reasoning.
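To make the object of critique concrete, the following is a minimal sketch of the common task framework style of evaluation as it is commonly understood: competing predictors are judged purely by a single score on a frozen, held-out test set. The toy task, the two predictors, and the accuracy metric are illustrative assumptions, not taken from the paper.

    # Sketch of common-task-framework ("outcome reasoning") evaluation.
    # Everything here (data, models, metric) is hypothetical, for illustration only.
    import random

    random.seed(0)

    # Hypothetical task: predict a binary label from a single numeric feature.
    data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(1000))]
    train, test = data[:800], data[800:]  # the test set is frozen and held out

    def predictor_threshold(x):
        # "Black box" A: a simple threshold rule.
        return int(x > 0.5)

    def predictor_coin(x):
        # "Black box" B: ignores the input entirely.
        return random.randint(0, 1)

    def accuracy(predict, test_set):
        # Outcome reasoning: the only thing inspected is held-out performance.
        return sum(predict(x) == y for x, y in test_set) / len(test_set)

    models = {"threshold rule": predictor_threshold, "coin flip": predictor_coin}
    scores = {name: accuracy(fn, test) for name, fn in models.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: held-out accuracy = {score:.3f}")

Note that the leaderboard above ranks the two predictors solely by their test-set score; nothing about how either model works, or whether the test set resembles the deployment setting, enters the comparison. That gap is the kind of limitation the paper examines.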


Categories

Other Statistics
