Reflecting After Learning for Understanding

18 Oct 2019 · Lee Martie, Mohammad Arif Ul Alam, Gaoyuan Zhang, Ryan R. Anderson

Today, image classification is a common way for systems to process visual content. Although neural network approaches to classification have made great progress in reducing error rates, it is not clear what this means for a cognitive system that must make sense of the multiple, competing predictions from its own classifiers. As a step toward addressing this, we present a novel framework that uses meta-reasoning and meta-operations to unify predictions into abstractions, properties, or relationships. Using the framework on images from ImageNet, we demonstrate systems that unify 41% to 46% of predictions in general and 67% to 75% of predictions when the systems can explain their conceptual differences. We also demonstrate a system in "the wild" by feeding live video images through it, showing that it unifies 51% of predictions in general and 69% of predictions when the system can explain their differences conceptually. In a survey given to 24 participants, we found that 87% of the unified predictions describe their corresponding images.
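The abstract does not spell out the meta-operations themselves, so the following is only a minimal sketch of the underlying idea rather than the authors' implementation: since ImageNet labels correspond to WordNet synsets, two competing classifier predictions can be unified into a shared abstraction by taking their lowest common hypernym. The function name unify_predictions and the example label pair below are illustrative assumptions.

# Minimal sketch (an assumption, not the paper's code): unify two competing
# classifier predictions into a shared abstraction via WordNet hypernyms.
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def unify_predictions(label_a, label_b):
    """Return a shared abstraction (lowest common hypernym) for two labels, or None."""
    syns_a = wn.synsets(label_a, pos=wn.NOUN)
    syns_b = wn.synsets(label_b, pos=wn.NOUN)
    if not syns_a or not syns_b:
        return None  # a label has no WordNet entry, so no unification here
    common = syns_a[0].lowest_common_hypernyms(syns_b[0])
    return common[0].name() if common else None

# Two classifiers disagree on the same image; the unified prediction is a
# more abstract concept that covers both labels (here, a cat-level synset).
print(unify_predictions("tabby", "siamese_cat"))

This only illustrates the "abstraction" case; unifying predictions into properties or relationships, as described in the paper, would require additional meta-operations beyond a hypernym lookup.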
