MetaCOG: Learning a Metacognition to Recover What Objects Are Actually There

Humans not only form representations of the world based on what we see, but also learn meta-cognitive representations of how our own vision works. This enables us to recognize when our vision is unreliable (e.g., when we realize that we are experiencing a visual illusion) and to question what we see. Inspired by this human capacity, we present MetaCOG: a model that increases the robustness of object detectors by learning representations of their reliability, and does so without feedback. Specifically, MetaCOG is a hierarchical probabilistic model that expresses a joint distribution over the objects in a 3D scene and the outputs produced by a detector. When paired with an off-the-shelf object detector, MetaCOG takes detections as input and infers the detector's tendencies to miss objects of certain categories and to hallucinate objects that are not actually present, all without access to ground-truth object labels. Paired with three modern neural object detectors, MetaCOG learns useful and accurate meta-cognitive representations, resulting in improved performance on the detection task. Additionally, we show that MetaCOG is robust to varying levels of error in the detections. Our results are a proof of concept for a novel approach to correcting a faulty vision system's errors. The model code, datasets, results, and demos are available at: https://osf.io/8b9qt/?view_only=8c1b1c412c6b4e1697e3c7859be2fce6
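
To make the idea concrete, below is a minimal, hypothetical sketch of how a detector's per-category error rates could be inferred without ground truth. It is not the paper's released implementation (see the OSF link above): it considers a single object category, places weak priors over the detector's miss and hallucination rates, treats object presence in each scene as latent and shared across that scene's frames, and approximates the posterior over the two rates on a grid. All names (`toy_metacog_posterior`, `prior_presence`, etc.) are assumptions made for this illustration.

```python
# Minimal, illustrative sketch of MetaCOG-style inference for ONE object
# category. This is NOT the paper's released implementation (see the OSF
# link above); names and parameter choices are assumptions for this example.
import numpy as np


def toy_metacog_posterior(dets, prior_presence=0.5, grid=60):
    """Grid-approximate the posterior over a detector's miss rate m and
    hallucination rate h, given only its outputs -- no ground-truth labels.

    dets: (num_scenes, num_frames) array of 0/1 detections for one category.
    Object presence is latent per scene and shared across that scene's
    frames, mirroring MetaCOG's use of multiple views of the same 3D scene.
    """
    ms = np.linspace(0.01, 0.99, grid)   # candidate miss rates
    hs = np.linspace(0.01, 0.99, grid)   # candidate hallucination rates
    k = dets.sum(axis=1)                 # detections per scene
    n = dets.shape[1]                    # frames per scene
    log_post = np.zeros((grid, grid))
    for i, m in enumerate(ms):
        for j, h in enumerate(hs):
            # If the object is present, each frame detects it with prob 1 - m;
            # if it is absent, each frame hallucinates it with prob h.
            lik_present = (1 - m) ** k * m ** (n - k)
            lik_absent = h ** k * (1 - h) ** (n - k)
            scene_lik = (prior_presence * lik_present
                         + (1 - prior_presence) * lik_absent)
            # Weak Beta(1, 3) priors on m and h encode the assumption that the
            # detector is better than chance, breaking the symmetry between
            # the "present" and "absent" explanations of the data.
            log_prior = 2 * np.log(1 - m) + 2 * np.log(1 - h)
            log_post[i, j] = log_prior + np.sum(np.log(scene_lik))
    post = np.exp(log_post - log_post.max())
    return ms, hs, post / post.sum()


# Usage: simulate a detector with a 30% miss rate and a 10% hallucination rate.
rng = np.random.default_rng(0)
present = rng.binomial(1, 0.5, size=(100, 1))          # latent ground truth
dets = np.where(present == 1,
                rng.binomial(1, 0.7, size=(100, 5)),   # hits when present
                rng.binomial(1, 0.1, size=(100, 5)))   # hallucinations when absent
ms, hs, post = toy_metacog_posterior(dets)
i, j = np.unravel_index(post.argmax(), post.shape)
print("MAP miss rate:", round(ms[i], 2), "MAP hallucination rate:", round(hs[j], 2))
```

The point of the sketch is that, because the same latent scene must explain several detector outputs, the miss and hallucination rates become identifiable even though ground-truth labels are never observed; the full MetaCOG model additionally reasons jointly over object categories and their locations in the 3D scene.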
